diff --git a/.agents/skills/e2e-cucumber-playwright/SKILL.md b/.agents/skills/e2e-cucumber-playwright/SKILL.md new file mode 100644 index 0000000000..de6b58f26d --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/SKILL.md @@ -0,0 +1,79 @@ +--- +name: e2e-cucumber-playwright +description: Write, update, or review Dify end-to-end tests under `e2e/` that use Cucumber, Gherkin, and Playwright. Use when the task involves `.feature` files, `features/step-definitions/`, `features/support/`, `DifyWorld`, scenario tags, locator/assertion choices, or E2E testing best practices for this repository. +--- + +# Dify E2E Cucumber + Playwright + +Use this skill for Dify's repository-level E2E suite in `e2e/`. Use [`e2e/AGENTS.md`](../../../e2e/AGENTS.md) as the canonical guide for local architecture and conventions, then apply Playwright/Cucumber best practices only where they fit the current suite. + +## Scope + +- Use this skill for `.feature` files, Cucumber step definitions, `DifyWorld`, hooks, tags, and E2E review work under `e2e/`. +- Do not use this skill for Vitest or React Testing Library work under `web/`; use `frontend-testing` instead. +- Do not use this skill for backend test or API review tasks under `api/`. + +## Read Order + +1. Read [`e2e/AGENTS.md`](../../../e2e/AGENTS.md) first. +2. Read only the files directly involved in the task: + - target `.feature` files under `e2e/features/` + - related step files under `e2e/features/step-definitions/` + - `e2e/features/support/hooks.ts` and `e2e/features/support/world.ts` when session lifecycle or shared state matters + - `e2e/scripts/run-cucumber.ts` and `e2e/cucumber.config.ts` when tags or execution flow matter +3. Read [`references/playwright-best-practices.md`](references/playwright-best-practices.md) only when locator, assertion, isolation, or waiting choices are involved. +4. Read [`references/cucumber-best-practices.md`](references/cucumber-best-practices.md) only when scenario wording, step granularity, tags, or expression design are involved. +5. Re-check official docs with Context7 before introducing a new Playwright or Cucumber pattern. + +## Local Rules + +- `e2e/` uses Cucumber for scenarios and Playwright as the browser layer. +- `DifyWorld` is the per-scenario context object. Type `this` as `DifyWorld` and use `async function`, not arrow functions. +- Keep glue organized by capability under `e2e/features/step-definitions/`; use `common/` only for broadly reusable steps. +- Browser session behavior comes from `features/support/hooks.ts`: + - default: authenticated session with shared storage state + - `@unauthenticated`: clean browser context + - `@authenticated`: readability/selective-run tag only unless implementation changes + - `@fresh`: only for `e2e:full*` flows +- Do not import Playwright Test runner patterns that bypass the current Cucumber + `DifyWorld` architecture unless the task is explicitly about changing that architecture. + +## Workflow + +1. Rebuild local context. + - Inspect the target feature area. + - Reuse an existing step when wording and behavior already match. + - Add a new step only for a genuinely new user action or assertion. + - Keep edits close to the current capability folder unless the step is broadly reusable. +2. Write behavior-first scenarios. + - Describe user-observable behavior, not DOM mechanics. + - Keep each scenario focused on one workflow or outcome. + - Keep scenarios independent and re-runnable. +3. Write step definitions in the local style. + - Keep one step to one user-visible action or one assertion. + - Prefer Cucumber Expressions such as `{string}` and `{int}`. + - Scope locators to stable containers when the page has repeated elements. + - Avoid page-object layers or extra helper abstractions unless repeated complexity clearly justifies them. +4. Use Playwright in the local style. + - Prefer user-facing locators: `getByRole`, `getByLabel`, `getByPlaceholder`, `getByText`, then `getByTestId` for explicit contracts. + - Use web-first `expect(...)` assertions. + - Do not use `waitForTimeout`, manual polling, or raw visibility checks when a locator action or retrying assertion already expresses the behavior. +5. Validate narrowly. + - Run the narrowest tagged scenario or flow that exercises the change. + - Run `pnpm -C e2e check`. + - Broaden verification only when the change affects hooks, tags, setup, or shared step semantics. + +## Review Checklist + +- Does the scenario describe behavior rather than implementation? +- Does it fit the current session model, tags, and `DifyWorld` usage? +- Should an existing step be reused instead of adding a new one? +- Are locators user-facing and assertions web-first? +- Does the change introduce hidden coupling across scenarios, tags, or instance state? +- Does it document or implement behavior that differs from the real hooks or configuration? + +Lead findings with correctness, flake risk, and architecture drift. + +## References + +- [`references/playwright-best-practices.md`](references/playwright-best-practices.md) +- [`references/cucumber-best-practices.md`](references/cucumber-best-practices.md) diff --git a/.agents/skills/e2e-cucumber-playwright/agents/openai.yaml b/.agents/skills/e2e-cucumber-playwright/agents/openai.yaml new file mode 100644 index 0000000000..605cce041d --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/agents/openai.yaml @@ -0,0 +1,4 @@ +interface: + display_name: "E2E Cucumber + Playwright" + short_description: "Write and review Dify E2E scenarios." + default_prompt: "Use $e2e-cucumber-playwright to write or review a Dify E2E scenario under e2e/." diff --git a/.agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md b/.agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md new file mode 100644 index 0000000000..d7a1a52852 --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md @@ -0,0 +1,93 @@ +# Cucumber Best Practices For Dify E2E + +Use this reference when writing or reviewing Gherkin scenarios, step definitions, parameter expressions, and step reuse in Dify's `e2e/` suite. + +Official sources: + +- https://cucumber.io/docs/guides/10-minute-tutorial/ +- https://cucumber.io/docs/cucumber/step-definitions/ +- https://cucumber.io/docs/cucumber/cucumber-expressions/ + +## What Matters Most + +### 1. Treat scenarios as executable specifications + +Cucumber scenarios should describe examples of behavior, not test implementation recipes. + +Apply it like this: + +- write what the user does and what should happen +- avoid UI-internal wording such as selector details, DOM structure, or component names +- keep language concrete enough that the scenario reads like living documentation + +### 2. Keep scenarios focused + +A scenario should usually prove one workflow or business outcome. If a scenario wanders across several unrelated behaviors, split it. + +In Dify's suite, this means: + +- one capability-focused scenario per feature path +- no long setup chains when existing bootstrap or reusable steps already cover them +- no hidden dependency on another scenario's side effects + +### 3. Reuse steps, but only when behavior really matches + +Good reuse reduces duplication. Bad reuse hides meaning. + +Prefer reuse when: + +- the user action is genuinely the same +- the expected outcome is genuinely the same +- the wording stays natural across features + +Write a new step when: + +- the behavior is materially different +- reusing the old wording would make the scenario misleading +- a supposedly generic step would become an implementation-detail wrapper + +### 4. Prefer Cucumber Expressions + +Use Cucumber Expressions for parameters unless regex is clearly necessary. + +Common examples: + +- `{string}` for labels, names, and visible text +- `{int}` for counts +- `{float}` for decimal values +- `{word}` only when the value is truly a single token + +Keep expressions readable. If a step needs complicated parsing logic, first ask whether the scenario wording should be simpler. + +### 5. Keep step definitions thin and meaningful + +Step definitions are glue between Gherkin and automation, not a second abstraction language. + +For Dify: + +- type `this` as `DifyWorld` +- use `async function` +- keep each step to one user-visible action or assertion +- rely on `DifyWorld` and existing support code for shared context +- avoid leaking cross-scenario state + +### 6. Use tags intentionally + +Tags should communicate run scope or session semantics, not become ad hoc metadata. + +In Dify's current suite: + +- capability tags group related scenarios +- `@unauthenticated` changes session behavior +- `@authenticated` is descriptive/selective, not a behavior switch by itself +- `@fresh` belongs to reset/full-install flows only + +If a proposed tag implies behavior, verify that hooks or runner configuration actually implement it. + +## Review Questions + +- Does the scenario read like a real example of product behavior? +- Are the steps behavior-oriented instead of implementation-oriented? +- Is a reused step still truthful in this feature? +- Is a new tag documenting real behavior, or inventing semantics that the suite does not implement? +- Would a new reader understand the outcome without opening the step-definition file? diff --git a/.agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md b/.agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md new file mode 100644 index 0000000000..02e763d46b --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md @@ -0,0 +1,96 @@ +# Playwright Best Practices For Dify E2E + +Use this reference when writing or reviewing locator, assertion, isolation, or synchronization logic for Dify's Cucumber-based E2E suite. + +Official sources: + +- https://playwright.dev/docs/best-practices +- https://playwright.dev/docs/locators +- https://playwright.dev/docs/test-assertions +- https://playwright.dev/docs/browser-contexts + +## What Matters Most + +### 1. Keep scenarios isolated + +Playwright's model is built around clean browser contexts so one test does not leak into another. In Dify's suite, that principle maps to per-scenario session setup in `features/support/hooks.ts` and `DifyWorld`. + +Apply it like this: + +- do not depend on another scenario having run first +- do not persist ad hoc scenario state outside `DifyWorld` +- do not couple ordinary scenarios to `@fresh` behavior +- when a flow needs special auth/session semantics, express that through the existing tag model or explicit hook changes + +### 2. Prefer user-facing locators + +Playwright recommends built-in locators that reflect what users perceive on the page. + +Preferred order in this repository: + +1. `getByRole` +2. `getByLabel` +3. `getByPlaceholder` +4. `getByText` +5. `getByTestId` when an explicit test contract is the most stable option + +Avoid raw CSS/XPath selectors unless no stable user-facing contract exists and adding one is not practical. + +Also remember: + +- repeated content usually needs scoping to a stable container +- exact text matching is often too brittle when role/name or label already exists +- `getByTestId` is acceptable when semantics are weak but the contract is intentional + +### 3. Use web-first assertions + +Playwright assertions auto-wait and retry. Prefer them over manual state inspection. + +Prefer: + +- `await expect(page).toHaveURL(...)` +- `await expect(locator).toBeVisible()` +- `await expect(locator).toBeHidden()` +- `await expect(locator).toBeEnabled()` +- `await expect(locator).toHaveText(...)` + +Avoid: + +- `expect(await locator.isVisible()).toBe(true)` +- custom polling loops for DOM state +- `waitForTimeout` as synchronization + +If a condition genuinely needs custom retry logic, use Playwright's polling/assertion tools deliberately and keep that choice local and explicit. + +### 4. Let actions wait for actionability + +Locator actions already wait for the element to be actionable. Do not preface every click/fill with extra timing logic unless the action needs a specific visible/ready assertion for clarity. + +Good pattern: + +- assert a meaningful visible state when that is part of the behavior +- then click/fill/select via locator APIs + +Bad pattern: + +- stack arbitrary waits before every action +- wait on unstable implementation details instead of the visible state the user cares about + +### 5. Match debugging to the current suite + +Playwright's wider ecosystem supports traces and rich debugging tools. Dify's current suite already captures: + +- full-page screenshots +- page HTML +- console errors +- page errors + +Use the existing artifact flow by default. If a task is specifically about improving diagnostics, confirm the change fits the current Cucumber architecture before importing broader Playwright tooling. + +## Review Questions + +- Would this locator survive DOM refactors that do not change user-visible behavior? +- Is this assertion using Playwright's retrying semantics? +- Is any explicit wait masking a real readiness problem? +- Does this code preserve per-scenario isolation? +- Is a new abstraction really needed, or does it bypass the existing `DifyWorld` + step-definition model? diff --git a/.claude/skills/e2e-cucumber-playwright b/.claude/skills/e2e-cucumber-playwright new file mode 120000 index 0000000000..71b0eae34f --- /dev/null +++ b/.claude/skills/e2e-cucumber-playwright @@ -0,0 +1 @@ +../../.agents/skills/e2e-cucumber-playwright \ No newline at end of file diff --git a/.devcontainer/post_create_command.sh b/.devcontainer/post_create_command.sh index b92d4c35a8..7460636824 100755 --- a/.devcontainer/post_create_command.sh +++ b/.devcontainer/post_create_command.sh @@ -7,7 +7,7 @@ cd web && pnpm install pipx install uv echo "alias start-api=\"cd $WORKSPACE_ROOT/api && uv run python -m flask run --host 0.0.0.0 --port=5001 --debug\"" >> ~/.bashrc -echo "alias start-worker=\"cd $WORKSPACE_ROOT/api && uv run python -m celery -A app.celery worker -P threads -c 1 --loglevel INFO -Q dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention\"" >> ~/.bashrc +echo "alias start-worker=\"cd $WORKSPACE_ROOT/api && uv run python -m celery -A app.celery worker -P threads -c 1 --loglevel INFO -Q dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_publisher,trigger_refresh_executor,retention\"" >> ~/.bashrc echo "alias start-web=\"cd $WORKSPACE_ROOT/web && pnpm dev:inspect\"" >> ~/.bashrc echo "alias start-web-prod=\"cd $WORKSPACE_ROOT/web && pnpm build && pnpm start\"" >> ~/.bashrc echo "alias start-containers=\"cd $WORKSPACE_ROOT/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env up -d\"" >> ~/.bashrc diff --git a/.github/dependabot.yml b/.github/dependabot.yml index a183f0b58c..266fa17c29 100644 --- a/.github/dependabot.yml +++ b/.github/dependabot.yml @@ -1,106 +1,6 @@ version: 2 updates: - - package-ecosystem: "pip" - directory: "/api" - open-pull-requests-limit: 10 - schedule: - interval: "weekly" - groups: - flask: - patterns: - - "flask" - - "flask-*" - - "werkzeug" - - "gunicorn" - google: - patterns: - - "google-*" - - "googleapis-*" - opentelemetry: - patterns: - - "opentelemetry-*" - pydantic: - patterns: - - "pydantic" - - "pydantic-*" - llm: - patterns: - - "langfuse" - - "langsmith" - - "litellm" - - "mlflow*" - - "opik" - - "weave*" - - "arize*" - - "tiktoken" - - "transformers" - database: - patterns: - - "sqlalchemy" - - "psycopg2*" - - "psycogreen" - - "redis*" - - "alembic*" - storage: - patterns: - - "boto3*" - - "botocore*" - - "azure-*" - - "bce-*" - - "cos-python-*" - - "esdk-obs-*" - - "google-cloud-storage" - - "opendal" - - "oss2" - - "supabase*" - - "tos*" - vdb: - patterns: - - "alibabacloud*" - - "chromadb" - - "clickhouse-*" - - "clickzetta-*" - - "couchbase" - - "elasticsearch" - - "opensearch-py" - - "oracledb" - - "pgvect*" - - "pymilvus" - - "pymochow" - - "pyobvector" - - "qdrant-client" - - "intersystems-*" - - "tablestore" - - "tcvectordb" - - "tidb-vector" - - "upstash-*" - - "volcengine-*" - - "weaviate-*" - - "xinference-*" - - "mo-vector" - - "mysql-connector-*" - dev: - patterns: - - "coverage" - - "dotenv-linter" - - "faker" - - "lxml-stubs" - - "basedpyright" - - "ruff" - - "pytest*" - - "types-*" - - "boto3-stubs" - - "hypothesis" - - "pandas-stubs" - - "scipy-stubs" - - "import-linter" - - "celery-types" - - "mypy*" - - "pyrefly" - python-packages: - patterns: - - "*" - package-ecosystem: "uv" directory: "/api" open-pull-requests-limit: 10 diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index a069b6cbc7..1e848612ec 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -7,6 +7,7 @@ ## Summary + ## Screenshots @@ -17,7 +18,7 @@ ## Checklist - [ ] This change requires a documentation update, included: [Dify Document](https://github.com/langgenius/dify-docs) -- [x] I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!) -- [x] I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change. -- [x] I've updated the documentation accordingly. -- [x] I ran `make lint` and `make type-check` (backend) and `cd web && pnpm exec vp staged` (frontend) to appease the lint gods +- [ ] I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!) +- [ ] I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change. +- [ ] I've updated the documentation accordingly. +- [ ] I ran `make lint && make type-check` (backend) and `cd web && pnpm exec vp staged` (frontend) to appease the lint gods diff --git a/.github/scripts/generate-i18n-changes.mjs b/.github/scripts/generate-i18n-changes.mjs new file mode 100644 index 0000000000..3d25115ac3 --- /dev/null +++ b/.github/scripts/generate-i18n-changes.mjs @@ -0,0 +1,82 @@ +import { execFileSync } from 'node:child_process' +import fs from 'node:fs' +import path from 'node:path' + +const repoRoot = process.cwd() +const baseSha = process.env.BASE_SHA || '' +const headSha = process.env.HEAD_SHA || '' +const files = (process.env.CHANGED_FILES || '').split(/\s+/).filter(Boolean) +const outputPath = process.env.I18N_CHANGES_OUTPUT_PATH || '/tmp/i18n-changes.json' + +const englishPath = fileStem => path.join(repoRoot, 'web', 'i18n', 'en-US', `${fileStem}.json`) + +const readCurrentJson = (fileStem) => { + const filePath = englishPath(fileStem) + if (!fs.existsSync(filePath)) + return null + + return JSON.parse(fs.readFileSync(filePath, 'utf8')) +} + +const readBaseJson = (fileStem) => { + if (!baseSha) + return null + + try { + const relativePath = `web/i18n/en-US/${fileStem}.json` + const content = execFileSync('git', ['show', `${baseSha}:${relativePath}`], { encoding: 'utf8' }) + return JSON.parse(content) + } + catch { + return null + } +} + +const compareJson = (beforeValue, afterValue) => JSON.stringify(beforeValue) === JSON.stringify(afterValue) + +const changes = {} + +for (const fileStem of files) { + const currentJson = readCurrentJson(fileStem) + const beforeJson = readBaseJson(fileStem) || {} + const afterJson = currentJson || {} + const added = {} + const updated = {} + const deleted = [] + + for (const [key, value] of Object.entries(afterJson)) { + if (!(key in beforeJson)) { + added[key] = value + continue + } + + if (!compareJson(beforeJson[key], value)) { + updated[key] = { + before: beforeJson[key], + after: value, + } + } + } + + for (const key of Object.keys(beforeJson)) { + if (!(key in afterJson)) + deleted.push(key) + } + + changes[fileStem] = { + fileDeleted: currentJson === null, + added, + updated, + deleted, + } +} + +fs.writeFileSync( + outputPath, + JSON.stringify({ + baseSha, + headSha, + files, + changes, + }) +) diff --git a/.github/workflows/api-tests.yml b/.github/workflows/api-tests.yml index cd967b76cf..fd910531db 100644 --- a/.github/workflows/api-tests.yml +++ b/.github/workflows/api-tests.yml @@ -54,7 +54,7 @@ jobs: run: uv run --project api bash dev/pytest/pytest_unit_tests.sh - name: Upload unit coverage data - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: api-coverage-unit path: coverage-unit @@ -129,7 +129,7 @@ jobs: api/tests/test_containers_integration_tests - name: Upload integration coverage data - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: api-coverage-integration path: coverage-integration diff --git a/.github/workflows/autofix.yml b/.github/workflows/autofix.yml index 772ab8dd56..3946834e09 100644 --- a/.github/workflows/autofix.yml +++ b/.github/workflows/autofix.yml @@ -120,7 +120,6 @@ jobs: - name: ESLint autofix if: github.event_name != 'merge_group' && steps.web-changes.outputs.any_changed == 'true' run: | - cd web vp exec eslint --concurrency=2 --prune-suppressions --quiet || true - if: github.event_name != 'merge_group' diff --git a/.github/workflows/build-push.yml b/.github/workflows/build-push.yml index 79ecdb5938..5f16fc6927 100644 --- a/.github/workflows/build-push.yml +++ b/.github/workflows/build-push.yml @@ -81,7 +81,7 @@ jobs: - name: Build Docker image id: build - uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0 + uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0 with: context: ${{ matrix.build_context }} file: ${{ matrix.file }} @@ -101,7 +101,7 @@ jobs: touch "/tmp/digests/${sanitized_digest}" - name: Upload digest - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: digests-${{ matrix.artifact_context }}-${{ env.PLATFORM_PAIR }} path: /tmp/digests/* diff --git a/.github/workflows/docker-build.yml b/.github/workflows/docker-build.yml index cd9d69d871..5752076c36 100644 --- a/.github/workflows/docker-build.yml +++ b/.github/workflows/docker-build.yml @@ -6,14 +6,7 @@ on: - "main" paths: - api/Dockerfile - - web/docker/** - web/Dockerfile - - packages/** - - package.json - - pnpm-lock.yaml - - pnpm-workspace.yaml - - .npmrc - - .nvmrc concurrency: group: docker-build-${{ github.head_ref || github.run_id }} @@ -50,7 +43,7 @@ jobs: uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0 - name: Build Docker Image - uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0 + uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0 with: push: false context: ${{ matrix.context }} diff --git a/.github/workflows/main-ci.yml b/.github/workflows/main-ci.yml index 59c38b6e7e..ba36b5c07a 100644 --- a/.github/workflows/main-ci.yml +++ b/.github/workflows/main-ci.yml @@ -92,6 +92,7 @@ jobs: vdb: - 'api/core/rag/datasource/**' - 'api/tests/integration_tests/vdb/**' + - 'api/providers/vdb/*/tests/**' - '.github/workflows/vdb-tests.yml' - '.github/workflows/expose_service_ports.sh' - 'docker/.env.example' diff --git a/.github/workflows/pyrefly-diff-comment.yml b/.github/workflows/pyrefly-diff-comment.yml index 0278e1e0d3..eefb1ebbb9 100644 --- a/.github/workflows/pyrefly-diff-comment.yml +++ b/.github/workflows/pyrefly-diff-comment.yml @@ -21,7 +21,7 @@ jobs: if: ${{ github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.pull_requests[0].head.repo.full_name != github.repository }} steps: - name: Download pyrefly diff artifact - uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0 + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 with: github-token: ${{ secrets.GITHUB_TOKEN }} script: | @@ -49,7 +49,7 @@ jobs: run: unzip -o pyrefly_diff.zip - name: Post comment - uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0 + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 with: github-token: ${{ secrets.GITHUB_TOKEN }} script: | diff --git a/.github/workflows/pyrefly-diff.yml b/.github/workflows/pyrefly-diff.yml index 8623d35b04..ac3732579c 100644 --- a/.github/workflows/pyrefly-diff.yml +++ b/.github/workflows/pyrefly-diff.yml @@ -66,7 +66,7 @@ jobs: echo ${{ github.event.pull_request.number }} > pr_number.txt - name: Upload pyrefly diff - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: pyrefly_diff path: | @@ -75,7 +75,7 @@ jobs: - name: Comment PR with pyrefly diff if: ${{ github.event.pull_request.head.repo.full_name == github.repository && steps.line_count_check.outputs.same == 'false' }} - uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0 + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 with: github-token: ${{ secrets.GITHUB_TOKEN }} script: | diff --git a/.github/workflows/pyrefly-type-coverage-comment.yml b/.github/workflows/pyrefly-type-coverage-comment.yml new file mode 100644 index 0000000000..974da99aad --- /dev/null +++ b/.github/workflows/pyrefly-type-coverage-comment.yml @@ -0,0 +1,118 @@ +name: Comment with Pyrefly Type Coverage + +on: + workflow_run: + workflows: + - Pyrefly Type Coverage + types: + - completed + +permissions: {} + +jobs: + comment: + name: Comment PR with type coverage + runs-on: ubuntu-latest + permissions: + actions: read + contents: read + issues: write + pull-requests: write + if: ${{ github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.pull_requests[0].head.repo.full_name != github.repository }} + steps: + - name: Checkout default branch (trusted code) + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Setup Python & UV + uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0 + with: + enable-cache: true + + - name: Install dependencies + run: uv sync --project api --dev + + - name: Download type coverage artifact + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + script: | + const fs = require('fs'); + const artifacts = await github.rest.actions.listWorkflowRunArtifacts({ + owner: context.repo.owner, + repo: context.repo.repo, + run_id: ${{ github.event.workflow_run.id }}, + }); + const match = artifacts.data.artifacts.find((artifact) => + artifact.name === 'pyrefly_type_coverage' + ); + if (!match) { + throw new Error('pyrefly_type_coverage artifact not found'); + } + const download = await github.rest.actions.downloadArtifact({ + owner: context.repo.owner, + repo: context.repo.repo, + artifact_id: match.id, + archive_format: 'zip', + }); + fs.writeFileSync('pyrefly_type_coverage.zip', Buffer.from(download.data)); + + - name: Unzip artifact + run: unzip -o pyrefly_type_coverage.zip + + - name: Render coverage markdown from structured data + id: render + run: | + comment_body="$(uv run --directory api python libs/pyrefly_type_coverage.py \ + --base base_report.json \ + < pr_report.json)" + + { + echo "### Pyrefly Type Coverage" + echo "" + echo "$comment_body" + } > /tmp/type_coverage_comment.md + + - name: Post comment + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + script: | + const fs = require('fs'); + const body = fs.readFileSync('/tmp/type_coverage_comment.md', { encoding: 'utf8' }); + let prNumber = null; + try { + prNumber = parseInt(fs.readFileSync('pr_number.txt', { encoding: 'utf8' }), 10); + } catch (err) { + const prs = context.payload.workflow_run.pull_requests || []; + if (prs.length > 0 && prs[0].number) { + prNumber = prs[0].number; + } + } + if (!prNumber) { + throw new Error('PR number not found in artifact or workflow_run payload'); + } + + // Update existing comment if one exists, otherwise create new + const { data: comments } = await github.rest.issues.listComments({ + issue_number: prNumber, + owner: context.repo.owner, + repo: context.repo.repo, + }); + const marker = '### Pyrefly Type Coverage'; + const existing = comments.find(c => c.body.startsWith(marker)); + + if (existing) { + await github.rest.issues.updateComment({ + comment_id: existing.id, + owner: context.repo.owner, + repo: context.repo.repo, + body, + }); + } else { + await github.rest.issues.createComment({ + issue_number: prNumber, + owner: context.repo.owner, + repo: context.repo.repo, + body, + }); + } diff --git a/.github/workflows/pyrefly-type-coverage.yml b/.github/workflows/pyrefly-type-coverage.yml new file mode 100644 index 0000000000..c795c32e31 --- /dev/null +++ b/.github/workflows/pyrefly-type-coverage.yml @@ -0,0 +1,120 @@ +name: Pyrefly Type Coverage + +on: + pull_request: + paths: + - 'api/**/*.py' + +permissions: + contents: read + +jobs: + pyrefly-type-coverage: + runs-on: ubuntu-latest + permissions: + contents: read + issues: write + pull-requests: write + steps: + - name: Checkout PR branch + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + fetch-depth: 0 + + - name: Setup Python & UV + uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0 + with: + enable-cache: true + + - name: Install dependencies + run: uv sync --project api --dev + + - name: Run pyrefly report on PR branch + run: | + uv run --directory api --dev pyrefly report 2>/dev/null > /tmp/pyrefly_report_pr.tmp && \ + mv /tmp/pyrefly_report_pr.tmp /tmp/pyrefly_report_pr.json || \ + echo '{}' > /tmp/pyrefly_report_pr.json + + - name: Save helper script from base branch + run: | + git show ${{ github.event.pull_request.base.sha }}:api/libs/pyrefly_type_coverage.py > /tmp/pyrefly_type_coverage.py 2>/dev/null \ + || cp api/libs/pyrefly_type_coverage.py /tmp/pyrefly_type_coverage.py + + - name: Checkout base branch + run: git checkout ${{ github.base_ref }} + + - name: Run pyrefly report on base branch + run: | + uv run --directory api --dev pyrefly report 2>/dev/null > /tmp/pyrefly_report_base.tmp && \ + mv /tmp/pyrefly_report_base.tmp /tmp/pyrefly_report_base.json || \ + echo '{}' > /tmp/pyrefly_report_base.json + + - name: Generate coverage comparison + id: coverage + run: | + comment_body="$(uv run --directory api python /tmp/pyrefly_type_coverage.py \ + --base /tmp/pyrefly_report_base.json \ + < /tmp/pyrefly_report_pr.json)" + + { + echo "### Pyrefly Type Coverage" + echo "" + echo "$comment_body" + } | tee -a "$GITHUB_STEP_SUMMARY" > /tmp/type_coverage_comment.md + + # Save structured data for the fork-PR comment workflow + cp /tmp/pyrefly_report_pr.json pr_report.json + cp /tmp/pyrefly_report_base.json base_report.json + + - name: Save PR number + run: | + echo ${{ github.event.pull_request.number }} > pr_number.txt + + - name: Upload type coverage artifact + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: pyrefly_type_coverage + path: | + pr_report.json + base_report.json + pr_number.txt + + - name: Comment PR with type coverage + if: ${{ github.event.pull_request.head.repo.full_name == github.repository }} + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + script: | + const fs = require('fs'); + const marker = '### Pyrefly Type Coverage'; + let body; + try { + body = fs.readFileSync('/tmp/type_coverage_comment.md', { encoding: 'utf8' }); + } catch { + body = `${marker}\n\n_Coverage report unavailable._`; + } + const prNumber = context.payload.pull_request.number; + + // Update existing comment if one exists, otherwise create new + const { data: comments } = await github.rest.issues.listComments({ + issue_number: prNumber, + owner: context.repo.owner, + repo: context.repo.repo, + }); + const existing = comments.find(c => c.body.startsWith(marker)); + + if (existing) { + await github.rest.issues.updateComment({ + comment_id: existing.id, + owner: context.repo.owner, + repo: context.repo.repo, + body, + }); + } else { + await github.rest.issues.createComment({ + issue_number: prNumber, + owner: context.repo.owner, + repo: context.repo.repo, + body, + }); + } diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml index 5cf52daed2..c74f4a670a 100644 --- a/.github/workflows/stale.yml +++ b/.github/workflows/stale.yml @@ -23,8 +23,8 @@ jobs: days-before-issue-stale: 15 days-before-issue-close: 3 repo-token: ${{ secrets.GITHUB_TOKEN }} - stale-issue-message: "Close due to it's no longer active, if you have any questions, you can reopen it." - stale-pr-message: "Close due to it's no longer active, if you have any questions, you can reopen it." + stale-issue-message: "Closed due to inactivity. If you have any questions, you can reopen it." + stale-pr-message: "Closed due to inactivity. If you have any questions, you can reopen it." stale-issue-label: 'no-issue-activity' stale-pr-label: 'no-pr-activity' - any-of-labels: 'duplicate,question,invalid,wontfix,no-issue-activity,no-pr-activity,enhancement,cant-reproduce,help-wanted' + any-of-labels: '🌚 invalid,🙋‍♂️ question,wont-fix,no-issue-activity,no-pr-activity,💪 enhancement,🤔 cant-reproduce,🙏 help wanted' diff --git a/.github/workflows/style.yml b/.github/workflows/style.yml index c32fc9d0cb..29f5b090f8 100644 --- a/.github/workflows/style.yml +++ b/.github/workflows/style.yml @@ -77,6 +77,8 @@ jobs: with: files: | web/** + e2e/** + sdks/nodejs-client/** packages/** package.json pnpm-lock.yaml @@ -95,14 +97,14 @@ jobs: id: eslint-cache-restore uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4 with: - path: web/.eslintcache - key: ${{ runner.os }}-web-eslint-${{ hashFiles('web/package.json', 'pnpm-lock.yaml', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-${{ github.sha }} + path: .eslintcache + key: ${{ runner.os }}-eslint-${{ hashFiles('pnpm-lock.yaml', 'eslint.config.mjs', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-${{ github.sha }} restore-keys: | - ${{ runner.os }}-web-eslint-${{ hashFiles('web/package.json', 'pnpm-lock.yaml', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}- + ${{ runner.os }}-eslint-${{ hashFiles('pnpm-lock.yaml', 'eslint.config.mjs', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}- - name: Web style check if: steps.changed-files.outputs.any_changed == 'true' - working-directory: ./web + working-directory: . run: vp run lint:ci - name: Web tsslint @@ -112,7 +114,7 @@ jobs: - name: Web type check if: steps.changed-files.outputs.any_changed == 'true' - working-directory: ./web + working-directory: . run: vp run type-check - name: Web dead code check @@ -124,7 +126,7 @@ jobs: if: steps.changed-files.outputs.any_changed == 'true' && success() && steps.eslint-cache-restore.outputs.cache-hit != 'true' uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4 with: - path: web/.eslintcache + path: .eslintcache key: ${{ steps.eslint-cache-restore.outputs.cache-primary-key }} superlinter: diff --git a/.github/workflows/translate-i18n-claude.yml b/.github/workflows/translate-i18n-claude.yml index a813c87cec..541200293d 100644 --- a/.github/workflows/translate-i18n-claude.yml +++ b/.github/workflows/translate-i18n-claude.yml @@ -68,89 +68,7 @@ jobs: " web/i18n-config/languages.ts | sed 's/[[:space:]]*$//') generate_changes_json() { - node <<'NODE' - const { execFileSync } = require('node:child_process') - const fs = require('node:fs') - const path = require('node:path') - - const repoRoot = process.cwd() - const baseSha = process.env.BASE_SHA || '' - const headSha = process.env.HEAD_SHA || '' - const files = (process.env.CHANGED_FILES || '').split(/\s+/).filter(Boolean) - - const englishPath = fileStem => path.join(repoRoot, 'web', 'i18n', 'en-US', `${fileStem}.json`) - - const readCurrentJson = (fileStem) => { - const filePath = englishPath(fileStem) - if (!fs.existsSync(filePath)) - return null - - return JSON.parse(fs.readFileSync(filePath, 'utf8')) - } - - const readBaseJson = (fileStem) => { - if (!baseSha) - return null - - try { - const relativePath = `web/i18n/en-US/${fileStem}.json` - const content = execFileSync('git', ['show', `${baseSha}:${relativePath}`], { encoding: 'utf8' }) - return JSON.parse(content) - } - catch (error) { - return null - } - } - - const compareJson = (beforeValue, afterValue) => JSON.stringify(beforeValue) === JSON.stringify(afterValue) - - const changes = {} - - for (const fileStem of files) { - const currentJson = readCurrentJson(fileStem) - const beforeJson = readBaseJson(fileStem) || {} - const afterJson = currentJson || {} - const added = {} - const updated = {} - const deleted = [] - - for (const [key, value] of Object.entries(afterJson)) { - if (!(key in beforeJson)) { - added[key] = value - continue - } - - if (!compareJson(beforeJson[key], value)) { - updated[key] = { - before: beforeJson[key], - after: value, - } - } - } - - for (const key of Object.keys(beforeJson)) { - if (!(key in afterJson)) - deleted.push(key) - } - - changes[fileStem] = { - fileDeleted: currentJson === null, - added, - updated, - deleted, - } - } - - fs.writeFileSync( - '/tmp/i18n-changes.json', - JSON.stringify({ - baseSha, - headSha, - files, - changes, - }) - ) - NODE + node .github/scripts/generate-i18n-changes.mjs } if [ "${{ github.event_name }}" = "repository_dispatch" ]; then @@ -240,7 +158,7 @@ jobs: - name: Run Claude Code for Translation Sync if: steps.context.outputs.CHANGED_FILES != '' - uses: anthropics/claude-code-action@6e2bd52842c65e914eba5c8badd17560bd26b5de # v1.0.89 + uses: anthropics/claude-code-action@b47fd721da662d48c5680e154ad16a73ed74d2e0 # v1.0.93 with: anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} github_token: ${{ secrets.GITHUB_TOKEN }} @@ -270,7 +188,7 @@ jobs: Tool rules: - Use Read for repository files. - Use Edit for JSON updates. - - Use Bash only for `pnpm`. + - Use Bash only for `vp`. - Do not use Bash for `git`, `gh`, or branch management. Required execution plan: @@ -292,7 +210,7 @@ jobs: - Read the current English JSON file for any file that still exists so wording, placeholders, and surrounding terminology stay accurate. - If `Structured change set available` is `false`, treat this as a scoped full sync and use the current English files plus scoped checks as the source of truth. 4. Run a scoped pre-check before editing: - - `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}` + - `vp run dify-web#i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}` - Use this command as the source of truth for missing and extra keys inside the current scope. 5. Apply translations. - For every target language and scoped file: @@ -300,19 +218,19 @@ jobs: - If the locale file does not exist yet, create it with `Write` and then continue with `Edit` as needed. - ADD missing keys. - UPDATE stale translations when the English value changed. - - DELETE removed keys. Prefer `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }} --auto-remove` for extra keys so deletions stay in scope. + - DELETE removed keys. Prefer `vp run dify-web#i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }} --auto-remove` for extra keys so deletions stay in scope. - Preserve placeholders exactly: `{{variable}}`, `${variable}`, HTML tags, component tags, and variable names. - Match the existing terminology and register used by each locale. - Prefer one Edit per file when stable, but prioritize correctness over batching. 6. Verify only the edited files. - - Run `pnpm --dir ${{ github.workspace }}/web lint:fix --quiet -- ` - - Run `pnpm --dir ${{ github.workspace }}/web run i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}` + - Run `vp run dify-web#lint:fix --quiet -- ` + - Run `vp run dify-web#i18n:check ${{ steps.context.outputs.FILE_ARGS }} ${{ steps.context.outputs.LANG_ARGS }}` - If verification fails, fix the remaining problems before continuing. 7. Stop after the scoped locale files are updated and verification passes. - Do not create branches, commits, or pull requests. claude_args: | --max-turns 120 - --allowedTools "Read,Write,Edit,Bash(pnpm *),Bash(pnpm:*),Glob,Grep" + --allowedTools "Read,Write,Edit,Bash(vp *),Bash(vp:*),Glob,Grep" - name: Prepare branch metadata id: pr_meta @@ -354,6 +272,7 @@ jobs: - name: Create or update translation PR if: steps.pr_meta.outputs.has_changes == 'true' env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} BRANCH_NAME: ${{ steps.pr_meta.outputs.branch_name }} FILES_IN_SCOPE: ${{ steps.context.outputs.CHANGED_FILES }} TARGET_LANGS: ${{ steps.context.outputs.TARGET_LANGS }} @@ -402,8 +321,8 @@ jobs: '', '## Verification', '', - `- \`pnpm --dir web run i18n:check --file ${process.env.FILES_IN_SCOPE} --lang ${process.env.TARGET_LANGS}\``, - `- \`pnpm --dir web lint:fix --quiet -- \``, + `- \`vp run dify-web#i18n:check --file ${process.env.FILES_IN_SCOPE} --lang ${process.env.TARGET_LANGS}\``, + `- \`vp run dify-web#lint:fix --quiet -- \``, '', '## Notes', '', diff --git a/.github/workflows/trigger-i18n-sync.yml b/.github/workflows/trigger-i18n-sync.yml index a1ca42b26e..790ea9126d 100644 --- a/.github/workflows/trigger-i18n-sync.yml +++ b/.github/workflows/trigger-i18n-sync.yml @@ -42,88 +42,7 @@ jobs: fi export BASE_SHA HEAD_SHA CHANGED_FILES - node <<'NODE' - const { execFileSync } = require('node:child_process') - const fs = require('node:fs') - const path = require('node:path') - - const repoRoot = process.cwd() - const baseSha = process.env.BASE_SHA || '' - const headSha = process.env.HEAD_SHA || '' - const files = (process.env.CHANGED_FILES || '').split(/\s+/).filter(Boolean) - - const englishPath = fileStem => path.join(repoRoot, 'web', 'i18n', 'en-US', `${fileStem}.json`) - - const readCurrentJson = (fileStem) => { - const filePath = englishPath(fileStem) - if (!fs.existsSync(filePath)) - return null - - return JSON.parse(fs.readFileSync(filePath, 'utf8')) - } - - const readBaseJson = (fileStem) => { - if (!baseSha) - return null - - try { - const relativePath = `web/i18n/en-US/${fileStem}.json` - const content = execFileSync('git', ['show', `${baseSha}:${relativePath}`], { encoding: 'utf8' }) - return JSON.parse(content) - } - catch (error) { - return null - } - } - - const compareJson = (beforeValue, afterValue) => JSON.stringify(beforeValue) === JSON.stringify(afterValue) - - const changes = {} - - for (const fileStem of files) { - const beforeJson = readBaseJson(fileStem) || {} - const afterJson = readCurrentJson(fileStem) || {} - const added = {} - const updated = {} - const deleted = [] - - for (const [key, value] of Object.entries(afterJson)) { - if (!(key in beforeJson)) { - added[key] = value - continue - } - - if (!compareJson(beforeJson[key], value)) { - updated[key] = { - before: beforeJson[key], - after: value, - } - } - } - - for (const key of Object.keys(beforeJson)) { - if (!(key in afterJson)) - deleted.push(key) - } - - changes[fileStem] = { - fileDeleted: readCurrentJson(fileStem) === null, - added, - updated, - deleted, - } - } - - fs.writeFileSync( - '/tmp/i18n-changes.json', - JSON.stringify({ - baseSha, - headSha, - files, - changes, - }) - ) - NODE + node .github/scripts/generate-i18n-changes.mjs if [ -n "$CHANGED_FILES" ]; then echo "has_changes=true" >> "$GITHUB_OUTPUT" @@ -137,7 +56,7 @@ jobs: - name: Trigger i18n sync workflow if: steps.detect.outputs.has_changes == 'true' - uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0 + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 env: BASE_SHA: ${{ steps.detect.outputs.base_sha }} HEAD_SHA: ${{ steps.detect.outputs.head_sha }} diff --git a/.github/workflows/vdb-tests-full.yml b/.github/workflows/vdb-tests-full.yml index 72b3ea9aac..f0def8fe7a 100644 --- a/.github/workflows/vdb-tests-full.yml +++ b/.github/workflows/vdb-tests-full.yml @@ -89,7 +89,7 @@ jobs: cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env # - name: Check VDB Ready (TiDB) -# run: uv run --project api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py +# run: uv run --project api python api/providers/vdb/tidb-vector/tests/integration_tests/check_tiflash_ready.py - name: Test Vector Stores run: uv run --project api bash dev/pytest/pytest_vdb.sh diff --git a/.github/workflows/vdb-tests.yml b/.github/workflows/vdb-tests.yml index 47ec70f603..f3966f15b9 100644 --- a/.github/workflows/vdb-tests.yml +++ b/.github/workflows/vdb-tests.yml @@ -81,12 +81,12 @@ jobs: cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env # - name: Check VDB Ready (TiDB) -# run: uv run --project api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py +# run: uv run --project api python api/providers/vdb/tidb-vector/tests/integration_tests/check_tiflash_ready.py - name: Test Vector Stores run: | uv run --project api pytest --timeout "${PYTEST_TIMEOUT:-180}" \ - api/tests/integration_tests/vdb/chroma \ - api/tests/integration_tests/vdb/pgvector \ - api/tests/integration_tests/vdb/qdrant \ - api/tests/integration_tests/vdb/weaviate + api/providers/vdb/vdb-chroma/tests/integration_tests \ + api/providers/vdb/vdb-pgvector/tests/integration_tests \ + api/providers/vdb/vdb-qdrant/tests/integration_tests \ + api/providers/vdb/vdb-weaviate/tests/integration_tests diff --git a/.github/workflows/web-e2e.yml b/.github/workflows/web-e2e.yml index eb752619be..10dc31bde8 100644 --- a/.github/workflows/web-e2e.yml +++ b/.github/workflows/web-e2e.yml @@ -53,7 +53,7 @@ jobs: - name: Upload Cucumber report if: ${{ !cancelled() }} - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: cucumber-report path: e2e/cucumber-report @@ -61,7 +61,7 @@ jobs: - name: Upload E2E logs if: ${{ !cancelled() }} - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: e2e-logs path: e2e/.logs diff --git a/.github/workflows/web-tests.yml b/.github/workflows/web-tests.yml index 3c36335e79..f3ab4c62c7 100644 --- a/.github/workflows/web-tests.yml +++ b/.github/workflows/web-tests.yml @@ -43,7 +43,7 @@ jobs: - name: Upload blob report if: ${{ !cancelled() }} - uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 with: name: blob-report-${{ matrix.shardIndex }} path: web/.vitest-reports/* diff --git a/.gitignore b/.gitignore index 53dea88899..3493a7c756 100644 --- a/.gitignore +++ b/.gitignore @@ -203,6 +203,7 @@ sdks/python-client/dify_client.egg-info .vscode/* !.vscode/launch.json.template +!.vscode/settings.example.json !.vscode/README.md api/.vscode # vscode Code History Extension @@ -242,3 +243,5 @@ scripts/stress-test/reports/ # Code Agent Folder .qoder/* + +.eslintcache diff --git a/.vite-hooks/pre-commit b/.vite-hooks/pre-commit index db5c504606..d48381bce2 100755 --- a/.vite-hooks/pre-commit +++ b/.vite-hooks/pre-commit @@ -56,44 +56,9 @@ if $api_modified; then fi fi -if $web_modified; then - if $skip_web_checks; then - echo "Git operation in progress, skipping web checks" - exit 0 - fi - - echo "Running ESLint on web module" - - if git diff --cached --quiet -- 'web/**/*.ts' 'web/**/*.tsx'; then - web_ts_modified=false - else - ts_diff_status=$? - if [ $ts_diff_status -eq 1 ]; then - web_ts_modified=true - else - echo "Unable to determine staged TypeScript changes (git exit code: $ts_diff_status)." - exit $ts_diff_status - fi - fi - - cd ./web || exit 1 - vp staged - - if $web_ts_modified; then - echo "Running TypeScript type-check:tsgo" - if ! pnpm run type-check:tsgo; then - echo "Type check failed. Please run 'pnpm run type-check:tsgo' to fix the errors." - exit 1 - fi - else - echo "No staged TypeScript changes detected, skipping type-check:tsgo" - fi - - echo "Running knip" - if ! pnpm run knip; then - echo "Knip check failed. Please run 'pnpm run knip' to fix the errors." - exit 1 - fi - - cd ../ +if $skip_web_checks; then + echo "Git operation in progress, skipping web checks" + exit 0 fi + +vp staged diff --git a/.vscode/launch.json.template b/.vscode/launch.json.template index c3e2c50c52..2611b75c6c 100644 --- a/.vscode/launch.json.template +++ b/.vscode/launch.json.template @@ -2,21 +2,10 @@ "version": "0.2.0", "configurations": [ { - "name": "Python: Flask API", + "name": "Python: API (gevent)", "type": "debugpy", "request": "launch", - "module": "flask", - "env": { - "FLASK_APP": "app.py", - "FLASK_ENV": "development" - }, - "args": [ - "run", - "--host=0.0.0.0", - "--port=5001", - "--no-debugger", - "--no-reload" - ], + "program": "${workspaceFolder}/api/app.py", "jinja": true, "justMyCode": true, "cwd": "${workspaceFolder}/api", diff --git a/web/.vscode/settings.example.json b/.vscode/settings.example.json similarity index 86% rename from web/.vscode/settings.example.json rename to .vscode/settings.example.json index 4b356f5b7a..7cdbc51a3b 100644 --- a/web/.vscode/settings.example.json +++ b/.vscode/settings.example.json @@ -1,12 +1,16 @@ { - // Disable the default formatter, use eslint instead - "prettier.enable": false, - "editor.formatOnSave": false, + "cucumber.features": [ + "e2e/features/**/*.feature", + ], + "cucumber.glue": [ + "e2e/features/**/*.ts", + ], + + "tailwindCSS.experimental.configFile": "web/app/styles/globals.css", // Auto fix "editor.codeActionsOnSave": { "source.fixAll.eslint": "explicit", - "source.organizeImports": "never" }, // Silent the stylistic rules in your IDE, but still auto fix them diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 775401bfa5..d7f007af67 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -97,8 +97,3 @@ Feel free to reach out if you encounter any issues during the setup process. ## Getting Help If you ever get stuck or get a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/8Tpq4AcN9c) for a quick chat. - -## Automated Agent Contributions - -> [!NOTE] -> If you are an automated agent, just add 🤖🤖🤖 to the end of the PR title to opt-in. diff --git a/api/.env.example b/api/.env.example index c6541731e6..7455d4a0e9 100644 --- a/api/.env.example +++ b/api/.env.example @@ -33,6 +33,9 @@ TRIGGER_URL=http://localhost:5001 # The time in seconds after the signature is rejected FILES_ACCESS_TIMEOUT=300 +# Collaboration mode toggle +ENABLE_COLLABORATION_MODE=false + # Access token expiration time in minutes ACCESS_TOKEN_EXPIRE_MINUTES=60 @@ -57,6 +60,9 @@ REDIS_SSL_CERTFILE= REDIS_SSL_KEYFILE= # Path to client private key file for SSL authentication REDIS_DB=0 +# Optional global prefix for Redis keys, topics, streams, and Celery Redis transport artifacts. +# Leave empty to preserve current unprefixed behavior. +REDIS_KEY_PREFIX= # redis Sentinel configuration. REDIS_USE_SENTINEL=false @@ -71,6 +77,13 @@ REDIS_USE_CLUSTERS=false REDIS_CLUSTERS= REDIS_CLUSTERS_PASSWORD= +REDIS_RETRY_RETRIES=3 +REDIS_RETRY_BACKOFF_BASE=1.0 +REDIS_RETRY_BACKOFF_CAP=10.0 +REDIS_SOCKET_TIMEOUT=5.0 +REDIS_SOCKET_CONNECT_TIMEOUT=5.0 +REDIS_HEALTH_CHECK_INTERVAL=30 + # celery configuration CELERY_BROKER_URL=redis://:difyai123456@localhost:${REDIS_PORT}/1 CELERY_BACKEND=redis @@ -102,6 +115,7 @@ S3_BUCKET_NAME=your-bucket-name S3_ACCESS_KEY=your-access-key S3_SECRET_KEY=your-secret-key S3_REGION=your-region +S3_ADDRESS_STYLE=auto # Workflow run and Conversation archive storage (S3-compatible) ARCHIVE_STORAGE_ENABLED=false diff --git a/api/.ruff.toml b/api/.ruff.toml index 2a825f1ef0..dd78024a02 100644 --- a/api/.ruff.toml +++ b/api/.ruff.toml @@ -69,8 +69,6 @@ ignore = [ "FURB152", # math-constant "UP007", # non-pep604-annotation "UP032", # f-string - "UP045", # non-pep604-annotation-optional - "B005", # strip-with-multi-characters "B006", # mutable-argument-default "B007", # unused-loop-control-variable "B026", # star-arg-unpacking-after-keyword-arg @@ -84,7 +82,6 @@ ignore = [ "SIM102", # collapsible-if "SIM103", # needless-bool "SIM105", # suppressible-exception - "SIM107", # return-in-try-except-finally "SIM108", # if-else-block-instead-of-if-exp "SIM113", # enumerate-for-loop "SIM117", # multiple-with-statements @@ -93,32 +90,22 @@ ignore = [ ] [lint.per-file-ignores] -"__init__.py" = [ - "F401", # unused-import - "F811", # redefined-while-unused -] "configs/*" = [ "N802", # invalid-function-name ] -"graphon/model_runtime/callbacks/base_callback.py" = ["T201"] -"core/workflow/callbacks/workflow_logging_callback.py" = ["T201"] "libs/gmpy2_pkcs10aep_cipher.py" = [ "N803", # invalid-argument-name ] "tests/*" = [ - "F811", # redefined-while-unused "T201", # allow print in tests, "S110", # allow ignoring exceptions in tests code (currently) - ] -"controllers/console/explore/trial.py" = ["TID251"] -"controllers/console/human_input_form.py" = ["TID251"] -"controllers/web/human_input_form.py" = ["TID251"] - -[lint.flake8-tidy-imports] [lint.flake8-tidy-imports.banned-api."flask_restx.reqparse"] msg = "Use Pydantic payload/query models instead of reqparse." [lint.flake8-tidy-imports.banned-api."flask_restx.reqparse.RequestParser"] msg = "Use Pydantic payload/query models instead of reqparse." + +[lint.isort] +known-first-party = ["graphon"] \ No newline at end of file diff --git a/api/.vscode/launch.json.example b/api/.vscode/launch.json.example index 6bdfa2c039..1001559176 100644 --- a/api/.vscode/launch.json.example +++ b/api/.vscode/launch.json.example @@ -3,29 +3,21 @@ "compounds": [ { "name": "Launch Flask and Celery", - "configurations": ["Python: Flask", "Python: Celery"] + "configurations": ["Python: API (gevent)", "Python: Celery"] } ], "configurations": [ { - "name": "Python: Flask", - "consoleName": "Flask", + "name": "Python: API (gevent)", + "consoleName": "API", "type": "debugpy", "request": "launch", "python": "${workspaceFolder}/.venv/bin/python", "cwd": "${workspaceFolder}", "envFile": ".env", - "module": "flask", + "program": "${workspaceFolder}/app.py", "justMyCode": true, - "jinja": true, - "env": { - "FLASK_APP": "app.py", - "GEVENT_SUPPORT": "True" - }, - "args": [ - "run", - "--port=5001" - ] + "jinja": true }, { "name": "Python: Celery", diff --git a/api/Dockerfile b/api/Dockerfile index 7e0a439954..6098652573 100644 --- a/api/Dockerfile +++ b/api/Dockerfile @@ -21,8 +21,9 @@ RUN apt-get update \ # for building gmpy2 libmpfr-dev libmpc-dev -# Install Python dependencies +# Install Python dependencies (workspace members under providers/vdb/) COPY pyproject.toml uv.lock ./ +COPY providers ./providers RUN uv sync --locked --no-dev # production stage diff --git a/api/app.py b/api/app.py index c018c8a045..e53b037be5 100644 --- a/api/app.py +++ b/api/app.py @@ -1,5 +1,6 @@ from __future__ import annotations +import logging import sys from typing import TYPE_CHECKING, cast @@ -9,17 +10,35 @@ if TYPE_CHECKING: celery: Celery +HOST = "0.0.0.0" +PORT = 5001 +logger = logging.getLogger(__name__) + + def is_db_command() -> bool: if len(sys.argv) > 1 and sys.argv[0].endswith("flask") and sys.argv[1] == "db": return True return False +def log_startup_banner(host: str, port: int) -> None: + debugger_attached = sys.gettrace() is not None + logger.info("Serving Dify API via gevent WebSocket server") + logger.info("Bound to http://%s:%s", host, port) + logger.info("Debugger attached: %s", "on" if debugger_attached else "off") + logger.info("Press CTRL+C to quit") + + # create app +flask_app = None +socketio_app = None + if is_db_command(): from app_factory import create_migrations_app app = create_migrations_app() + socketio_app = app + flask_app = app else: # Gunicorn and Celery handle monkey patching automatically in production by # specifying the `gevent` worker class. Manual monkey patching is not required here. @@ -30,8 +49,14 @@ else: from app_factory import create_app - app = create_app() + socketio_app, flask_app = create_app() + app = flask_app celery = cast("Celery", app.extensions["celery"]) if __name__ == "__main__": - app.run(host="0.0.0.0", port=5001) + from gevent import pywsgi + from geventwebsocket.handler import WebSocketHandler # type: ignore[reportMissingTypeStubs] + + log_startup_banner(HOST, PORT) + server = pywsgi.WSGIServer((HOST, PORT), socketio_app, handler_class=WebSocketHandler) + server.serve_forever() diff --git a/api/app_factory.py b/api/app_factory.py index 76838f9925..48e50ceae9 100644 --- a/api/app_factory.py +++ b/api/app_factory.py @@ -1,6 +1,7 @@ import logging import time +import socketio # type: ignore[reportMissingTypeStubs] from flask import request from opentelemetry.trace import get_current_span from opentelemetry.trace.span import INVALID_SPAN_ID, INVALID_TRACE_ID @@ -10,6 +11,7 @@ from contexts.wrapper import RecyclableContextVar from controllers.console.error import UnauthorizedAndForceLogout from core.logging.context import init_request_context from dify_app import DifyApp +from extensions.ext_socketio import sio from services.enterprise.enterprise_service import EnterpriseService from services.feature_service import LicenseStatus @@ -122,14 +124,18 @@ def create_flask_app_with_configs() -> DifyApp: return dify_app -def create_app() -> DifyApp: +def create_app() -> tuple[socketio.WSGIApp, DifyApp]: start_time = time.perf_counter() app = create_flask_app_with_configs() initialize_extensions(app) + + sio.app = app + socketio_app = socketio.WSGIApp(sio, app) + end_time = time.perf_counter() if dify_config.DEBUG: logger.info("Finished create_app (%s ms)", round((end_time - start_time) * 1000, 2)) - return app + return socketio_app, app def initialize_extensions(app: DifyApp): diff --git a/api/commands/account.py b/api/commands/account.py index 84af7a5ae6..6a2a2e0428 100644 --- a/api/commands/account.py +++ b/api/commands/account.py @@ -2,7 +2,6 @@ import base64 import secrets import click -from sqlalchemy.orm import sessionmaker from constants.languages import languages from extensions.ext_database import db @@ -25,30 +24,31 @@ def reset_password(email, new_password, password_confirm): return normalized_email = email.strip().lower() - with sessionmaker(db.engine, expire_on_commit=False).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(email.strip(), session=session) + account = AccountService.get_account_by_email_with_case_fallback(email.strip()) - if not account: - click.echo(click.style(f"Account not found for email: {email}", fg="red")) - return + if not account: + click.echo(click.style(f"Account not found for email: {email}", fg="red")) + return - try: - valid_password(new_password) - except: - click.echo(click.style(f"Invalid password. Must match {password_pattern}", fg="red")) - return + try: + valid_password(new_password) + except: + click.echo(click.style(f"Invalid password. Must match {password_pattern}", fg="red")) + return - # generate password salt - salt = secrets.token_bytes(16) - base64_salt = base64.b64encode(salt).decode() + # generate password salt + salt = secrets.token_bytes(16) + base64_salt = base64.b64encode(salt).decode() - # encrypt password with salt - password_hashed = hash_password(new_password, salt) - base64_password_hashed = base64.b64encode(password_hashed).decode() - account.password = base64_password_hashed - account.password_salt = base64_salt - AccountService.reset_login_error_rate_limit(normalized_email) - click.echo(click.style("Password reset successfully.", fg="green")) + # encrypt password with salt + password_hashed = hash_password(new_password, salt) + base64_password_hashed = base64.b64encode(password_hashed).decode() + account = db.session.merge(account) + account.password = base64_password_hashed + account.password_salt = base64_salt + db.session.commit() + AccountService.reset_login_error_rate_limit(normalized_email) + click.echo(click.style("Password reset successfully.", fg="green")) @click.command("reset-email", help="Reset the account email.") @@ -65,21 +65,22 @@ def reset_email(email, new_email, email_confirm): return normalized_new_email = new_email.strip().lower() - with sessionmaker(db.engine, expire_on_commit=False).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(email.strip(), session=session) + account = AccountService.get_account_by_email_with_case_fallback(email.strip()) - if not account: - click.echo(click.style(f"Account not found for email: {email}", fg="red")) - return + if not account: + click.echo(click.style(f"Account not found for email: {email}", fg="red")) + return - try: - email_validate(normalized_new_email) - except: - click.echo(click.style(f"Invalid email: {new_email}", fg="red")) - return + try: + email_validate(normalized_new_email) + except: + click.echo(click.style(f"Invalid email: {new_email}", fg="red")) + return - account.email = normalized_new_email - click.echo(click.style("Email updated successfully.", fg="green")) + account = db.session.merge(account) + account.email = normalized_new_email + db.session.commit() + click.echo(click.style("Email updated successfully.", fg="green")) @click.command("create-tenant", help="Create account and tenant.") diff --git a/api/commands/vector.py b/api/commands/vector.py index cb7eb7c452..956f20d6bb 100644 --- a/api/commands/vector.py +++ b/api/commands/vector.py @@ -341,11 +341,10 @@ def add_qdrant_index(field: str): click.echo(click.style("No dataset collection bindings found.", fg="red")) return import qdrant_client + from dify_vdb_qdrant.qdrant_vector import PathQdrantParams, QdrantConfig from qdrant_client.http.exceptions import UnexpectedResponse from qdrant_client.http.models import PayloadSchemaType - from core.rag.datasource.vdb.qdrant.qdrant_vector import PathQdrantParams, QdrantConfig - for binding in bindings: if dify_config.QDRANT_URL is None: raise ValueError("Qdrant URL is required.") diff --git a/api/configs/feature/__init__.py b/api/configs/feature/__init__.py index d37cff63e9..ae49ae47d0 100644 --- a/api/configs/feature/__init__.py +++ b/api/configs/feature/__init__.py @@ -1274,6 +1274,13 @@ class PositionConfig(BaseSettings): return {item.strip() for item in self.POSITION_TOOL_EXCLUDES.split(",") if item.strip() != ""} +class CollaborationConfig(BaseSettings): + ENABLE_COLLABORATION_MODE: bool = Field( + description="Whether to enable collaboration mode features across the workspace", + default=False, + ) + + class LoginConfig(BaseSettings): ENABLE_EMAIL_CODE_LOGIN: bool = Field( description="whether to enable email code login", @@ -1399,6 +1406,7 @@ class FeatureConfig( WorkflowConfig, WorkflowNodeExecutionConfig, WorkspaceConfig, + CollaborationConfig, LoginConfig, AccountConfig, SwaggerUIConfig, diff --git a/api/configs/middleware/__init__.py b/api/configs/middleware/__init__.py index 15ac8bf0bf..c392b8840f 100644 --- a/api/configs/middleware/__init__.py +++ b/api/configs/middleware/__init__.py @@ -1,5 +1,5 @@ import os -from typing import Any, Literal +from typing import Any, Literal, TypedDict from urllib.parse import parse_qsl, quote_plus from pydantic import Field, NonNegativeFloat, NonNegativeInt, PositiveFloat, PositiveInt, computed_field @@ -107,6 +107,17 @@ class KeywordStoreConfig(BaseSettings): ) +class SQLAlchemyEngineOptionsDict(TypedDict): + pool_size: int + max_overflow: int + pool_recycle: int + pool_pre_ping: bool + connect_args: dict[str, str] + pool_use_lifo: bool + pool_reset_on_return: None + pool_timeout: int + + class DatabaseConfig(BaseSettings): # Database type selector DB_TYPE: Literal["postgresql", "mysql", "oceanbase", "seekdb"] = Field( @@ -149,6 +160,16 @@ class DatabaseConfig(BaseSettings): default="", ) + DB_SESSION_TIMEZONE_OVERRIDE: str = Field( + description=( + "PostgreSQL session timezone override injected via startup options." + " Default is 'UTC' for out-of-the-box consistency." + " Set to empty string to disable app-level timezone injection, for example when using RDS Proxy" + " together with a database-side default timezone." + ), + default="UTC", + ) + @computed_field # type: ignore[prop-decorator] @property def SQLALCHEMY_DATABASE_URI_SCHEME(self) -> str: @@ -209,21 +230,22 @@ class DatabaseConfig(BaseSettings): @computed_field # type: ignore[prop-decorator] @property - def SQLALCHEMY_ENGINE_OPTIONS(self) -> dict[str, Any]: + def SQLALCHEMY_ENGINE_OPTIONS(self) -> SQLAlchemyEngineOptionsDict: # Parse DB_EXTRAS for 'options' db_extras_dict = dict(parse_qsl(self.DB_EXTRAS)) options = db_extras_dict.get("options", "") - connect_args = {} + connect_args: dict[str, str] = {} # Use the dynamic SQLALCHEMY_DATABASE_URI_SCHEME property if self.SQLALCHEMY_DATABASE_URI_SCHEME.startswith("postgresql"): - timezone_opt = "-c timezone=UTC" - if options: - merged_options = f"{options} {timezone_opt}" - else: - merged_options = timezone_opt - connect_args = {"options": merged_options} + merged_options = options.strip() + session_timezone_override = self.DB_SESSION_TIMEZONE_OVERRIDE.strip() + if session_timezone_override: + timezone_opt = f"-c timezone={session_timezone_override}" + merged_options = f"{merged_options} {timezone_opt}".strip() if merged_options else timezone_opt + if merged_options: + connect_args = {"options": merged_options} - return { + result: SQLAlchemyEngineOptionsDict = { "pool_size": self.SQLALCHEMY_POOL_SIZE, "max_overflow": self.SQLALCHEMY_MAX_OVERFLOW, "pool_recycle": self.SQLALCHEMY_POOL_RECYCLE, @@ -233,6 +255,7 @@ class DatabaseConfig(BaseSettings): "pool_reset_on_return": None, "pool_timeout": self.SQLALCHEMY_POOL_TIMEOUT, } + return result class CeleryConfig(DatabaseConfig): diff --git a/api/configs/middleware/cache/redis_config.py b/api/configs/middleware/cache/redis_config.py index 3b91207545..2def0a0d4e 100644 --- a/api/configs/middleware/cache/redis_config.py +++ b/api/configs/middleware/cache/redis_config.py @@ -32,6 +32,11 @@ class RedisConfig(BaseSettings): default=0, ) + REDIS_KEY_PREFIX: str = Field( + description="Optional global prefix for Redis keys, topics, and transport artifacts", + default="", + ) + REDIS_USE_SSL: bool = Field( description="Enable SSL/TLS for the Redis connection", default=False, @@ -117,6 +122,37 @@ class RedisConfig(BaseSettings): default=None, ) + REDIS_RETRY_RETRIES: NonNegativeInt = Field( + description="Maximum number of retries per Redis command on " + "transient failures (ConnectionError, TimeoutError, socket.timeout)", + default=3, + ) + + REDIS_RETRY_BACKOFF_BASE: PositiveFloat = Field( + description="Base delay in seconds for exponential backoff between retries", + default=1.0, + ) + + REDIS_RETRY_BACKOFF_CAP: PositiveFloat = Field( + description="Maximum backoff delay in seconds between retries", + default=10.0, + ) + + REDIS_SOCKET_TIMEOUT: PositiveFloat | None = Field( + description="Socket timeout in seconds for Redis read/write operations", + default=5.0, + ) + + REDIS_SOCKET_CONNECT_TIMEOUT: PositiveFloat | None = Field( + description="Socket timeout in seconds for Redis connection establishment", + default=5.0, + ) + + REDIS_HEALTH_CHECK_INTERVAL: NonNegativeInt = Field( + description="Interval in seconds between Redis connection health checks (0 to disable)", + default=30, + ) + @field_validator("REDIS_MAX_CONNECTIONS", mode="before") @classmethod def _empty_string_to_none_for_max_conns(cls, v): diff --git a/api/configs/middleware/vdb/hologres_config.py b/api/configs/middleware/vdb/hologres_config.py index 9812cce268..788b3cfb78 100644 --- a/api/configs/middleware/vdb/hologres_config.py +++ b/api/configs/middleware/vdb/hologres_config.py @@ -1,4 +1,3 @@ -from holo_search_sdk.types import BaseQuantizationType, DistanceType, TokenizerType from pydantic import Field from pydantic_settings import BaseSettings @@ -42,17 +41,17 @@ class HologresConfig(BaseSettings): default="public", ) - HOLOGRES_TOKENIZER: TokenizerType = Field( + HOLOGRES_TOKENIZER: str = Field( description="Tokenizer for full-text search index (e.g., 'jieba', 'ik', 'standard', 'simple').", default="jieba", ) - HOLOGRES_DISTANCE_METHOD: DistanceType = Field( + HOLOGRES_DISTANCE_METHOD: str = Field( description="Distance method for vector index (e.g., 'Cosine', 'Euclidean', 'InnerProduct').", default="Cosine", ) - HOLOGRES_BASE_QUANTIZATION_TYPE: BaseQuantizationType = Field( + HOLOGRES_BASE_QUANTIZATION_TYPE: str = Field( description="Base quantization type for vector index (e.g., 'rabitq', 'sq8', 'fp16', 'fp32').", default="rabitq", ) diff --git a/api/configs/middleware/vdb/iris_config.py b/api/configs/middleware/vdb/iris_config.py index c532d191c3..f5993dd8f8 100644 --- a/api/configs/middleware/vdb/iris_config.py +++ b/api/configs/middleware/vdb/iris_config.py @@ -1,5 +1,7 @@ """Configuration for InterSystems IRIS vector database.""" +from typing import Any + from pydantic import Field, PositiveInt, model_validator from pydantic_settings import BaseSettings @@ -64,7 +66,7 @@ class IrisVectorConfig(BaseSettings): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict) -> dict: + def validate_config(cls, values: dict[str, Any]) -> dict[str, Any]: """Validate IRIS configuration values. Args: diff --git a/api/controllers/common/controller_schemas.py b/api/controllers/common/controller_schemas.py index e13bf025fc..c12d576473 100644 --- a/api/controllers/common/controller_schemas.py +++ b/api/controllers/common/controller_schemas.py @@ -1,4 +1,5 @@ from typing import Any, Literal +from uuid import UUID from pydantic import BaseModel, Field, model_validator @@ -23,9 +24,9 @@ class ConversationRenamePayload(BaseModel): class MessageListQuery(BaseModel): - conversation_id: UUIDStrOrEmpty - first_id: UUIDStrOrEmpty | None = None - limit: int = Field(default=20, ge=1, le=100) + conversation_id: UUIDStrOrEmpty = Field(description="Conversation UUID") + first_id: UUIDStrOrEmpty | None = Field(default=None, description="First message ID for pagination") + limit: int = Field(default=20, ge=1, le=100, description="Number of messages to return (1-100)") class MessageFeedbackPayload(BaseModel): @@ -48,16 +49,56 @@ class SavedMessageCreatePayload(BaseModel): # --- Workflow schemas --- +class DefaultBlockConfigQuery(BaseModel): + q: str | None = None + + +class WorkflowListQuery(BaseModel): + page: int = Field(default=1, ge=1, le=99999) + limit: int = Field(default=10, ge=1, le=100) + user_id: str | None = None + named_only: bool = False + + class WorkflowRunPayload(BaseModel): inputs: dict[str, Any] files: list[dict[str, Any]] | None = None +class WorkflowUpdatePayload(BaseModel): + marked_name: str | None = Field(default=None, max_length=20) + marked_comment: str | None = Field(default=None, max_length=100) + + +# --- Dataset schemas --- + + +DOCUMENT_BATCH_DOWNLOAD_ZIP_MAX_DOCS = 100 + + +class ChildChunkCreatePayload(BaseModel): + content: str + + +class ChildChunkUpdatePayload(BaseModel): + content: str + + +class DocumentBatchDownloadZipPayload(BaseModel): + """Request payload for bulk downloading documents as a zip archive.""" + + document_ids: list[UUID] = Field(..., min_length=1, max_length=DOCUMENT_BATCH_DOWNLOAD_ZIP_MAX_DOCS) + + +class MetadataUpdatePayload(BaseModel): + name: str + + # --- Audio schemas --- class TextToAudioPayload(BaseModel): - message_id: str | None = None - voice: str | None = None - text: str | None = None - streaming: bool | None = None + message_id: str | None = Field(default=None, description="Message ID") + voice: str | None = Field(default=None, description="Voice to use for TTS") + text: str | None = Field(default=None, description="Text to convert to audio") + streaming: bool | None = Field(default=None, description="Enable streaming response") diff --git a/api/controllers/common/fields.py b/api/controllers/common/fields.py index 4fe3fc9062..8e665c1386 100644 --- a/api/controllers/common/fields.py +++ b/api/controllers/common/fields.py @@ -2,9 +2,9 @@ from __future__ import annotations from typing import Any -from graphon.file import helpers as file_helpers from pydantic import BaseModel, ConfigDict, computed_field +from graphon.file import helpers as file_helpers from models.model import IconType type JSONValue = str | int | float | bool | None | dict[str, Any] | list[Any] diff --git a/api/controllers/console/__init__.py b/api/controllers/console/__init__.py index d624b10b22..980e828945 100644 --- a/api/controllers/console/__init__.py +++ b/api/controllers/console/__init__.py @@ -65,6 +65,7 @@ from .app import ( statistic, workflow, workflow_app_log, + workflow_comment, workflow_draft_variable, workflow_run, workflow_statistic, @@ -116,6 +117,7 @@ from .explore import ( saved_message, trial, ) +from .socketio import workflow as socketio_workflow # pyright: ignore[reportUnusedImport] # Import tag controllers from .tag import tags @@ -201,6 +203,7 @@ __all__ = [ "saved_message", "setup", "site", + "socketio_workflow", "spec", "statistic", "tags", @@ -211,6 +214,7 @@ __all__ = [ "website", "workflow", "workflow_app_log", + "workflow_comment", "workflow_draft_variable", "workflow_run", "workflow_statistic", diff --git a/api/controllers/console/apikey.py b/api/controllers/console/apikey.py index 772bb9d0f1..b03d9b4a4c 100644 --- a/api/controllers/console/apikey.py +++ b/api/controllers/console/apikey.py @@ -1,12 +1,16 @@ +from datetime import datetime + import flask_restx -from flask_restx import Resource, fields, marshal_with +from flask_restx import Resource from flask_restx._http import HTTPStatus +from pydantic import field_validator from sqlalchemy import delete, func, select from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import Forbidden +from controllers.common.schema import register_schema_models from extensions.ext_database import db -from libs.helper import TimestampField +from fields.base import ResponseModel from libs.login import current_account_with_tenant, login_required from models.dataset import Dataset from models.enums import ApiTokenType @@ -16,21 +20,31 @@ from services.api_token_service import ApiTokenCache from . import console_ns from .wraps import account_initialization_required, edit_permission_required, setup_required -api_key_fields = { - "id": fields.String, - "type": fields.String, - "token": fields.String, - "last_used_at": TimestampField, - "created_at": TimestampField, -} -api_key_item_model = console_ns.model("ApiKeyItem", api_key_fields) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -api_key_list = {"data": fields.List(fields.Nested(api_key_item_model), attribute="items")} -api_key_list_model = console_ns.model( - "ApiKeyList", {"data": fields.List(fields.Nested(api_key_item_model), attribute="items")} -) +class ApiKeyItem(ResponseModel): + id: str + type: str + token: str + last_used_at: int | None = None + created_at: int | None = None + + @field_validator("last_used_at", "created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class ApiKeyList(ResponseModel): + data: list[ApiKeyItem] + + +register_schema_models(console_ns, ApiKeyItem, ApiKeyList) def _get_resource(resource_id, tenant_id, resource_model): @@ -54,7 +68,6 @@ class BaseApiKeyListResource(Resource): token_prefix: str | None = None max_keys = 10 - @marshal_with(api_key_list_model) def get(self, resource_id): assert self.resource_id_field is not None, "resource_id_field must be set" resource_id = str(resource_id) @@ -66,9 +79,8 @@ class BaseApiKeyListResource(Resource): ApiToken.type == self.resource_type, getattr(ApiToken, self.resource_id_field) == resource_id ) ).all() - return {"items": keys} + return ApiKeyList.model_validate({"data": keys}, from_attributes=True).model_dump(mode="json") - @marshal_with(api_key_item_model) @edit_permission_required def post(self, resource_id): assert self.resource_id_field is not None, "resource_id_field must be set" @@ -100,7 +112,7 @@ class BaseApiKeyListResource(Resource): api_token.type = self.resource_type db.session.add(api_token) db.session.commit() - return api_token, 201 + return ApiKeyItem.model_validate(api_token, from_attributes=True).model_dump(mode="json"), 201 class BaseApiKeyResource(Resource): @@ -147,7 +159,7 @@ class AppApiKeyListResource(BaseApiKeyListResource): @console_ns.doc("get_app_api_keys") @console_ns.doc(description="Get all API keys for an app") @console_ns.doc(params={"resource_id": "App ID"}) - @console_ns.response(200, "Success", api_key_list_model) + @console_ns.response(200, "API keys retrieved successfully", console_ns.models[ApiKeyList.__name__]) def get(self, resource_id): # type: ignore """Get all API keys for an app""" return super().get(resource_id) @@ -155,7 +167,7 @@ class AppApiKeyListResource(BaseApiKeyListResource): @console_ns.doc("create_app_api_key") @console_ns.doc(description="Create a new API key for an app") @console_ns.doc(params={"resource_id": "App ID"}) - @console_ns.response(201, "API key created successfully", api_key_item_model) + @console_ns.response(201, "API key created successfully", console_ns.models[ApiKeyItem.__name__]) @console_ns.response(400, "Maximum keys exceeded") def post(self, resource_id): # type: ignore """Create a new API key for an app""" @@ -187,7 +199,7 @@ class DatasetApiKeyListResource(BaseApiKeyListResource): @console_ns.doc("get_dataset_api_keys") @console_ns.doc(description="Get all API keys for a dataset") @console_ns.doc(params={"resource_id": "Dataset ID"}) - @console_ns.response(200, "Success", api_key_list_model) + @console_ns.response(200, "API keys retrieved successfully", console_ns.models[ApiKeyList.__name__]) def get(self, resource_id): # type: ignore """Get all API keys for a dataset""" return super().get(resource_id) @@ -195,7 +207,7 @@ class DatasetApiKeyListResource(BaseApiKeyListResource): @console_ns.doc("create_dataset_api_key") @console_ns.doc(description="Create a new API key for a dataset") @console_ns.doc(params={"resource_id": "Dataset ID"}) - @console_ns.response(201, "API key created successfully", api_key_item_model) + @console_ns.response(201, "API key created successfully", console_ns.models[ApiKeyItem.__name__]) @console_ns.response(400, "Maximum keys exceeded") def post(self, resource_id): # type: ignore """Create a new API key for a dataset""" diff --git a/api/controllers/console/app/advanced_prompt_template.py b/api/controllers/console/app/advanced_prompt_template.py index 3bd61feb44..ed66da1be5 100644 --- a/api/controllers/console/app/advanced_prompt_template.py +++ b/api/controllers/console/app/advanced_prompt_template.py @@ -5,7 +5,7 @@ from pydantic import BaseModel, Field from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, setup_required from libs.login import login_required -from services.advanced_prompt_template_service import AdvancedPromptTemplateService +from services.advanced_prompt_template_service import AdvancedPromptTemplateArgs, AdvancedPromptTemplateService class AdvancedPromptTemplateQuery(BaseModel): @@ -35,5 +35,10 @@ class AdvancedPromptTemplateList(Resource): @account_initialization_required def get(self): args = AdvancedPromptTemplateQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore - - return AdvancedPromptTemplateService.get_prompt(args.model_dump()) + prompt_args: AdvancedPromptTemplateArgs = { + "app_mode": args.app_mode, + "model_mode": args.model_mode, + "model_name": args.model_name, + "has_context": args.has_context, + } + return AdvancedPromptTemplateService.get_prompt(prompt_args) diff --git a/api/controllers/console/app/annotation.py b/api/controllers/console/app/annotation.py index 9931bb5dd7..528785931e 100644 --- a/api/controllers/console/app/annotation.py +++ b/api/controllers/console/app/annotation.py @@ -25,7 +25,13 @@ from fields.annotation_fields import ( ) from libs.helper import uuid_value from libs.login import login_required -from services.annotation_service import AppAnnotationService +from services.annotation_service import ( + AppAnnotationService, + EnableAnnotationArgs, + UpdateAnnotationArgs, + UpdateAnnotationSettingArgs, + UpsertAnnotationArgs, +) DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" @@ -120,7 +126,12 @@ class AnnotationReplyActionApi(Resource): args = AnnotationReplyPayload.model_validate(console_ns.payload) match action: case "enable": - result = AppAnnotationService.enable_app_annotation(args.model_dump(), app_id) + enable_args: EnableAnnotationArgs = { + "score_threshold": args.score_threshold, + "embedding_provider_name": args.embedding_provider_name, + "embedding_model_name": args.embedding_model_name, + } + result = AppAnnotationService.enable_app_annotation(enable_args, app_id) case "disable": result = AppAnnotationService.disable_app_annotation(app_id) return result, 200 @@ -161,7 +172,8 @@ class AppAnnotationSettingUpdateApi(Resource): args = AnnotationSettingUpdatePayload.model_validate(console_ns.payload) - result = AppAnnotationService.update_app_annotation_setting(app_id, annotation_setting_id, args.model_dump()) + setting_args: UpdateAnnotationSettingArgs = {"score_threshold": args.score_threshold} + result = AppAnnotationService.update_app_annotation_setting(app_id, annotation_setting_id, setting_args) return result, 200 @@ -237,8 +249,16 @@ class AnnotationApi(Resource): def post(self, app_id): app_id = str(app_id) args = CreateAnnotationPayload.model_validate(console_ns.payload) - data = args.model_dump(exclude_none=True) - annotation = AppAnnotationService.up_insert_app_annotation_from_message(data, app_id) + upsert_args: UpsertAnnotationArgs = {} + if args.answer is not None: + upsert_args["answer"] = args.answer + if args.content is not None: + upsert_args["content"] = args.content + if args.message_id is not None: + upsert_args["message_id"] = args.message_id + if args.question is not None: + upsert_args["question"] = args.question + annotation = AppAnnotationService.up_insert_app_annotation_from_message(upsert_args, app_id) return Annotation.model_validate(annotation, from_attributes=True).model_dump(mode="json") @setup_required @@ -315,9 +335,12 @@ class AnnotationUpdateDeleteApi(Resource): app_id = str(app_id) annotation_id = str(annotation_id) args = UpdateAnnotationPayload.model_validate(console_ns.payload) - annotation = AppAnnotationService.update_app_annotation_directly( - args.model_dump(exclude_none=True), app_id, annotation_id - ) + update_args: UpdateAnnotationArgs = {} + if args.answer is not None: + update_args["answer"] = args.answer + if args.question is not None: + update_args["question"] = args.question + annotation = AppAnnotationService.update_app_annotation_directly(update_args, app_id, annotation_id) return Annotation.model_validate(annotation, from_attributes=True).model_dump(mode="json") @setup_required diff --git a/api/controllers/console/app/app.py b/api/controllers/console/app/app.py index c4b9bf6540..051d08aa36 100644 --- a/api/controllers/console/app/app.py +++ b/api/controllers/console/app/app.py @@ -5,11 +5,9 @@ from typing import Any, Literal from flask import request from flask_restx import Resource -from graphon.enums import WorkflowExecutionStatus -from graphon.file import helpers as file_helpers from pydantic import AliasChoices, BaseModel, Field, computed_field, field_validator from sqlalchemy import select -from sqlalchemy.orm import sessionmaker +from sqlalchemy.orm import Session from werkzeug.exceptions import BadRequest from controllers.common.helpers import FileInfo @@ -31,12 +29,15 @@ from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.trigger.constants import TRIGGER_NODE_TYPES from extensions.ext_database import db from fields.base import ResponseModel +from graphon.enums import WorkflowExecutionStatus +from libs.helper import build_icon_url from libs.login import current_account_with_tenant, login_required from models import App, DatasetPermissionEnum, Workflow from models.model import IconType -from services.app_dsl_service import AppDslService, ImportMode +from services.app_dsl_service import AppDslService from services.app_service import AppService from services.enterprise.enterprise_service import EnterpriseService +from services.entities.dsl_entities import ImportMode, ImportStatus from services.entities.knowledge_entities.knowledge_entities import ( DataSource, InfoList, @@ -160,15 +161,6 @@ def _to_timestamp(value: datetime | int | None) -> int | None: return value -def _build_icon_url(icon_type: str | IconType | None, icon: str | None) -> str | None: - if icon is None or icon_type is None: - return None - icon_type_value = icon_type.value if isinstance(icon_type, IconType) else str(icon_type) - if icon_type_value.lower() != IconType.IMAGE: - return None - return file_helpers.get_signed_file_url(icon) - - class Tag(ResponseModel): id: str name: str @@ -291,7 +283,7 @@ class Site(ResponseModel): @computed_field(return_type=str | None) # type: ignore @property def icon_url(self) -> str | None: - return _build_icon_url(self.icon_type, self.icon) + return build_icon_url(self.icon_type, self.icon) @field_validator("icon_type", mode="before") @classmethod @@ -341,7 +333,7 @@ class AppPartial(ResponseModel): @computed_field(return_type=str | None) # type: ignore @property def icon_url(self) -> str | None: - return _build_icon_url(self.icon_type, self.icon) + return build_icon_url(self.icon_type, self.icon) @field_validator("created_at", "updated_at", mode="before") @classmethod @@ -389,7 +381,7 @@ class AppDetailWithSite(AppDetail): @computed_field(return_type=str | None) # type: ignore @property def icon_url(self) -> str | None: - return _build_icon_url(self.icon_type, self.icon) + return build_icon_url(self.icon_type, self.icon) class AppPagination(ResponseModel): @@ -631,7 +623,7 @@ class AppCopyApi(Resource): args = CopyAppPayload.model_validate(console_ns.payload or {}) - with sessionmaker(db.engine, expire_on_commit=False).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) yaml_content = import_service.export_dsl(app_model=app_model, include_secret=True) result = import_service.import_app( @@ -644,6 +636,13 @@ class AppCopyApi(Resource): icon=args.icon, icon_background=args.icon_background, ) + if result.status == ImportStatus.FAILED: + session.rollback() + return result.model_dump(mode="json"), 400 + if result.status == ImportStatus.PENDING: + session.rollback() + return result.model_dump(mode="json"), 202 + session.commit() # Inherit web app permission from original app if result.app_id and FeatureService.get_system_features().webapp_auth.enabled: diff --git a/api/controllers/console/app/app_import.py b/api/controllers/console/app/app_import.py index 16e1fa3245..e91dc9cfe5 100644 --- a/api/controllers/console/app/app_import.py +++ b/api/controllers/console/app/app_import.py @@ -1,7 +1,8 @@ -from flask_restx import Resource, fields, marshal_with +from flask_restx import Resource from pydantic import BaseModel, Field -from sqlalchemy.orm import sessionmaker +from sqlalchemy.orm import Session +from controllers.common.schema import register_schema_models from controllers.console.app.wraps import get_app_model from controllers.console.wraps import ( account_initialization_required, @@ -10,34 +11,15 @@ from controllers.console.wraps import ( setup_required, ) from extensions.ext_database import db -from fields.app_fields import ( - app_import_check_dependencies_fields, - app_import_fields, - leaked_dependency_fields, -) from libs.login import current_account_with_tenant, login_required from models.model import App -from services.app_dsl_service import AppDslService, ImportStatus +from services.app_dsl_service import AppDslService, Import from services.enterprise.enterprise_service import EnterpriseService +from services.entities.dsl_entities import CheckDependenciesResult, ImportStatus from services.feature_service import FeatureService from .. import console_ns -# Register models for flask_restx to avoid dict type issues in Swagger -# Register base model first -leaked_dependency_model = console_ns.model("LeakedDependency", leaked_dependency_fields) - -app_import_model = console_ns.model("AppImport", app_import_fields) - -# For nested models, need to replace nested dict with registered model -app_import_check_dependencies_fields_copy = app_import_check_dependencies_fields.copy() -app_import_check_dependencies_fields_copy["leaked_dependencies"] = fields.List(fields.Nested(leaked_dependency_model)) -app_import_check_dependencies_model = console_ns.model( - "AppImportCheckDependencies", app_import_check_dependencies_fields_copy -) - -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class AppImportPayload(BaseModel): mode: str = Field(..., description="Import mode") @@ -51,18 +33,18 @@ class AppImportPayload(BaseModel): app_id: str | None = Field(None) -console_ns.schema_model( - AppImportPayload.__name__, AppImportPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0) -) +register_schema_models(console_ns, AppImportPayload, Import, CheckDependenciesResult) @console_ns.route("/apps/imports") class AppImportApi(Resource): @console_ns.expect(console_ns.models[AppImportPayload.__name__]) + @console_ns.response(200, "Import completed", console_ns.models[Import.__name__]) + @console_ns.response(202, "Import pending confirmation", console_ns.models[Import.__name__]) + @console_ns.response(400, "Import failed", console_ns.models[Import.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(app_import_model) @cloud_edition_billing_resource_check("apps") @edit_permission_required def post(self): @@ -70,8 +52,9 @@ class AppImportApi(Resource): current_user, _ = current_account_with_tenant() args = AppImportPayload.model_validate(console_ns.payload) - # Create service with session - with sessionmaker(db.engine).begin() as session: + # AppDslService performs internal commits for some creation paths, so use a plain + # Session here instead of nesting it inside sessionmaker(...).begin(). + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) # Import app account = current_user @@ -87,35 +70,45 @@ class AppImportApi(Resource): icon_background=args.icon_background, app_id=args.app_id, ) + if result.status == ImportStatus.FAILED: + session.rollback() + else: + session.commit() if result.app_id and FeatureService.get_system_features().webapp_auth.enabled: # update web app setting as private EnterpriseService.WebAppAuth.update_app_access_mode(result.app_id, "private") # Return appropriate status code based on result status = result.status - if status == ImportStatus.FAILED: - return result.model_dump(mode="json"), 400 - elif status == ImportStatus.PENDING: - return result.model_dump(mode="json"), 202 - return result.model_dump(mode="json"), 200 + match status: + case ImportStatus.FAILED: + return result.model_dump(mode="json"), 400 + case ImportStatus.PENDING: + return result.model_dump(mode="json"), 202 + case ImportStatus.COMPLETED | ImportStatus.COMPLETED_WITH_WARNINGS: + return result.model_dump(mode="json"), 200 @console_ns.route("/apps/imports//confirm") class AppImportConfirmApi(Resource): + @console_ns.response(200, "Import confirmed", console_ns.models[Import.__name__]) + @console_ns.response(400, "Import failed", console_ns.models[Import.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(app_import_model) @edit_permission_required def post(self, import_id): # Check user role first current_user, _ = current_account_with_tenant() - # Create service with session - with sessionmaker(db.engine).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) # Confirm import account = current_user result = import_service.confirm_import(import_id=import_id, account=account) + if result.status == ImportStatus.FAILED: + session.rollback() + else: + session.commit() # Return appropriate status code based on result if result.status == ImportStatus.FAILED: @@ -125,14 +118,14 @@ class AppImportConfirmApi(Resource): @console_ns.route("/apps/imports//check-dependencies") class AppImportCheckDependenciesApi(Resource): + @console_ns.response(200, "Dependencies checked", console_ns.models[CheckDependenciesResult.__name__]) @setup_required @login_required @get_app_model @account_initialization_required - @marshal_with(app_import_check_dependencies_model) @edit_permission_required def get(self, app_model: App): - with sessionmaker(db.engine).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) result = import_service.check_dependencies(app_model=app_model) diff --git a/api/controllers/console/app/audio.py b/api/controllers/console/app/audio.py index 78ddb904e1..91fbe4a85a 100644 --- a/api/controllers/console/app/audio.py +++ b/api/controllers/console/app/audio.py @@ -2,7 +2,6 @@ import logging from flask import request from flask_restx import Resource, fields -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field from werkzeug.exceptions import InternalServerError @@ -23,6 +22,7 @@ from controllers.console.app.error import ( from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, setup_required from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from libs.login import login_required from models import App, AppMode from services.audio_service import AudioService diff --git a/api/controllers/console/app/completion.py b/api/controllers/console/app/completion.py index d83925d173..fe274e4c9a 100644 --- a/api/controllers/console/app/completion.py +++ b/api/controllers/console/app/completion.py @@ -3,7 +3,6 @@ from typing import Any, Literal from flask import request from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import InternalServerError, NotFound @@ -27,6 +26,7 @@ from core.errors.error import ( QuotaExceededError, ) from core.helper.trace_id_helper import get_external_trace_id +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import uuid_value from libs.login import current_user, login_required diff --git a/api/controllers/console/app/conversation.py b/api/controllers/console/app/conversation.py index d329d22309..b2b1049f0c 100644 --- a/api/controllers/console/app/conversation.py +++ b/api/controllers/console/app/conversation.py @@ -2,20 +2,37 @@ from typing import Literal import sqlalchemy as sa from flask import abort, request -from flask_restx import Resource, fields, marshal_with +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy import func, or_ from sqlalchemy.orm import selectinload from werkzeug.exceptions import NotFound +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db -from fields.raws import FilesContainedField +from fields.conversation_fields import ( + Conversation as ConversationResponse, +) +from fields.conversation_fields import ( + ConversationDetail as ConversationDetailResponse, +) +from fields.conversation_fields import ( + ConversationMessageDetail as ConversationMessageDetailResponse, +) +from fields.conversation_fields import ( + ConversationPagination as ConversationPaginationResponse, +) +from fields.conversation_fields import ( + ConversationWithSummaryPagination as ConversationWithSummaryPaginationResponse, +) +from fields.conversation_fields import ( + ResultResponse, +) from libs.datetime_utils import naive_utc_now, parse_time_range -from libs.helper import TimestampField from libs.login import current_account_with_tenant, login_required from models import Conversation, EndUser, Message, MessageAnnotation from models.model import AppMode @@ -62,267 +79,16 @@ console_ns.schema_model( ChatConversationQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), ) -# Register models for flask_restx to avoid dict type issues in Swagger -# Register in dependency order: base models first, then dependent models - -# Base models -simple_account_model = console_ns.model( - "SimpleAccount", - { - "id": fields.String, - "name": fields.String, - "email": fields.String, - }, -) - -feedback_stat_model = console_ns.model( - "FeedbackStat", - { - "like": fields.Integer, - "dislike": fields.Integer, - }, -) - -status_count_model = console_ns.model( - "StatusCount", - { - "success": fields.Integer, - "failed": fields.Integer, - "partial_success": fields.Integer, - "paused": fields.Integer, - }, -) - -message_file_model = console_ns.model( - "MessageFile", - { - "id": fields.String, - "filename": fields.String, - "type": fields.String, - "url": fields.String, - "mime_type": fields.String, - "size": fields.Integer, - "transfer_method": fields.String, - "belongs_to": fields.String(default="user"), - "upload_file_id": fields.String(default=None), - }, -) - -agent_thought_model = console_ns.model( - "AgentThought", - { - "id": fields.String, - "chain_id": fields.String, - "message_id": fields.String, - "position": fields.Integer, - "thought": fields.String, - "tool": fields.String, - "tool_labels": fields.Raw, - "tool_input": fields.String, - "created_at": TimestampField, - "observation": fields.String, - "files": fields.List(fields.String), - }, -) - -simple_model_config_model = console_ns.model( - "SimpleModelConfig", - { - "model": fields.Raw(attribute="model_dict"), - "pre_prompt": fields.String, - }, -) - -model_config_model = console_ns.model( - "ModelConfig", - { - "opening_statement": fields.String, - "suggested_questions": fields.Raw, - "model": fields.Raw, - "user_input_form": fields.Raw, - "pre_prompt": fields.String, - "agent_mode": fields.Raw, - }, -) - -# Models that depend on simple_account_model -feedback_model = console_ns.model( - "Feedback", - { - "rating": fields.String, - "content": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account": fields.Nested(simple_account_model, allow_null=True), - }, -) - -annotation_model = console_ns.model( - "Annotation", - { - "id": fields.String, - "question": fields.String, - "content": fields.String, - "account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - -annotation_hit_history_model = console_ns.model( - "AnnotationHitHistory", - { - "annotation_id": fields.String(attribute="id"), - "annotation_create_account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - - -class MessageTextField(fields.Raw): - def format(self, value): - return value[0]["text"] if value else "" - - -# Simple message detail model -simple_message_detail_model = console_ns.model( - "SimpleMessageDetail", - { - "inputs": FilesContainedField, - "query": fields.String, - "message": MessageTextField, - "answer": fields.String, - }, -) - -# Message detail model that depends on multiple models -message_detail_model = console_ns.model( - "MessageDetail", - { - "id": fields.String, - "conversation_id": fields.String, - "inputs": FilesContainedField, - "query": fields.String, - "message": fields.Raw, - "message_tokens": fields.Integer, - "answer": fields.String(attribute="re_sign_file_url_answer"), - "answer_tokens": fields.Integer, - "provider_response_latency": fields.Float, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "feedbacks": fields.List(fields.Nested(feedback_model)), - "workflow_run_id": fields.String, - "annotation": fields.Nested(annotation_model, allow_null=True), - "annotation_hit_history": fields.Nested(annotation_hit_history_model, allow_null=True), - "created_at": TimestampField, - "agent_thoughts": fields.List(fields.Nested(agent_thought_model)), - "message_files": fields.List(fields.Nested(message_file_model)), - "metadata": fields.Raw(attribute="message_metadata_dict"), - "status": fields.String, - "error": fields.String, - "parent_message_id": fields.String, - }, -) - -# Conversation models -conversation_fields_model = console_ns.model( - "Conversation", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_end_user_session_id": fields.String(), - "from_account_id": fields.String, - "from_account_name": fields.String, - "read_at": TimestampField, - "created_at": TimestampField, - "updated_at": TimestampField, - "annotation": fields.Nested(annotation_model, allow_null=True), - "model_config": fields.Nested(simple_model_config_model), - "user_feedback_stats": fields.Nested(feedback_stat_model), - "admin_feedback_stats": fields.Nested(feedback_stat_model), - "message": fields.Nested(simple_message_detail_model, attribute="first_message"), - }, -) - -conversation_pagination_model = console_ns.model( - "ConversationPagination", - { - "page": fields.Integer, - "limit": fields.Integer(attribute="per_page"), - "total": fields.Integer, - "has_more": fields.Boolean(attribute="has_next"), - "data": fields.List(fields.Nested(conversation_fields_model), attribute="items"), - }, -) - -conversation_message_detail_model = console_ns.model( - "ConversationMessageDetail", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "created_at": TimestampField, - "model_config": fields.Nested(model_config_model), - "message": fields.Nested(message_detail_model, attribute="first_message"), - }, -) - -conversation_with_summary_model = console_ns.model( - "ConversationWithSummary", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_end_user_session_id": fields.String, - "from_account_id": fields.String, - "from_account_name": fields.String, - "name": fields.String, - "summary": fields.String(attribute="summary_or_query"), - "read_at": TimestampField, - "created_at": TimestampField, - "updated_at": TimestampField, - "annotated": fields.Boolean, - "model_config": fields.Nested(simple_model_config_model), - "message_count": fields.Integer, - "user_feedback_stats": fields.Nested(feedback_stat_model), - "admin_feedback_stats": fields.Nested(feedback_stat_model), - "status_count": fields.Nested(status_count_model), - }, -) - -conversation_with_summary_pagination_model = console_ns.model( - "ConversationWithSummaryPagination", - { - "page": fields.Integer, - "limit": fields.Integer(attribute="per_page"), - "total": fields.Integer, - "has_more": fields.Boolean(attribute="has_next"), - "data": fields.List(fields.Nested(conversation_with_summary_model), attribute="items"), - }, -) - -conversation_detail_model = console_ns.model( - "ConversationDetail", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "created_at": TimestampField, - "updated_at": TimestampField, - "annotated": fields.Boolean, - "introduction": fields.String, - "model_config": fields.Nested(model_config_model), - "message_count": fields.Integer, - "user_feedback_stats": fields.Nested(feedback_stat_model), - "admin_feedback_stats": fields.Nested(feedback_stat_model), - }, +register_schema_models( + console_ns, + CompletionConversationQuery, + ChatConversationQuery, + ConversationResponse, + ConversationPaginationResponse, + ConversationMessageDetailResponse, + ConversationWithSummaryPaginationResponse, + ConversationDetailResponse, + ResultResponse, ) @@ -332,13 +98,12 @@ class CompletionConversationApi(Resource): @console_ns.doc(description="Get completion conversations with pagination and filtering") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[CompletionConversationQuery.__name__]) - @console_ns.response(200, "Success", conversation_pagination_model) + @console_ns.response(200, "Success", console_ns.models[ConversationPaginationResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @setup_required @login_required @account_initialization_required @get_app_model(mode=AppMode.COMPLETION) - @marshal_with(conversation_pagination_model) @edit_permission_required def get(self, app_model): current_user, _ = current_account_with_tenant() @@ -394,7 +159,9 @@ class CompletionConversationApi(Resource): conversations = db.paginate(query, page=args.page, per_page=args.limit, error_out=False) - return conversations + return ConversationPaginationResponse.model_validate(conversations, from_attributes=True).model_dump( + mode="json" + ) @console_ns.route("/apps//completion-conversations/") @@ -402,19 +169,19 @@ class CompletionConversationDetailApi(Resource): @console_ns.doc("get_completion_conversation") @console_ns.doc(description="Get completion conversation details with messages") @console_ns.doc(params={"app_id": "Application ID", "conversation_id": "Conversation ID"}) - @console_ns.response(200, "Success", conversation_message_detail_model) + @console_ns.response(200, "Success", console_ns.models[ConversationMessageDetailResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Conversation not found") @setup_required @login_required @account_initialization_required @get_app_model(mode=AppMode.COMPLETION) - @marshal_with(conversation_message_detail_model) @edit_permission_required def get(self, app_model, conversation_id): conversation_id = str(conversation_id) - - return _get_conversation(app_model, conversation_id) + return ConversationMessageDetailResponse.model_validate( + _get_conversation(app_model, conversation_id), from_attributes=True + ).model_dump(mode="json") @console_ns.doc("delete_completion_conversation") @console_ns.doc(description="Delete a completion conversation") @@ -436,7 +203,7 @@ class CompletionConversationDetailApi(Resource): except ConversationNotExistsError: raise NotFound("Conversation Not Exists.") - return {"result": "success"}, 204 + return ResultResponse(result="success").model_dump(mode="json"), 204 @console_ns.route("/apps//chat-conversations") @@ -445,13 +212,12 @@ class ChatConversationApi(Resource): @console_ns.doc(description="Get chat conversations with pagination, filtering and summary") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[ChatConversationQuery.__name__]) - @console_ns.response(200, "Success", conversation_with_summary_pagination_model) + @console_ns.response(200, "Success", console_ns.models[ConversationWithSummaryPaginationResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]) - @marshal_with(conversation_with_summary_pagination_model) @edit_permission_required def get(self, app_model): current_user, _ = current_account_with_tenant() @@ -546,7 +312,9 @@ class ChatConversationApi(Resource): conversations = db.paginate(query, page=args.page, per_page=args.limit, error_out=False) - return conversations + return ConversationWithSummaryPaginationResponse.model_validate(conversations, from_attributes=True).model_dump( + mode="json" + ) @console_ns.route("/apps//chat-conversations/") @@ -554,19 +322,19 @@ class ChatConversationDetailApi(Resource): @console_ns.doc("get_chat_conversation") @console_ns.doc(description="Get chat conversation details") @console_ns.doc(params={"app_id": "Application ID", "conversation_id": "Conversation ID"}) - @console_ns.response(200, "Success", conversation_detail_model) + @console_ns.response(200, "Success", console_ns.models[ConversationDetailResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Conversation not found") @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]) - @marshal_with(conversation_detail_model) @edit_permission_required def get(self, app_model, conversation_id): conversation_id = str(conversation_id) - - return _get_conversation(app_model, conversation_id) + return ConversationDetailResponse.model_validate( + _get_conversation(app_model, conversation_id), from_attributes=True + ).model_dump(mode="json") @console_ns.doc("delete_chat_conversation") @console_ns.doc(description="Delete a chat conversation") @@ -588,7 +356,7 @@ class ChatConversationDetailApi(Resource): except ConversationNotExistsError: raise NotFound("Conversation Not Exists.") - return {"result": "success"}, 204 + return ResultResponse(result="success").model_dump(mode="json"), 204 def _get_conversation(app_model, conversation_id): diff --git a/api/controllers/console/app/conversation_variables.py b/api/controllers/console/app/conversation_variables.py index 369c26a80c..cead33d14f 100644 --- a/api/controllers/console/app/conversation_variables.py +++ b/api/controllers/console/app/conversation_variables.py @@ -1,44 +1,86 @@ +from __future__ import annotations + +from datetime import datetime +from typing import Any + from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from sqlalchemy.orm import sessionmaker +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, setup_required from extensions.ext_database import db -from fields.conversation_variable_fields import ( - conversation_variable_fields, - paginated_conversation_variable_fields, -) +from fields._value_type_serializer import serialize_value_type +from fields.base import ResponseModel from libs.login import login_required from models import ConversationVariable from models.model import AppMode -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class ConversationVariablesQuery(BaseModel): conversation_id: str = Field(..., description="Conversation ID to filter variables") -console_ns.schema_model( - ConversationVariablesQuery.__name__, - ConversationVariablesQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), -) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -# Register models for flask_restx to avoid dict type issues in Swagger -# Register base model first -conversation_variable_model = console_ns.model("ConversationVariable", conversation_variable_fields) -# For nested models, need to replace nested dict with registered model -paginated_conversation_variable_fields_copy = paginated_conversation_variable_fields.copy() -paginated_conversation_variable_fields_copy["data"] = fields.List( - fields.Nested(conversation_variable_model), attribute="data" -) -paginated_conversation_variable_model = console_ns.model( - "PaginatedConversationVariable", paginated_conversation_variable_fields_copy +class ConversationVariableResponse(ResponseModel): + id: str + name: str + value_type: str + value: str | None = None + description: str | None = None + created_at: int | None = None + updated_at: int | None = None + + @field_validator("value_type", mode="before") + @classmethod + def _normalize_value_type(cls, value: Any) -> str: + exposed_type = getattr(value, "exposed_type", None) + if callable(exposed_type): + return str(exposed_type().value) + if isinstance(value, str): + return value + try: + return serialize_value_type(value) + except Exception: + return serialize_value_type({"value_type": value}) + + @field_validator("value", mode="before") + @classmethod + def _normalize_value(cls, value: Any | None) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(value) + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class PaginatedConversationVariableResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[ConversationVariableResponse] + + +register_schema_models( + console_ns, + ConversationVariablesQuery, + ConversationVariableResponse, + PaginatedConversationVariableResponse, ) @@ -48,12 +90,15 @@ class ConversationVariablesApi(Resource): @console_ns.doc(description="Get conversation variables for an application") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[ConversationVariablesQuery.__name__]) - @console_ns.response(200, "Conversation variables retrieved successfully", paginated_conversation_variable_model) + @console_ns.response( + 200, + "Conversation variables retrieved successfully", + console_ns.models[PaginatedConversationVariableResponse.__name__], + ) @setup_required @login_required @account_initialization_required @get_app_model(mode=AppMode.ADVANCED_CHAT) - @marshal_with(paginated_conversation_variable_model) def get(self, app_model): args = ConversationVariablesQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore @@ -72,17 +117,22 @@ class ConversationVariablesApi(Resource): with sessionmaker(db.engine, expire_on_commit=False).begin() as session: rows = session.scalars(stmt).all() - return { - "page": page, - "limit": page_size, - "total": len(rows), - "has_more": False, - "data": [ - { - "created_at": row.created_at, - "updated_at": row.updated_at, - **row.to_variable().model_dump(), - } - for row in rows - ], - } + response = PaginatedConversationVariableResponse.model_validate( + { + "page": page, + "limit": page_size, + "total": len(rows), + "has_more": False, + "data": [ + ConversationVariableResponse.model_validate( + { + "created_at": row.created_at, + "updated_at": row.updated_at, + **row.to_variable().model_dump(), + } + ) + for row in rows + ], + } + ) + return response.model_dump(mode="json") diff --git a/api/controllers/console/app/generator.py b/api/controllers/console/app/generator.py index 7101d5df7b..c720a5e074 100644 --- a/api/controllers/console/app/generator.py +++ b/api/controllers/console/app/generator.py @@ -1,7 +1,6 @@ from collections.abc import Sequence from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field from controllers.console import console_ns @@ -20,6 +19,7 @@ from core.helper.code_executor.python3.python3_code_provider import Python3CodeP from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload from core.llm_generator.llm_generator import LLMGenerator from extensions.ext_database import db +from graphon.model_runtime.errors.invoke import InvokeError from libs.login import current_account_with_tenant, login_required from models import App from services.workflow_service import WorkflowService diff --git a/api/controllers/console/app/mcp_server.py b/api/controllers/console/app/mcp_server.py index 412fc8795a..d517f695b8 100644 --- a/api/controllers/console/app/mcp_server.py +++ b/api/controllers/console/app/mcp_server.py @@ -1,39 +1,68 @@ import json +from datetime import datetime +from typing import Any -from flask_restx import Resource, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from werkzeug.exceptions import NotFound +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required from extensions.ext_database import db -from fields.app_fields import app_server_fields +from fields.base import ResponseModel from libs.login import current_account_with_tenant, login_required from models.enums import AppMCPServerStatus from models.model import AppMCPServer -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - -# Register model for flask_restx to avoid dict type issues in Swagger -app_server_model = console_ns.model("AppServer", app_server_fields) - class MCPServerCreatePayload(BaseModel): description: str | None = Field(default=None, description="Server description") - parameters: dict = Field(..., description="Server parameters configuration") + parameters: dict[str, Any] = Field(..., description="Server parameters configuration") class MCPServerUpdatePayload(BaseModel): id: str = Field(..., description="Server ID") description: str | None = Field(default=None, description="Server description") - parameters: dict = Field(..., description="Server parameters configuration") + parameters: dict[str, Any] = Field(..., description="Server parameters configuration") status: str | None = Field(default=None, description="Server status") -for model in (MCPServerCreatePayload, MCPServerUpdatePayload): - console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class AppMCPServerResponse(ResponseModel): + id: str + name: str + server_code: str + description: str + status: AppMCPServerStatus + parameters: dict[str, Any] | list[Any] | str + created_at: int | None = None + updated_at: int | None = None + + @field_validator("parameters", mode="before") + @classmethod + def _normalize_parameters(cls, value: Any) -> Any: + if isinstance(value, str): + try: + return json.loads(value) + except (json.JSONDecodeError, TypeError): + return value + return value + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +register_schema_models(console_ns, MCPServerCreatePayload, MCPServerUpdatePayload, AppMCPServerResponse) @console_ns.route("/apps//server") @@ -41,27 +70,31 @@ class AppMCPServerController(Resource): @console_ns.doc("get_app_mcp_server") @console_ns.doc(description="Get MCP server configuration for an application") @console_ns.doc(params={"app_id": "Application ID"}) - @console_ns.response(200, "MCP server configuration retrieved successfully", app_server_model) + @console_ns.response( + 200, "MCP server configuration retrieved successfully", console_ns.models[AppMCPServerResponse.__name__] + ) @login_required @account_initialization_required @setup_required @get_app_model - @marshal_with(app_server_model) def get(self, app_model): server = db.session.scalar(select(AppMCPServer).where(AppMCPServer.app_id == app_model.id).limit(1)) - return server + if server is None: + return {} + return AppMCPServerResponse.model_validate(server, from_attributes=True).model_dump(mode="json") @console_ns.doc("create_app_mcp_server") @console_ns.doc(description="Create MCP server configuration for an application") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[MCPServerCreatePayload.__name__]) - @console_ns.response(201, "MCP server configuration created successfully", app_server_model) + @console_ns.response( + 201, "MCP server configuration created successfully", console_ns.models[AppMCPServerResponse.__name__] + ) @console_ns.response(403, "Insufficient permissions") @account_initialization_required @get_app_model @login_required @setup_required - @marshal_with(app_server_model) @edit_permission_required def post(self, app_model): _, current_tenant_id = current_account_with_tenant() @@ -82,20 +115,21 @@ class AppMCPServerController(Resource): ) db.session.add(server) db.session.commit() - return server + return AppMCPServerResponse.model_validate(server, from_attributes=True).model_dump(mode="json"), 201 @console_ns.doc("update_app_mcp_server") @console_ns.doc(description="Update MCP server configuration for an application") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[MCPServerUpdatePayload.__name__]) - @console_ns.response(200, "MCP server configuration updated successfully", app_server_model) + @console_ns.response( + 200, "MCP server configuration updated successfully", console_ns.models[AppMCPServerResponse.__name__] + ) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Server not found") @get_app_model @login_required @setup_required @account_initialization_required - @marshal_with(app_server_model) @edit_permission_required def put(self, app_model): payload = MCPServerUpdatePayload.model_validate(console_ns.payload or {}) @@ -118,7 +152,7 @@ class AppMCPServerController(Resource): except ValueError: raise ValueError("Invalid status") db.session.commit() - return server + return AppMCPServerResponse.model_validate(server, from_attributes=True).model_dump(mode="json") @console_ns.route("/apps//server/refresh") @@ -126,13 +160,12 @@ class AppMCPServerRefreshController(Resource): @console_ns.doc("refresh_app_mcp_server") @console_ns.doc(description="Refresh MCP server configuration and regenerate server code") @console_ns.doc(params={"server_id": "Server ID"}) - @console_ns.response(200, "MCP server refreshed successfully", app_server_model) + @console_ns.response(200, "MCP server refreshed successfully", console_ns.models[AppMCPServerResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Server not found") @setup_required @login_required @account_initialization_required - @marshal_with(app_server_model) @edit_permission_required def get(self, server_id): _, current_tenant_id = current_account_with_tenant() @@ -145,4 +178,4 @@ class AppMCPServerRefreshController(Resource): raise NotFound() server.server_code = AppMCPServer.generate_server_code(16) db.session.commit() - return server + return AppMCPServerResponse.model_validate(server, from_attributes=True).model_dump(mode="json") diff --git a/api/controllers/console/app/message.py b/api/controllers/console/app/message.py index 2afe276742..44e19b57db 100644 --- a/api/controllers/console/app/message.py +++ b/api/controllers/console/app/message.py @@ -1,13 +1,14 @@ import logging +from datetime import datetime from typing import Literal from flask import request -from flask_restx import Resource, fields, marshal_with -from graphon.model_runtime.errors.invoke import InvokeError +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy import exists, func, select from werkzeug.exceptions import InternalServerError, NotFound +from controllers.common.controller_schemas import MessageFeedbackPayload as _MessageFeedbackPayloadBase from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.error import ( @@ -24,10 +25,22 @@ from controllers.console.wraps import ( setup_required, ) from core.app.entities.app_invoke_entities import InvokeFrom +from core.entities.execution_extra_content import ExecutionExtraContentDomainModel from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from extensions.ext_database import db -from fields.raws import FilesContainedField -from libs.helper import TimestampField, uuid_value +from fields.base import ResponseModel +from fields.conversation_fields import ( + AgentThought, + ConversationAnnotation, + ConversationAnnotationHitHistory, + Feedback, + JSONValue, + MessageFile, + format_files_contained, + to_timestamp, +) +from graphon.model_runtime.errors.invoke import InvokeError +from libs.helper import uuid_value from libs.infinite_scroll_pagination import InfiniteScrollPagination from libs.login import current_account_with_tenant, login_required from models.enums import FeedbackFromSource, FeedbackRating @@ -59,10 +72,8 @@ class ChatMessagesQuery(BaseModel): return uuid_value(value) -class MessageFeedbackPayload(BaseModel): +class MessageFeedbackPayload(_MessageFeedbackPayloadBase): message_id: str = Field(..., description="Message ID") - rating: Literal["like", "dislike"] | None = Field(default=None, description="Feedback rating") - content: str | None = Field(default=None, description="Feedback content") @field_validator("message_id") @classmethod @@ -99,6 +110,51 @@ class SuggestedQuestionsResponse(BaseModel): data: list[str] = Field(description="Suggested question") +class MessageDetailResponse(ResponseModel): + id: str + conversation_id: str + inputs: dict[str, JSONValue] + query: str + message: JSONValue | None = None + message_tokens: int | None = None + answer: str = Field(validation_alias="re_sign_file_url_answer") + answer_tokens: int | None = None + provider_response_latency: float | None = None + from_source: str + from_end_user_id: str | None = None + from_account_id: str | None = None + feedbacks: list[Feedback] = Field(default_factory=list) + workflow_run_id: str | None = None + annotation: ConversationAnnotation | None = None + annotation_hit_history: ConversationAnnotationHitHistory | None = None + created_at: int | None = None + agent_thoughts: list[AgentThought] = Field(default_factory=list) + message_files: list[MessageFile] = Field(default_factory=list) + extra_contents: list[ExecutionExtraContentDomainModel] = Field(default_factory=list) + metadata: JSONValue | None = Field(default=None, validation_alias="message_metadata_dict") + status: str + error: str | None = None + parent_message_id: str | None = None + + @field_validator("inputs", mode="before") + @classmethod + def _normalize_inputs(cls, value: JSONValue) -> JSONValue: + return format_files_contained(value) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value + + +class MessageInfiniteScrollPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[MessageDetailResponse] + + register_schema_models( console_ns, ChatMessagesQuery, @@ -106,124 +162,8 @@ register_schema_models( FeedbackExportQuery, AnnotationCountResponse, SuggestedQuestionsResponse, -) - -# Register models for flask_restx to avoid dict type issues in Swagger -# Register in dependency order: base models first, then dependent models - -# Base models -simple_account_model = console_ns.model( - "SimpleAccount", - { - "id": fields.String, - "name": fields.String, - "email": fields.String, - }, -) - -message_file_model = console_ns.model( - "MessageFile", - { - "id": fields.String, - "filename": fields.String, - "type": fields.String, - "url": fields.String, - "mime_type": fields.String, - "size": fields.Integer, - "transfer_method": fields.String, - "belongs_to": fields.String(default="user"), - "upload_file_id": fields.String(default=None), - }, -) - -agent_thought_model = console_ns.model( - "AgentThought", - { - "id": fields.String, - "chain_id": fields.String, - "message_id": fields.String, - "position": fields.Integer, - "thought": fields.String, - "tool": fields.String, - "tool_labels": fields.Raw, - "tool_input": fields.String, - "created_at": TimestampField, - "observation": fields.String, - "files": fields.List(fields.String), - }, -) - -# Models that depend on simple_account_model -feedback_model = console_ns.model( - "Feedback", - { - "rating": fields.String, - "content": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account": fields.Nested(simple_account_model, allow_null=True), - }, -) - -annotation_model = console_ns.model( - "Annotation", - { - "id": fields.String, - "question": fields.String, - "content": fields.String, - "account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - -annotation_hit_history_model = console_ns.model( - "AnnotationHitHistory", - { - "annotation_id": fields.String(attribute="id"), - "annotation_create_account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - -# Message detail model that depends on multiple models -message_detail_model = console_ns.model( - "MessageDetail", - { - "id": fields.String, - "conversation_id": fields.String, - "inputs": FilesContainedField, - "query": fields.String, - "message": fields.Raw, - "message_tokens": fields.Integer, - "answer": fields.String(attribute="re_sign_file_url_answer"), - "answer_tokens": fields.Integer, - "provider_response_latency": fields.Float, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "feedbacks": fields.List(fields.Nested(feedback_model)), - "workflow_run_id": fields.String, - "annotation": fields.Nested(annotation_model, allow_null=True), - "annotation_hit_history": fields.Nested(annotation_hit_history_model, allow_null=True), - "created_at": TimestampField, - "agent_thoughts": fields.List(fields.Nested(agent_thought_model)), - "message_files": fields.List(fields.Nested(message_file_model)), - "extra_contents": fields.List(fields.Raw), - "metadata": fields.Raw(attribute="message_metadata_dict"), - "status": fields.String, - "error": fields.String, - "parent_message_id": fields.String, - }, -) - -# Message infinite scroll pagination model -message_infinite_scroll_pagination_model = console_ns.model( - "MessageInfiniteScrollPagination", - { - "limit": fields.Integer, - "has_more": fields.Boolean, - "data": fields.List(fields.Nested(message_detail_model)), - }, + MessageDetailResponse, + MessageInfiniteScrollPaginationResponse, ) @@ -233,13 +173,12 @@ class ChatMessageListApi(Resource): @console_ns.doc(description="Get chat messages for a conversation with pagination") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[ChatMessagesQuery.__name__]) - @console_ns.response(200, "Success", message_infinite_scroll_pagination_model) + @console_ns.response(200, "Success", console_ns.models[MessageInfiniteScrollPaginationResponse.__name__]) @console_ns.response(404, "Conversation not found") @login_required @account_initialization_required @setup_required @get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]) - @marshal_with(message_infinite_scroll_pagination_model) @edit_permission_required def get(self, app_model): args = ChatMessagesQuery.model_validate(request.args.to_dict()) @@ -299,7 +238,10 @@ class ChatMessageListApi(Resource): history_messages = list(reversed(history_messages)) attach_message_extra_contents(history_messages) - return InfiniteScrollPagination(data=history_messages, limit=args.limit, has_more=has_more) + return MessageInfiniteScrollPaginationResponse.model_validate( + InfiniteScrollPagination(data=history_messages, limit=args.limit, has_more=has_more), + from_attributes=True, + ).model_dump(mode="json") @console_ns.route("/apps//feedbacks") @@ -469,13 +411,12 @@ class MessageApi(Resource): @console_ns.doc("get_message") @console_ns.doc(description="Get message details by ID") @console_ns.doc(params={"app_id": "Application ID", "message_id": "Message ID"}) - @console_ns.response(200, "Message retrieved successfully", message_detail_model) + @console_ns.response(200, "Message retrieved successfully", console_ns.models[MessageDetailResponse.__name__]) @console_ns.response(404, "Message not found") @get_app_model @setup_required @login_required @account_initialization_required - @marshal_with(message_detail_model) def get(self, app_model, message_id: str): message_id = str(message_id) @@ -487,4 +428,4 @@ class MessageApi(Resource): raise NotFound("Message Not Exists.") attach_message_extra_contents([message]) - return message + return MessageDetailResponse.model_validate(message, from_attributes=True).model_dump(mode="json") diff --git a/api/controllers/console/app/model_config.py b/api/controllers/console/app/model_config.py index 8bb5aa2c1b..1869cbf5f6 100644 --- a/api/controllers/console/app/model_config.py +++ b/api/controllers/console/app/model_config.py @@ -1,9 +1,11 @@ import json -from typing import cast +from typing import Any, cast from flask import request -from flask_restx import Resource, fields +from flask_restx import Resource +from pydantic import BaseModel, Field +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required @@ -18,30 +20,30 @@ from models.model import AppMode, AppModelConfig from services.app_model_config_service import AppModelConfigService +class ModelConfigRequest(BaseModel): + provider: str | None = Field(default=None, description="Model provider") + model: str | None = Field(default=None, description="Model name") + configs: dict[str, Any] | None = Field(default=None, description="Model configuration parameters") + opening_statement: str | None = Field(default=None, description="Opening statement") + suggested_questions: list[str] | None = Field(default=None, description="Suggested questions") + more_like_this: dict[str, Any] | None = Field(default=None, description="More like this configuration") + speech_to_text: dict[str, Any] | None = Field(default=None, description="Speech to text configuration") + text_to_speech: dict[str, Any] | None = Field(default=None, description="Text to speech configuration") + retrieval_model: dict[str, Any] | None = Field(default=None, description="Retrieval model configuration") + tools: list[dict[str, Any]] | None = Field(default=None, description="Available tools") + dataset_configs: dict[str, Any] | None = Field(default=None, description="Dataset configurations") + agent_mode: dict[str, Any] | None = Field(default=None, description="Agent mode configuration") + + +register_schema_models(console_ns, ModelConfigRequest) + + @console_ns.route("/apps//model-config") class ModelConfigResource(Resource): @console_ns.doc("update_app_model_config") @console_ns.doc(description="Update application model configuration") @console_ns.doc(params={"app_id": "Application ID"}) - @console_ns.expect( - console_ns.model( - "ModelConfigRequest", - { - "provider": fields.String(description="Model provider"), - "model": fields.String(description="Model name"), - "configs": fields.Raw(description="Model configuration parameters"), - "opening_statement": fields.String(description="Opening statement"), - "suggested_questions": fields.List(fields.String(), description="Suggested questions"), - "more_like_this": fields.Raw(description="More like this configuration"), - "speech_to_text": fields.Raw(description="Speech to text configuration"), - "text_to_speech": fields.Raw(description="Text to speech configuration"), - "retrieval_model": fields.Raw(description="Retrieval model configuration"), - "tools": fields.List(fields.Raw(), description="Available tools"), - "dataset_configs": fields.Raw(description="Dataset configurations"), - "agent_mode": fields.Raw(description="Agent mode configuration"), - }, - ) - ) + @console_ns.expect(console_ns.models[ModelConfigRequest.__name__]) @console_ns.response(200, "Model configuration updated successfully") @console_ns.response(400, "Invalid configuration") @console_ns.response(404, "App not found") diff --git a/api/controllers/console/app/site.py b/api/controllers/console/app/site.py index 7f44a99ff1..9991d78d94 100644 --- a/api/controllers/console/app/site.py +++ b/api/controllers/console/app/site.py @@ -1,11 +1,12 @@ from typing import Literal -from flask_restx import Resource, marshal_with +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from werkzeug.exceptions import NotFound from constants.languages import supported_language +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import ( @@ -15,13 +16,11 @@ from controllers.console.wraps import ( setup_required, ) from extensions.ext_database import db -from fields.app_fields import app_site_fields +from fields.base import ResponseModel from libs.datetime_utils import naive_utc_now from libs.login import current_account_with_tenant, login_required from models import Site -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class AppSiteUpdatePayload(BaseModel): title: str | None = Field(default=None) @@ -49,13 +48,26 @@ class AppSiteUpdatePayload(BaseModel): return supported_language(value) -console_ns.schema_model( - AppSiteUpdatePayload.__name__, - AppSiteUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), -) +class AppSiteResponse(ResponseModel): + app_id: str + access_token: str | None = Field(default=None, validation_alias="code") + code: str | None = None + title: str + icon: str | None = None + icon_background: str | None = None + description: str | None = None + default_language: str + customize_domain: str | None = None + copyright: str | None = None + privacy_policy: str | None = None + custom_disclaimer: str | None = None + customize_token_strategy: str + prompt_public: bool + show_workflow_steps: bool + use_icon_as_answer_icon: bool -# Register model for flask_restx to avoid dict type issues in Swagger -app_site_model = console_ns.model("AppSite", app_site_fields) + +register_schema_models(console_ns, AppSiteUpdatePayload, AppSiteResponse) @console_ns.route("/apps//site") @@ -64,7 +76,7 @@ class AppSite(Resource): @console_ns.doc(description="Update application site configuration") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[AppSiteUpdatePayload.__name__]) - @console_ns.response(200, "Site configuration updated successfully", app_site_model) + @console_ns.response(200, "Site configuration updated successfully", console_ns.models[AppSiteResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "App not found") @setup_required @@ -72,7 +84,6 @@ class AppSite(Resource): @edit_permission_required @account_initialization_required @get_app_model - @marshal_with(app_site_model) def post(self, app_model): args = AppSiteUpdatePayload.model_validate(console_ns.payload or {}) current_user, _ = current_account_with_tenant() @@ -106,7 +117,7 @@ class AppSite(Resource): site.updated_at = naive_utc_now() db.session.commit() - return site + return AppSiteResponse.model_validate(site, from_attributes=True).model_dump(mode="json") @console_ns.route("/apps//site/access-token-reset") @@ -114,7 +125,7 @@ class AppSiteAccessTokenReset(Resource): @console_ns.doc("reset_app_site_access_token") @console_ns.doc(description="Reset access token for application site") @console_ns.doc(params={"app_id": "Application ID"}) - @console_ns.response(200, "Access token reset successfully", app_site_model) + @console_ns.response(200, "Access token reset successfully", console_ns.models[AppSiteResponse.__name__]) @console_ns.response(403, "Insufficient permissions (admin/owner required)") @console_ns.response(404, "App or site not found") @setup_required @@ -122,7 +133,6 @@ class AppSiteAccessTokenReset(Resource): @is_admin_or_owner_required @account_initialization_required @get_app_model - @marshal_with(app_site_model) def post(self, app_model): current_user, _ = current_account_with_tenant() site = db.session.scalar(select(Site).where(Site.app_id == app_model.id).limit(1)) @@ -135,4 +145,4 @@ class AppSiteAccessTokenReset(Resource): site.updated_at = naive_utc_now() db.session.commit() - return site + return AppSiteResponse.model_validate(site, from_attributes=True).model_dump(mode="json") diff --git a/api/controllers/console/app/workflow.py b/api/controllers/console/app/workflow.py index dcd24d2200..478f783eb0 100644 --- a/api/controllers/console/app/workflow.py +++ b/api/controllers/console/app/workflow.py @@ -4,16 +4,13 @@ from collections.abc import Sequence from typing import Any from flask import abort, request -from flask_restx import Resource, fields, marshal_with -from graphon.enums import NodeType -from graphon.file import File -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.utils.encoders import jsonable_encoder +from flask_restx import Resource, fields, marshal, marshal_with from pydantic import BaseModel, Field, ValidationError, field_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, Forbidden, InternalServerError, NotFound import services +from controllers.common.controller_schemas import DefaultBlockConfigQuery, WorkflowListQuery, WorkflowUpdatePayload from controllers.console import console_ns from controllers.console.app.error import ConversationCompletedError, DraftWorkflowNotExist, DraftWorkflowNotSync from controllers.console.app.workflow_run import workflow_run_node_execution_model @@ -38,7 +35,13 @@ from extensions.ext_database import db from extensions.ext_redis import redis_client from factories import file_factory, variable_factory from fields.member_fields import simple_account_fields +from fields.online_user_fields import online_user_list_fields from fields.workflow_fields import workflow_fields, workflow_pagination_fields +from graphon.enums import NodeType +from graphon.file import File +from graphon.file import helpers as file_helpers +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs import helper from libs.datetime_utils import naive_utc_now from libs.helper import TimestampField, uuid_value @@ -46,6 +49,7 @@ from libs.login import current_account_with_tenant, login_required from models import App from models.model import AppMode from models.workflow import Workflow +from repositories.workflow_collaboration_repository import WORKFLOW_ONLINE_USERS_PREFIX from services.app_generate_service import AppGenerateService from services.errors.app import IsDraftWorkflowError, WorkflowHashNotEqualError, WorkflowNotFoundError from services.errors.llm import InvokeRateLimitError @@ -56,6 +60,7 @@ _file_access_controller = DatabaseFileAccessController() LISTENING_RETRY_IN = 2000 DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" RESTORE_SOURCE_WORKFLOW_MUST_BE_PUBLISHED_MESSAGE = "source workflow must be published" +MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS = 50 # Register models for flask_restx to avoid dict type issues in Swagger # Register in dependency order: base models first, then dependent models @@ -142,10 +147,6 @@ class PublishWorkflowPayload(BaseModel): marked_comment: str | None = Field(default=None, max_length=100) -class DefaultBlockConfigQuery(BaseModel): - q: str | None = None - - class ConvertToWorkflowPayload(BaseModel): name: str | None = None icon_type: str | None = None @@ -153,16 +154,12 @@ class ConvertToWorkflowPayload(BaseModel): icon_background: str | None = None -class WorkflowListQuery(BaseModel): - page: int = Field(default=1, ge=1, le=99999) - limit: int = Field(default=10, ge=1, le=100) - user_id: str | None = None - named_only: bool = False +class WorkflowFeaturesPayload(BaseModel): + features: dict[str, Any] = Field(..., description="Workflow feature configuration") -class WorkflowUpdatePayload(BaseModel): - marked_name: str | None = Field(default=None, max_length=20) - marked_comment: str | None = Field(default=None, max_length=100) +class WorkflowOnlineUsersQuery(BaseModel): + app_ids: str = Field(..., description="Comma-separated app IDs") class DraftWorkflowTriggerRunPayload(BaseModel): @@ -188,6 +185,8 @@ reg(DefaultBlockConfigQuery) reg(ConvertToWorkflowPayload) reg(WorkflowListQuery) reg(WorkflowUpdatePayload) +reg(WorkflowFeaturesPayload) +reg(WorkflowOnlineUsersQuery) reg(DraftWorkflowTriggerRunPayload) reg(DraftWorkflowTriggerRunAllPayload) @@ -946,6 +945,32 @@ class ConvertToWorkflowApi(Resource): } +@console_ns.route("/apps//workflows/draft/features") +class WorkflowFeaturesApi(Resource): + """Update draft workflow features.""" + + @console_ns.expect(console_ns.models[WorkflowFeaturesPayload.__name__]) + @console_ns.doc("update_workflow_features") + @console_ns.doc(description="Update draft workflow features") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Workflow features updated successfully") + @setup_required + @login_required + @account_initialization_required + @get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW]) + @edit_permission_required + def post(self, app_model: App): + current_user, _ = current_account_with_tenant() + + args = WorkflowFeaturesPayload.model_validate(console_ns.payload or {}) + features = args.features + + workflow_service = WorkflowService() + workflow_service.update_draft_workflow_features(app_model=app_model, features=features, account=current_user) + + return {"result": "success"} + + @console_ns.route("/apps//workflows") class PublishedAllWorkflowApi(Resource): @console_ns.expect(console_ns.models[WorkflowListQuery.__name__]) @@ -957,7 +982,6 @@ class PublishedAllWorkflowApi(Resource): @login_required @account_initialization_required @get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW]) - @marshal_with(workflow_pagination_model) @edit_permission_required def get(self, app_model: App): """ @@ -985,9 +1009,10 @@ class PublishedAllWorkflowApi(Resource): user_id=user_id, named_only=named_only, ) + serialized_workflows = marshal(workflows, workflow_fields_copy) return { - "items": workflows, + "items": serialized_workflows, "page": page, "limit": limit, "has_more": has_more, @@ -1355,3 +1380,62 @@ class DraftWorkflowTriggerRunAllApi(Resource): "status": "error", } ), 400 + + +@console_ns.route("/apps/workflows/online-users") +class WorkflowOnlineUsersApi(Resource): + @console_ns.expect(console_ns.models[WorkflowOnlineUsersQuery.__name__]) + @console_ns.doc("get_workflow_online_users") + @console_ns.doc(description="Get workflow online users") + @setup_required + @login_required + @account_initialization_required + @marshal_with(online_user_list_fields) + def get(self): + args = WorkflowOnlineUsersQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore + + app_ids = list(dict.fromkeys(app_id.strip() for app_id in args.app_ids.split(",") if app_id.strip())) + if len(app_ids) > MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS: + raise BadRequest(f"Maximum {MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS} app_ids are allowed per request.") + + if not app_ids: + return {"data": []} + + _, current_tenant_id = current_account_with_tenant() + workflow_service = WorkflowService() + accessible_app_ids = workflow_service.get_accessible_app_ids(app_ids, current_tenant_id) + + results = [] + for app_id in app_ids: + if app_id not in accessible_app_ids: + continue + + users_json = redis_client.hgetall(f"{WORKFLOW_ONLINE_USERS_PREFIX}{app_id}") + + users = [] + for _, user_info_json in users_json.items(): + try: + user_info = json.loads(user_info_json) + except Exception: + continue + + if not isinstance(user_info, dict): + continue + + avatar = user_info.get("avatar") + if isinstance(avatar, str) and avatar and not avatar.startswith(("http://", "https://")): + try: + user_info["avatar"] = file_helpers.get_signed_file_url(avatar) + except Exception as exc: + logger.warning( + "Failed to sign workflow online user avatar; using original value. " + "app_id=%s avatar=%s error=%s", + app_id, + avatar, + exc, + ) + + users.append(user_info) + results.append({"app_id": app_id, "users": users}) + + return {"data": results} diff --git a/api/controllers/console/app/workflow_app_log.py b/api/controllers/console/app/workflow_app_log.py index 3b24c2a402..4b39590235 100644 --- a/api/controllers/console/app/workflow_app_log.py +++ b/api/controllers/console/app/workflow_app_log.py @@ -1,27 +1,26 @@ from datetime import datetime +from typing import Any from dateutil.parser import isoparse from flask import request -from flask_restx import Resource, marshal_with -from graphon.enums import WorkflowExecutionStatus +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy.orm import sessionmaker +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, setup_required from extensions.ext_database import db -from fields.workflow_app_log_fields import ( - build_workflow_app_log_pagination_model, - build_workflow_archived_log_pagination_model, -) +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser +from fields.member_fields import SimpleAccount +from graphon.enums import WorkflowExecutionStatus from libs.login import login_required from models import App from models.model import AppMode from services.workflow_app_service import WorkflowAppService -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class WorkflowAppLogQuery(BaseModel): keyword: str | None = Field(default=None, description="Search keyword for filtering logs") @@ -58,13 +57,113 @@ class WorkflowAppLogQuery(BaseModel): raise ValueError("Invalid boolean value for detail") -console_ns.schema_model( - WorkflowAppLogQuery.__name__, WorkflowAppLogQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0) -) +class WorkflowRunForLogResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + triggered_from: str | None = None + error: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None -# Register model for flask_restx to avoid dict type issues in Swagger -workflow_app_log_pagination_model = build_workflow_app_log_pagination_model(console_ns) -workflow_archived_log_pagination_model = build_workflow_archived_log_pagination_model(console_ns) + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowRunForArchivedLogResponse(ResponseModel): + id: str + status: str | None = None + triggered_from: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + +class WorkflowAppLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForLogResponse | None = None + details: Any = None + created_from: str | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowArchivedLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForArchivedLogResponse | None = None + trigger_metadata: Any = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowAppLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowAppLogPartialResponse] + + +class WorkflowArchivedLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowArchivedLogPartialResponse] + + +register_schema_models( + console_ns, + WorkflowAppLogQuery, + WorkflowRunForLogResponse, + WorkflowRunForArchivedLogResponse, + WorkflowAppLogPartialResponse, + WorkflowArchivedLogPartialResponse, + WorkflowAppLogPaginationResponse, + WorkflowArchivedLogPaginationResponse, +) @console_ns.route("/apps//workflow-app-logs") @@ -73,12 +172,15 @@ class WorkflowAppLogApi(Resource): @console_ns.doc(description="Get workflow application execution logs") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[WorkflowAppLogQuery.__name__]) - @console_ns.response(200, "Workflow app logs retrieved successfully", workflow_app_log_pagination_model) + @console_ns.response( + 200, + "Workflow app logs retrieved successfully", + console_ns.models[WorkflowAppLogPaginationResponse.__name__], + ) @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.WORKFLOW]) - @marshal_with(workflow_app_log_pagination_model) def get(self, app_model: App): """ Get workflow app logs @@ -87,7 +189,7 @@ class WorkflowAppLogApi(Resource): # get paginate workflow app logs workflow_app_service = WorkflowAppService() - with sessionmaker(db.engine).begin() as session: + with sessionmaker(db.engine, expire_on_commit=False).begin() as session: workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_app_logs( session=session, app_model=app_model, @@ -102,7 +204,9 @@ class WorkflowAppLogApi(Resource): created_by_account=args.created_by_account, ) - return workflow_app_log_pagination + return WorkflowAppLogPaginationResponse.model_validate( + workflow_app_log_pagination, from_attributes=True + ).model_dump(mode="json") @console_ns.route("/apps//workflow-archived-logs") @@ -111,12 +215,15 @@ class WorkflowArchivedLogApi(Resource): @console_ns.doc(description="Get workflow archived execution logs") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[WorkflowAppLogQuery.__name__]) - @console_ns.response(200, "Workflow archived logs retrieved successfully", workflow_archived_log_pagination_model) + @console_ns.response( + 200, + "Workflow archived logs retrieved successfully", + console_ns.models[WorkflowArchivedLogPaginationResponse.__name__], + ) @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.WORKFLOW]) - @marshal_with(workflow_archived_log_pagination_model) def get(self, app_model: App): """ Get workflow archived logs @@ -124,7 +231,7 @@ class WorkflowArchivedLogApi(Resource): args = WorkflowAppLogQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore workflow_app_service = WorkflowAppService() - with sessionmaker(db.engine).begin() as session: + with sessionmaker(db.engine, expire_on_commit=False).begin() as session: workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_archive_logs( session=session, app_model=app_model, @@ -132,4 +239,6 @@ class WorkflowArchivedLogApi(Resource): limit=args.limit, ) - return workflow_app_log_pagination + return WorkflowArchivedLogPaginationResponse.model_validate( + workflow_app_log_pagination, from_attributes=True + ).model_dump(mode="json") diff --git a/api/controllers/console/app/workflow_comment.py b/api/controllers/console/app/workflow_comment.py new file mode 100644 index 0000000000..e7c3e982a6 --- /dev/null +++ b/api/controllers/console/app/workflow_comment.py @@ -0,0 +1,335 @@ +import logging + +from flask_restx import Resource, marshal_with +from pydantic import BaseModel, Field, TypeAdapter + +from controllers.common.schema import register_schema_models +from controllers.console import console_ns +from controllers.console.app.wraps import get_app_model +from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required +from fields.member_fields import AccountWithRole +from fields.workflow_comment_fields import ( + workflow_comment_basic_fields, + workflow_comment_create_fields, + workflow_comment_detail_fields, + workflow_comment_reply_create_fields, + workflow_comment_reply_update_fields, + workflow_comment_resolve_fields, + workflow_comment_update_fields, +) +from libs.login import current_user, login_required +from models import App +from services.account_service import TenantService +from services.workflow_comment_service import WorkflowCommentService + +logger = logging.getLogger(__name__) +DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" + + +class WorkflowCommentCreatePayload(BaseModel): + content: str = Field(..., description="Comment content") + position_x: float = Field(..., description="Comment X position") + position_y: float = Field(..., description="Comment Y position") + mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs") + + +class WorkflowCommentUpdatePayload(BaseModel): + content: str = Field(..., description="Comment content") + position_x: float | None = Field(default=None, description="Comment X position") + position_y: float | None = Field(default=None, description="Comment Y position") + mentioned_user_ids: list[str] | None = Field( + default=None, + description="Mentioned user IDs. Omit to keep existing mentions.", + ) + + +class WorkflowCommentReplyPayload(BaseModel): + content: str = Field(..., description="Reply content") + mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs") + + +class WorkflowCommentMentionUsersPayload(BaseModel): + users: list[AccountWithRole] + + +for model in ( + WorkflowCommentCreatePayload, + WorkflowCommentUpdatePayload, + WorkflowCommentReplyPayload, +): + console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +register_schema_models(console_ns, AccountWithRole, WorkflowCommentMentionUsersPayload) + +workflow_comment_basic_model = console_ns.model("WorkflowCommentBasic", workflow_comment_basic_fields) +workflow_comment_detail_model = console_ns.model("WorkflowCommentDetail", workflow_comment_detail_fields) +workflow_comment_create_model = console_ns.model("WorkflowCommentCreate", workflow_comment_create_fields) +workflow_comment_update_model = console_ns.model("WorkflowCommentUpdate", workflow_comment_update_fields) +workflow_comment_resolve_model = console_ns.model("WorkflowCommentResolve", workflow_comment_resolve_fields) +workflow_comment_reply_create_model = console_ns.model( + "WorkflowCommentReplyCreate", workflow_comment_reply_create_fields +) +workflow_comment_reply_update_model = console_ns.model( + "WorkflowCommentReplyUpdate", workflow_comment_reply_update_fields +) + + +@console_ns.route("/apps//workflow/comments") +class WorkflowCommentListApi(Resource): + """API for listing and creating workflow comments.""" + + @console_ns.doc("list_workflow_comments") + @console_ns.doc(description="Get all comments for a workflow") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Comments retrieved successfully", workflow_comment_basic_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_basic_model, envelope="data") + def get(self, app_model: App): + """Get all comments for a workflow.""" + comments = WorkflowCommentService.get_comments(tenant_id=current_user.current_tenant_id, app_id=app_model.id) + + return comments + + @console_ns.doc("create_workflow_comment") + @console_ns.doc(description="Create a new workflow comment") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentCreatePayload.__name__]) + @console_ns.response(201, "Comment created successfully", workflow_comment_create_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_create_model) + @edit_permission_required + def post(self, app_model: App): + """Create a new workflow comment.""" + payload = WorkflowCommentCreatePayload.model_validate(console_ns.payload or {}) + + result = WorkflowCommentService.create_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + created_by=current_user.id, + content=payload.content, + position_x=payload.position_x, + position_y=payload.position_y, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return result, 201 + + +@console_ns.route("/apps//workflow/comments/") +class WorkflowCommentDetailApi(Resource): + """API for managing individual workflow comments.""" + + @console_ns.doc("get_workflow_comment") + @console_ns.doc(description="Get a specific workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.response(200, "Comment retrieved successfully", workflow_comment_detail_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_detail_model) + def get(self, app_model: App, comment_id: str): + """Get a specific workflow comment.""" + comment = WorkflowCommentService.get_comment( + tenant_id=current_user.current_tenant_id, app_id=app_model.id, comment_id=comment_id + ) + + return comment + + @console_ns.doc("update_workflow_comment") + @console_ns.doc(description="Update a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentUpdatePayload.__name__]) + @console_ns.response(200, "Comment updated successfully", workflow_comment_update_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_update_model) + @edit_permission_required + def put(self, app_model: App, comment_id: str): + """Update a workflow comment.""" + payload = WorkflowCommentUpdatePayload.model_validate(console_ns.payload or {}) + + result = WorkflowCommentService.update_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + user_id=current_user.id, + content=payload.content, + position_x=payload.position_x, + position_y=payload.position_y, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return result + + @console_ns.doc("delete_workflow_comment") + @console_ns.doc(description="Delete a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.response(204, "Comment deleted successfully") + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @edit_permission_required + def delete(self, app_model: App, comment_id: str): + """Delete a workflow comment.""" + WorkflowCommentService.delete_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + user_id=current_user.id, + ) + + return {"result": "success"}, 204 + + +@console_ns.route("/apps//workflow/comments//resolve") +class WorkflowCommentResolveApi(Resource): + """API for resolving and reopening workflow comments.""" + + @console_ns.doc("resolve_workflow_comment") + @console_ns.doc(description="Resolve a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.response(200, "Comment resolved successfully", workflow_comment_resolve_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_resolve_model) + @edit_permission_required + def post(self, app_model: App, comment_id: str): + """Resolve a workflow comment.""" + comment = WorkflowCommentService.resolve_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + user_id=current_user.id, + ) + + return comment + + +@console_ns.route("/apps//workflow/comments//replies") +class WorkflowCommentReplyApi(Resource): + """API for managing comment replies.""" + + @console_ns.doc("create_workflow_comment_reply") + @console_ns.doc(description="Add a reply to a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentReplyPayload.__name__]) + @console_ns.response(201, "Reply created successfully", workflow_comment_reply_create_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_reply_create_model) + @edit_permission_required + def post(self, app_model: App, comment_id: str): + """Add a reply to a workflow comment.""" + # Validate comment access first + WorkflowCommentService.validate_comment_access( + comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id + ) + + payload = WorkflowCommentReplyPayload.model_validate(console_ns.payload or {}) + + result = WorkflowCommentService.create_reply( + comment_id=comment_id, + content=payload.content, + created_by=current_user.id, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return result, 201 + + +@console_ns.route("/apps//workflow/comments//replies/") +class WorkflowCommentReplyDetailApi(Resource): + """API for managing individual comment replies.""" + + @console_ns.doc("update_workflow_comment_reply") + @console_ns.doc(description="Update a comment reply") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID", "reply_id": "Reply ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentReplyPayload.__name__]) + @console_ns.response(200, "Reply updated successfully", workflow_comment_reply_update_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_reply_update_model) + @edit_permission_required + def put(self, app_model: App, comment_id: str, reply_id: str): + """Update a comment reply.""" + # Validate comment access first + WorkflowCommentService.validate_comment_access( + comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id + ) + + payload = WorkflowCommentReplyPayload.model_validate(console_ns.payload or {}) + + reply = WorkflowCommentService.update_reply( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + reply_id=reply_id, + user_id=current_user.id, + content=payload.content, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return reply + + @console_ns.doc("delete_workflow_comment_reply") + @console_ns.doc(description="Delete a comment reply") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID", "reply_id": "Reply ID"}) + @console_ns.response(204, "Reply deleted successfully") + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @edit_permission_required + def delete(self, app_model: App, comment_id: str, reply_id: str): + """Delete a comment reply.""" + # Validate comment access first + WorkflowCommentService.validate_comment_access( + comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id + ) + + WorkflowCommentService.delete_reply( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + reply_id=reply_id, + user_id=current_user.id, + ) + + return {"result": "success"}, 204 + + +@console_ns.route("/apps//workflow/comments/mention-users") +class WorkflowCommentMentionUsersApi(Resource): + """API for getting mentionable users for workflow comments.""" + + @console_ns.doc("workflow_comment_mention_users") + @console_ns.doc(description="Get all users in current tenant for mentions") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response( + 200, "Mentionable users retrieved successfully", console_ns.models[WorkflowCommentMentionUsersPayload.__name__] + ) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + def get(self, app_model: App): + """Get all users in current tenant for mentions.""" + members = TenantService.get_tenant_members(current_user.current_tenant) + users = TypeAdapter(list[AccountWithRole]).validate_python(members, from_attributes=True) + response = WorkflowCommentMentionUsersPayload(users=users) + return response.model_dump(mode="json"), 200 diff --git a/api/controllers/console/app/workflow_draft_variable.py b/api/controllers/console/app/workflow_draft_variable.py index f6d076320c..f6319573e0 100644 --- a/api/controllers/console/app/workflow_draft_variable.py +++ b/api/controllers/console/app/workflow_draft_variable.py @@ -1,14 +1,10 @@ import logging from collections.abc import Callable from functools import wraps -from typing import Any +from typing import Any, TypedDict from flask import Response, request from flask_restx import Resource, fields, marshal, marshal_with -from graphon.file import helpers as file_helpers -from graphon.variables.segment_group import SegmentGroup -from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment -from graphon.variables.types import SegmentType from pydantic import BaseModel, Field from sqlalchemy.orm import sessionmaker @@ -22,8 +18,13 @@ from controllers.web.error import InvalidArgumentError, NotFoundError from core.app.file_access import DatabaseFileAccessController from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID from extensions.ext_database import db +from factories import variable_factory from factories.file_factory import build_from_mapping, build_from_mappings from factories.variable_factory import build_segment_with_type +from graphon.file import helpers as file_helpers +from graphon.variables.segment_group import SegmentGroup +from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment +from graphon.variables.types import SegmentType from libs.login import current_user, login_required from models import App, AppMode from models.workflow import WorkflowDraftVariable @@ -45,6 +46,16 @@ class WorkflowDraftVariableUpdatePayload(BaseModel): value: Any | None = Field(default=None, description="Variable value") +class ConversationVariableUpdatePayload(BaseModel): + conversation_variables: list[dict[str, Any]] = Field( + ..., description="Conversation variables for the draft workflow" + ) + + +class EnvironmentVariableUpdatePayload(BaseModel): + environment_variables: list[dict[str, Any]] = Field(..., description="Environment variables for the draft workflow") + + console_ns.schema_model( WorkflowDraftVariableListQuery.__name__, WorkflowDraftVariableListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), @@ -53,6 +64,14 @@ console_ns.schema_model( WorkflowDraftVariableUpdatePayload.__name__, WorkflowDraftVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), ) +console_ns.schema_model( + ConversationVariableUpdatePayload.__name__, + ConversationVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), +) +console_ns.schema_model( + EnvironmentVariableUpdatePayload.__name__, + EnvironmentVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), +) def _convert_values_to_json_serializable_object(value: Segment): @@ -86,7 +105,14 @@ def _serialize_variable_type(workflow_draft_var: WorkflowDraftVariable) -> str: return value_type.exposed_type().value -def _serialize_full_content(variable: WorkflowDraftVariable) -> dict | None: +class FullContentDict(TypedDict): + size_bytes: int | None + value_type: str + length: int | None + download_url: str + + +def _serialize_full_content(variable: WorkflowDraftVariable) -> FullContentDict | None: """Serialize full_content information for large variables.""" if not variable.is_truncated(): return None @@ -94,12 +120,13 @@ def _serialize_full_content(variable: WorkflowDraftVariable) -> dict | None: variable_file = variable.variable_file assert variable_file is not None - return { + result: FullContentDict = { "size_bytes": variable_file.size, "value_type": variable_file.value_type.exposed_type().value, "length": variable_file.length, "download_url": file_helpers.get_signed_file_url(variable_file.upload_file_id, as_attachment=True), } + return result def _ensure_variable_access( @@ -384,24 +411,27 @@ class VariableApi(Resource): new_value = None if raw_value is not None: - if variable.value_type == SegmentType.FILE: - if not isinstance(raw_value, dict): - raise InvalidArgumentError(description=f"expected dict for file, got {type(raw_value)}") - raw_value = build_from_mapping( - mapping=raw_value, - tenant_id=app_model.tenant_id, - access_controller=_file_access_controller, - ) - elif variable.value_type == SegmentType.ARRAY_FILE: - if not isinstance(raw_value, list): - raise InvalidArgumentError(description=f"expected list for files, got {type(raw_value)}") - if len(raw_value) > 0 and not isinstance(raw_value[0], dict): - raise InvalidArgumentError(description=f"expected dict for files[0], got {type(raw_value)}") - raw_value = build_from_mappings( - mappings=raw_value, - tenant_id=app_model.tenant_id, - access_controller=_file_access_controller, - ) + match variable.value_type: + case SegmentType.FILE: + if not isinstance(raw_value, dict): + raise InvalidArgumentError(description=f"expected dict for file, got {type(raw_value)}") + raw_value = build_from_mapping( + mapping=raw_value, + tenant_id=app_model.tenant_id, + access_controller=_file_access_controller, + ) + case SegmentType.ARRAY_FILE: + if not isinstance(raw_value, list): + raise InvalidArgumentError(description=f"expected list for files, got {type(raw_value)}") + if len(raw_value) > 0 and not isinstance(raw_value[0], dict): + raise InvalidArgumentError(description=f"expected dict for files[0], got {type(raw_value)}") + raw_value = build_from_mappings( + mappings=raw_value, + tenant_id=app_model.tenant_id, + access_controller=_file_access_controller, + ) + case _: + pass new_value = build_segment_with_type(variable.value_type, raw_value) draft_var_srv.update_variable(variable, name=new_name, value=new_value) db.session.commit() @@ -499,6 +529,34 @@ class ConversationVariableCollectionApi(Resource): db.session.commit() return _get_variable_list(app_model, CONVERSATION_VARIABLE_NODE_ID) + @console_ns.expect(console_ns.models[ConversationVariableUpdatePayload.__name__]) + @console_ns.doc("update_conversation_variables") + @console_ns.doc(description="Update conversation variables for workflow draft") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Conversation variables updated successfully") + @setup_required + @login_required + @account_initialization_required + @edit_permission_required + @get_app_model(mode=AppMode.ADVANCED_CHAT) + def post(self, app_model: App): + payload = ConversationVariableUpdatePayload.model_validate(console_ns.payload or {}) + + workflow_service = WorkflowService() + + conversation_variables_list = payload.conversation_variables + conversation_variables = [ + variable_factory.build_conversation_variable_from_mapping(obj) for obj in conversation_variables_list + ] + + workflow_service.update_draft_workflow_conversation_variables( + app_model=app_model, + account=current_user, + conversation_variables=conversation_variables, + ) + + return {"result": "success"} + @console_ns.route("/apps//workflows/draft/system-variables") class SystemVariableCollectionApi(Resource): @@ -550,3 +608,31 @@ class EnvironmentVariableCollectionApi(Resource): ) return {"items": env_vars_list} + + @console_ns.expect(console_ns.models[EnvironmentVariableUpdatePayload.__name__]) + @console_ns.doc("update_environment_variables") + @console_ns.doc(description="Update environment variables for workflow draft") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Environment variables updated successfully") + @setup_required + @login_required + @account_initialization_required + @edit_permission_required + @get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW]) + def post(self, app_model: App): + payload = EnvironmentVariableUpdatePayload.model_validate(console_ns.payload or {}) + + workflow_service = WorkflowService() + + environment_variables_list = payload.environment_variables + environment_variables = [ + variable_factory.build_environment_variable_from_mapping(obj) for obj in environment_variables_list + ] + + workflow_service.update_draft_workflow_environment_variables( + app_model=app_model, + account=current_user, + environment_variables=environment_variables, + ) + + return {"result": "success"} diff --git a/api/controllers/console/app/workflow_run.py b/api/controllers/console/app/workflow_run.py index 83e8bedc11..6748d95d6b 100644 --- a/api/controllers/console/app/workflow_run.py +++ b/api/controllers/console/app/workflow_run.py @@ -3,8 +3,6 @@ from typing import Literal, TypedDict, cast from flask import request from flask_restx import Resource, fields, marshal_with -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -28,6 +26,8 @@ from fields.workflow_run_fields import ( workflow_run_node_execution_list_fields, workflow_run_pagination_fields, ) +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus from libs.archive_storage import ArchiveStorageNotConfiguredError, get_archive_storage from libs.custom_inputs import time_duration from libs.helper import uuid_value @@ -36,7 +36,7 @@ from models import Account, App, AppMode, EndUser, WorkflowArchiveLog, WorkflowR from models.workflow import WorkflowRun from repositories.factory import DifyAPIRepositoryFactory from services.retention.workflow_run.constants import ARCHIVE_BUNDLE_NAME -from services.workflow_run_service import WorkflowRunService +from services.workflow_run_service import WorkflowRunListArgs, WorkflowRunService def _build_backstage_input_url(form_token: str | None) -> str | None: @@ -214,7 +214,11 @@ class AdvancedChatAppWorkflowRunListApi(Resource): Get advanced chat app workflow run list """ args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore - args = args_model.model_dump(exclude_none=True) + args: WorkflowRunListArgs = {"limit": args_model.limit} + if args_model.last_id is not None: + args["last_id"] = args_model.last_id + if args_model.status is not None: + args["status"] = args_model.status # Default to DEBUGGING if not specified triggered_from = ( @@ -356,7 +360,11 @@ class WorkflowRunListApi(Resource): Get workflow run list """ args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore - args = args_model.model_dump(exclude_none=True) + args: WorkflowRunListArgs = {"limit": args_model.limit} + if args_model.last_id is not None: + args["last_id"] = args_model.last_id + if args_model.status is not None: + args["status"] = args_model.status # Default to DEBUGGING for workflow if not specified (backward compatibility) triggered_from = ( diff --git a/api/controllers/console/app/workflow_trigger.py b/api/controllers/console/app/workflow_trigger.py index e4a6afae1e..a6715fa200 100644 --- a/api/controllers/console/app/workflow_trigger.py +++ b/api/controllers/console/app/workflow_trigger.py @@ -1,16 +1,17 @@ import logging +from datetime import datetime from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel +from flask_restx import Resource +from pydantic import BaseModel, field_validator from sqlalchemy import select from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import NotFound from configs import dify_config -from controllers.common.schema import get_or_create_model +from controllers.common.schema import register_schema_models from extensions.ext_database import db -from fields.workflow_trigger_fields import trigger_fields, triggers_list_fields, webhook_trigger_fields +from fields.base import ResponseModel from libs.login import current_user, login_required from models.enums import AppTriggerStatus from models.model import Account, App, AppMode @@ -21,15 +22,6 @@ from ..app.wraps import get_app_model from ..wraps import account_initialization_required, edit_permission_required, setup_required logger = logging.getLogger(__name__) -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - -trigger_model = get_or_create_model("WorkflowTrigger", trigger_fields) - -triggers_list_fields_copy = triggers_list_fields.copy() -triggers_list_fields_copy["data"] = fields.List(fields.Nested(trigger_model)) -triggers_list_model = get_or_create_model("WorkflowTriggerList", triggers_list_fields_copy) - -webhook_trigger_model = get_or_create_model("WebhookTrigger", webhook_trigger_fields) class Parser(BaseModel): @@ -41,10 +33,52 @@ class ParserEnable(BaseModel): enable_trigger: bool -console_ns.schema_model(Parser.__name__, Parser.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +class WorkflowTriggerResponse(ResponseModel): + id: str + trigger_type: str + title: str + node_id: str + provider_name: str + icon: str + status: str + created_at: datetime | None = None + updated_at: datetime | None = None -console_ns.schema_model( - ParserEnable.__name__, ParserEnable.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0) + @field_validator("id", "trigger_type", "title", "node_id", "provider_name", "icon", "status", mode="before") + @classmethod + def _normalize_string_fields(cls, value: object) -> str: + if isinstance(value, str): + return value + return str(value) + + +class WorkflowTriggerListResponse(ResponseModel): + data: list[WorkflowTriggerResponse] + + +class WebhookTriggerResponse(ResponseModel): + id: str + webhook_id: str + webhook_url: str + webhook_debug_url: str + node_id: str + created_at: datetime | None = None + + @field_validator("id", "webhook_id", "webhook_url", "webhook_debug_url", "node_id", mode="before") + @classmethod + def _normalize_string_fields(cls, value: object) -> str: + if isinstance(value, str): + return value + return str(value) + + +register_schema_models( + console_ns, + Parser, + ParserEnable, + WorkflowTriggerResponse, + WorkflowTriggerListResponse, + WebhookTriggerResponse, ) @@ -57,14 +91,14 @@ class WebhookTriggerApi(Resource): @login_required @account_initialization_required @get_app_model(mode=AppMode.WORKFLOW) - @marshal_with(webhook_trigger_model) + @console_ns.response(200, "Success", console_ns.models[WebhookTriggerResponse.__name__]) def get(self, app_model: App): """Get webhook trigger for a node""" args = Parser.model_validate(request.args.to_dict(flat=True)) # type: ignore node_id = args.node_id - with sessionmaker(db.engine).begin() as session: + with sessionmaker(db.engine, expire_on_commit=False).begin() as session: # Get webhook trigger for this app and node webhook_trigger = session.scalar( select(WorkflowWebhookTrigger) @@ -78,7 +112,7 @@ class WebhookTriggerApi(Resource): if not webhook_trigger: raise NotFound("Webhook trigger not found for this node") - return webhook_trigger + return WebhookTriggerResponse.model_validate(webhook_trigger, from_attributes=True).model_dump(mode="json") @console_ns.route("/apps//triggers") @@ -89,13 +123,13 @@ class AppTriggersApi(Resource): @login_required @account_initialization_required @get_app_model(mode=AppMode.WORKFLOW) - @marshal_with(triggers_list_model) + @console_ns.response(200, "Success", console_ns.models[WorkflowTriggerListResponse.__name__]) def get(self, app_model: App): """Get app triggers list""" assert isinstance(current_user, Account) assert current_user.current_tenant_id is not None - with sessionmaker(db.engine).begin() as session: + with sessionmaker(db.engine, expire_on_commit=False).begin() as session: # Get all triggers for this app using select API triggers = ( session.execute( @@ -118,7 +152,9 @@ class AppTriggersApi(Resource): else: trigger.icon = "" # type: ignore - return {"data": triggers} + return WorkflowTriggerListResponse.model_validate({"data": triggers}, from_attributes=True).model_dump( + mode="json" + ) @console_ns.route("/apps//trigger-enable") @@ -129,7 +165,7 @@ class AppTriggerEnableApi(Resource): @account_initialization_required @edit_permission_required @get_app_model(mode=AppMode.WORKFLOW) - @marshal_with(trigger_model) + @console_ns.response(200, "Success", console_ns.models[WorkflowTriggerResponse.__name__]) def post(self, app_model: App): """Update app trigger (enable/disable)""" args = ParserEnable.model_validate(console_ns.payload) @@ -160,4 +196,4 @@ class AppTriggerEnableApi(Resource): else: trigger.icon = "" # type: ignore - return trigger + return WorkflowTriggerResponse.model_validate(trigger, from_attributes=True).model_dump(mode="json") diff --git a/api/controllers/console/auth/activate.py b/api/controllers/console/auth/activate.py index f741107b87..f7061f820f 100644 --- a/api/controllers/console/auth/activate.py +++ b/api/controllers/console/auth/activate.py @@ -1,8 +1,11 @@ +from typing import Any + from flask import request -from flask_restx import Resource, fields +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from constants.languages import supported_language +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.error import AlreadyActivateError from extensions.ext_database import db @@ -11,8 +14,6 @@ from libs.helper import EmailStr, timezone from models import AccountStatus from services.account_service import RegisterService -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class ActivateCheckQuery(BaseModel): workspace_id: str | None = Field(default=None) @@ -39,8 +40,16 @@ class ActivatePayload(BaseModel): return timezone(value) -for model in (ActivateCheckQuery, ActivatePayload): - console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +class ActivationCheckResponse(BaseModel): + is_valid: bool = Field(description="Whether token is valid") + data: dict[str, Any] | None = Field(default=None, description="Activation data if valid") + + +class ActivationResponse(BaseModel): + result: str = Field(description="Operation result") + + +register_schema_models(console_ns, ActivateCheckQuery, ActivatePayload, ActivationCheckResponse, ActivationResponse) @console_ns.route("/activate/check") @@ -51,13 +60,7 @@ class ActivateCheckApi(Resource): @console_ns.response( 200, "Success", - console_ns.model( - "ActivationCheckResponse", - { - "is_valid": fields.Boolean(description="Whether token is valid"), - "data": fields.Raw(description="Activation data if valid"), - }, - ), + console_ns.models[ActivationCheckResponse.__name__], ) def get(self): args = ActivateCheckQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore @@ -95,12 +98,7 @@ class ActivateApi(Resource): @console_ns.response( 200, "Account activated successfully", - console_ns.model( - "ActivationResponse", - { - "result": fields.String(description="Operation result"), - }, - ), + console_ns.models[ActivationResponse.__name__], ) @console_ns.response(400, "Already activated or invalid token") def post(self): diff --git a/api/controllers/console/auth/email_register.py b/api/controllers/console/auth/email_register.py index 9e7faa09c5..1fd781b4fc 100644 --- a/api/controllers/console/auth/email_register.py +++ b/api/controllers/console/auth/email_register.py @@ -1,7 +1,6 @@ from flask import request from flask_restx import Resource from pydantic import BaseModel, Field, field_validator -from sqlalchemy.orm import sessionmaker from configs import dify_config from constants.languages import languages @@ -14,7 +13,6 @@ from controllers.console.auth.error import ( InvalidTokenError, PasswordMismatchError, ) -from extensions.ext_database import db from libs.helper import EmailStr, extract_remote_ip from libs.password import valid_password from models import Account @@ -73,8 +71,7 @@ class EmailRegisterSendEmailApi(Resource): if dify_config.BILLING_ENABLED and BillingService.is_email_in_freeze(normalized_email): raise AccountInFreezeError() - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(args.email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(args.email) token = AccountService.send_email_register_email(email=normalized_email, account=account, language=language) return {"result": "success", "data": token} @@ -145,17 +142,16 @@ class EmailRegisterResetApi(Resource): email = register_data.get("email", "") normalized_email = email.lower() - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(email) - if account: - raise EmailAlreadyInUseError() - else: - account = self._create_new_account(normalized_email, args.password_confirm) - if not account: - raise AccountNotFoundError() - token_pair = AccountService.login(account=account, ip_address=extract_remote_ip(request)) - AccountService.reset_login_error_rate_limit(normalized_email) + if account: + raise EmailAlreadyInUseError() + else: + account = self._create_new_account(normalized_email, args.password_confirm) + if not account: + raise AccountNotFoundError() + token_pair = AccountService.login(account=account, ip_address=extract_remote_ip(request)) + AccountService.reset_login_error_rate_limit(normalized_email) return {"result": "success", "data": token_pair.model_dump()} diff --git a/api/controllers/console/auth/forgot_password.py b/api/controllers/console/auth/forgot_password.py index 63bc98b53f..ed390a5f89 100644 --- a/api/controllers/console/auth/forgot_password.py +++ b/api/controllers/console/auth/forgot_password.py @@ -4,7 +4,6 @@ import secrets from flask import request from flask_restx import Resource from pydantic import BaseModel, Field -from sqlalchemy.orm import sessionmaker from controllers.common.schema import register_schema_models from controllers.console import console_ns @@ -85,8 +84,7 @@ class ForgotPasswordSendEmailApi(Resource): else: language = "en-US" - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(args.email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(args.email) token = AccountService.send_reset_password_email( account=account, @@ -184,17 +182,18 @@ class ForgotPasswordResetApi(Resource): password_hashed = hash_password(args.new_password, salt) email = reset_data.get("email", "") - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(email) - if account: - self._update_existing_account(account, password_hashed, salt, session) - else: - raise AccountNotFound() + if account: + account = db.session.merge(account) + self._update_existing_account(account, password_hashed, salt) + db.session.commit() + else: + raise AccountNotFound() return {"result": "success"} - def _update_existing_account(self, account, password_hashed, salt, session): + def _update_existing_account(self, account, password_hashed, salt): # Update existing account credentials account.password = base64.b64encode(password_hashed).decode() account.password_salt = base64.b64encode(salt).decode() diff --git a/api/controllers/console/auth/login.py b/api/controllers/console/auth/login.py index 962cc83b0e..8216b3d0da 100644 --- a/api/controllers/console/auth/login.py +++ b/api/controllers/console/auth/login.py @@ -1,7 +1,10 @@ +import logging + import flask_login from flask import make_response, request from flask_restx import Resource from pydantic import BaseModel, Field +from werkzeug.exceptions import Unauthorized import services from configs import dify_config @@ -42,12 +45,13 @@ from libs.token import ( ) from services.account_service import AccountService, InvitationDetailDict, RegisterService, TenantService from services.billing_service import BillingService -from services.entities.auth_entities import LoginPayloadBase +from services.entities.auth_entities import LoginFailureReason, LoginPayloadBase from services.errors.account import AccountRegisterError from services.errors.workspace import WorkSpaceNotAllowedCreateError, WorkspacesLimitExceededError from services.feature_service import FeatureService DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" +logger = logging.getLogger(__name__) class LoginPayload(LoginPayloadBase): @@ -91,10 +95,12 @@ class LoginApi(Resource): normalized_email = request_email.lower() if dify_config.BILLING_ENABLED and BillingService.is_email_in_freeze(normalized_email): + _log_console_login_failure(email=normalized_email, reason=LoginFailureReason.ACCOUNT_IN_FREEZE) raise AccountInFreezeError() is_login_error_rate_limit = AccountService.is_login_error_rate_limit(normalized_email) if is_login_error_rate_limit: + _log_console_login_failure(email=normalized_email, reason=LoginFailureReason.LOGIN_RATE_LIMITED) raise EmailPasswordLoginLimitError() invite_token = args.invite_token @@ -110,14 +116,20 @@ class LoginApi(Resource): invitee_email = data.get("email") if data else None invitee_email_normalized = invitee_email.lower() if isinstance(invitee_email, str) else invitee_email if invitee_email_normalized != normalized_email: + _log_console_login_failure( + email=normalized_email, + reason=LoginFailureReason.INVALID_INVITATION_EMAIL, + ) raise InvalidEmailError() account = _authenticate_account_with_case_fallback( request_email, normalized_email, args.password, invite_token ) except services.errors.account.AccountLoginError: + _log_console_login_failure(email=normalized_email, reason=LoginFailureReason.ACCOUNT_BANNED) raise AccountBannedError() except services.errors.account.AccountPasswordError as exc: AccountService.add_login_error_rate_limit(normalized_email) + _log_console_login_failure(email=normalized_email, reason=LoginFailureReason.INVALID_CREDENTIALS) raise AuthenticationFailedError() from exc # SELF_HOSTED only have one workspace tenants = TenantService.get_join_tenants(account) @@ -240,20 +252,27 @@ class EmailCodeLoginApi(Resource): token_data = AccountService.get_email_code_login_data(args.token) if token_data is None: + _log_console_login_failure(email=user_email, reason=LoginFailureReason.INVALID_EMAIL_CODE_TOKEN) raise InvalidTokenError() token_email = token_data.get("email") normalized_token_email = token_email.lower() if isinstance(token_email, str) else token_email if normalized_token_email != user_email: + _log_console_login_failure(email=user_email, reason=LoginFailureReason.EMAIL_CODE_EMAIL_MISMATCH) raise InvalidEmailError() if token_data["code"] != args.code: + _log_console_login_failure(email=user_email, reason=LoginFailureReason.INVALID_EMAIL_CODE) raise EmailCodeError() AccountService.revoke_email_code_login_token(args.token) try: account = _get_account_with_case_fallback(original_email) + except Unauthorized as exc: + _log_console_login_failure(email=user_email, reason=LoginFailureReason.ACCOUNT_BANNED) + raise AccountBannedError() from exc except AccountRegisterError: + _log_console_login_failure(email=user_email, reason=LoginFailureReason.ACCOUNT_IN_FREEZE) raise AccountInFreezeError() if account: tenants = TenantService.get_join_tenants(account) @@ -279,6 +298,7 @@ class EmailCodeLoginApi(Resource): except WorkSpaceNotAllowedCreateError: raise NotAllowedCreateWorkspace() except AccountRegisterError: + _log_console_login_failure(email=user_email, reason=LoginFailureReason.ACCOUNT_IN_FREEZE) raise AccountInFreezeError() except WorkspacesLimitExceededError: raise WorkspacesLimitExceeded() @@ -336,3 +356,12 @@ def _authenticate_account_with_case_fallback( if original_email == normalized_email: raise return AccountService.authenticate(normalized_email, password, invite_token) + + +def _log_console_login_failure(*, email: str, reason: LoginFailureReason) -> None: + logger.warning( + "Console login failed: email=%s reason=%s ip_address=%s", + email, + reason, + extract_remote_ip(request), + ) diff --git a/api/controllers/console/auth/oauth.py b/api/controllers/console/auth/oauth.py index 5c7011fd22..d31fb4a46c 100644 --- a/api/controllers/console/auth/oauth.py +++ b/api/controllers/console/auth/oauth.py @@ -4,7 +4,6 @@ import urllib.parse import httpx from flask import current_app, redirect, request from flask_restx import Resource -from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import Unauthorized from configs import dify_config @@ -180,8 +179,7 @@ def _get_account_by_openid_or_email(provider: str, user_info: OAuthUserInfo) -> account: Account | None = Account.get_by_openid(provider, user_info.id) if not account: - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(user_info.email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(user_info.email) return account diff --git a/api/controllers/console/auth/oauth_server.py b/api/controllers/console/auth/oauth_server.py index b55cda4244..727428c8e7 100644 --- a/api/controllers/console/auth/oauth_server.py +++ b/api/controllers/console/auth/oauth_server.py @@ -5,11 +5,11 @@ from typing import Concatenate from flask import jsonify, request from flask.typing import ResponseReturnValue from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel from werkzeug.exceptions import BadRequest, NotFound from controllers.console.wraps import account_initialization_required, setup_required +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from models import Account from models.model import OAuthProviderApp diff --git a/api/controllers/console/billing/billing.py b/api/controllers/console/billing/billing.py index 23c01eedb1..45de338559 100644 --- a/api/controllers/console/billing/billing.py +++ b/api/controllers/console/billing/billing.py @@ -2,18 +2,17 @@ import base64 from typing import Literal from flask import request -from flask_restx import Resource, fields +from flask_restx import Resource from pydantic import BaseModel, Field from werkzeug.exceptions import BadRequest +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, only_edition_cloud, setup_required from enums.cloud_plan import CloudPlan from libs.login import current_account_with_tenant, login_required from services.billing_service import BillingService -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class SubscriptionQuery(BaseModel): plan: Literal[CloudPlan.PROFESSIONAL, CloudPlan.TEAM] = Field(..., description="Subscription plan") @@ -24,8 +23,7 @@ class PartnerTenantsPayload(BaseModel): click_id: str = Field(..., description="Click Id from partner referral link") -for model in (SubscriptionQuery, PartnerTenantsPayload): - console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +register_schema_models(console_ns, SubscriptionQuery, PartnerTenantsPayload) @console_ns.route("/billing/subscription") @@ -58,12 +56,7 @@ class PartnerTenants(Resource): @console_ns.doc("sync_partner_tenants_bindings") @console_ns.doc(description="Sync partner tenants bindings") @console_ns.doc(params={"partner_key": "Partner key"}) - @console_ns.expect( - console_ns.model( - "SyncPartnerTenantsBindingsRequest", - {"click_id": fields.String(required=True, description="Click Id from partner referral link")}, - ) - ) + @console_ns.expect(console_ns.models[PartnerTenantsPayload.__name__]) @console_ns.response(200, "Tenants synced to partner successfully") @console_ns.response(400, "Invalid partner information") @setup_required diff --git a/api/controllers/console/datasets/data_source.py b/api/controllers/console/datasets/data_source.py index e623722b23..ed3c1a59d4 100644 --- a/api/controllers/console/datasets/data_source.py +++ b/api/controllers/console/datasets/data_source.py @@ -162,7 +162,9 @@ class DataSourceApi(Resource): binding_id = str(binding_id) with sessionmaker(db.engine, expire_on_commit=False).begin() as session: data_source_binding = session.execute( - select(DataSourceOauthBinding).filter_by(id=binding_id, tenant_id=current_tenant_id) + select(DataSourceOauthBinding).where( + DataSourceOauthBinding.id == binding_id, DataSourceOauthBinding.tenant_id == current_tenant_id + ) ).scalar_one_or_none() if data_source_binding is None: raise NotFound("Data source binding not found.") @@ -222,11 +224,11 @@ class DataSourceNotionListApi(Resource): raise ValueError("Dataset is not notion type.") documents = session.scalars( - select(Document).filter_by( - dataset_id=query.dataset_id, - tenant_id=current_tenant_id, - data_source_type="notion_import", - enabled=True, + select(Document).where( + Document.dataset_id == query.dataset_id, + Document.tenant_id == current_tenant_id, + Document.data_source_type == "notion_import", + Document.enabled.is_(True), ) ).all() if documents: diff --git a/api/controllers/console/datasets/datasets.py b/api/controllers/console/datasets/datasets.py index f23c7eb431..ea0fdef0a7 100644 --- a/api/controllers/console/datasets/datasets.py +++ b/api/controllers/console/datasets/datasets.py @@ -2,7 +2,6 @@ from typing import Any, cast from flask import request from flask_restx import Resource, fields, marshal, marshal_with -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field, field_validator from sqlalchemy import func, select from werkzeug.exceptions import Forbidden, NotFound @@ -11,10 +10,7 @@ import services from configs import dify_config from controllers.common.schema import get_or_create_model, register_schema_models from controllers.console import console_ns -from controllers.console.apikey import ( - api_key_item_model, - api_key_list_model, -) +from controllers.console.apikey import ApiKeyItem, ApiKeyList from controllers.console.app.error import ProviderNotInitializeError from controllers.console.datasets.error import DatasetInUseError, DatasetNameDuplicateError, IndexingEstimateError from controllers.console.wraps import ( @@ -52,6 +48,7 @@ from fields.dataset_fields import ( weighted_score_fields, ) from fields.document_fields import document_status_fields +from graphon.model_runtime.entities.model_entities import ModelType from libs.login import current_account_with_tenant, login_required from models import ApiToken, Dataset, Document, DocumentSegment, UploadFile from models.dataset import DatasetPermission, DatasetPermissionEnum @@ -785,23 +782,23 @@ class DatasetApiKeyApi(Resource): @console_ns.doc("get_dataset_api_keys") @console_ns.doc(description="Get dataset API keys") - @console_ns.response(200, "API keys retrieved successfully", api_key_list_model) + @console_ns.response(200, "API keys retrieved successfully", console_ns.models[ApiKeyList.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_key_list_model) def get(self): _, current_tenant_id = current_account_with_tenant() keys = db.session.scalars( select(ApiToken).where(ApiToken.type == self.resource_type, ApiToken.tenant_id == current_tenant_id) ).all() - return {"items": keys} + return ApiKeyList.model_validate({"data": keys}, from_attributes=True).model_dump(mode="json") + @console_ns.response(200, "API key created successfully", console_ns.models[ApiKeyItem.__name__]) + @console_ns.response(400, "Maximum keys exceeded") @setup_required @login_required @is_admin_or_owner_required @account_initialization_required - @marshal_with(api_key_item_model) def post(self): _, current_tenant_id = current_account_with_tenant() @@ -828,7 +825,7 @@ class DatasetApiKeyApi(Resource): api_token.type = self.resource_type db.session.add(api_token) db.session.commit() - return api_token, 200 + return ApiKeyItem.model_validate(api_token, from_attributes=True).model_dump(mode="json"), 200 @console_ns.route("/datasets/api-keys/") diff --git a/api/controllers/console/datasets/datasets_document.py b/api/controllers/console/datasets/datasets_document.py index ab367d8483..3372a967d9 100644 --- a/api/controllers/console/datasets/datasets_document.py +++ b/api/controllers/console/datasets/datasets_document.py @@ -3,20 +3,19 @@ import logging from argparse import ArgumentTypeError from collections.abc import Sequence from contextlib import ExitStack +from datetime import datetime from typing import Any, Literal, cast -from uuid import UUID import sqlalchemy as sa from flask import request, send_file -from flask_restx import Resource, fields, marshal, marshal_with -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from pydantic import BaseModel, Field +from flask_restx import Resource, marshal +from pydantic import BaseModel, Field, field_validator from sqlalchemy import asc, desc, func, select from werkzeug.exceptions import Forbidden, NotFound import services -from controllers.common.schema import get_or_create_model, register_schema_models +from controllers.common.controller_schemas import DocumentBatchDownloadZipPayload +from controllers.common.schema import register_schema_models from controllers.console import console_ns from core.errors.error import ( LLMBadRequestError, @@ -31,14 +30,14 @@ from core.rag.extractor.entity.datasource_type import DatasourceType from core.rag.extractor.entity.extract_setting import ExtractSetting, NotionInfo, WebsiteInfo from core.rag.index_processor.constant.index_type import IndexTechniqueType from extensions.ext_database import db -from fields.dataset_fields import dataset_fields +from fields.base import ResponseModel from fields.document_fields import ( - dataset_and_document_fields, document_fields, - document_metadata_fields, document_status_fields, document_with_segments_fields, ) +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from libs.datetime_utils import naive_utc_now from libs.login import current_account_with_tenant, login_required from models import DatasetProcessRule, Document, DocumentSegment, UploadFile @@ -71,31 +70,101 @@ from ..wraps import ( logger = logging.getLogger(__name__) -# NOTE: Keep constants near the top of the module for discoverability. -DOCUMENT_BATCH_DOWNLOAD_ZIP_MAX_DOCS = 100 + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -# Register models for flask_restx to avoid dict type issues in Swagger -dataset_model = get_or_create_model("Dataset", dataset_fields) +def _normalize_enum(value: Any) -> Any: + if isinstance(value, str) or value is None: + return value + return getattr(value, "value", value) -document_metadata_model = get_or_create_model("DocumentMetadata", document_metadata_fields) -document_fields_copy = document_fields.copy() -document_fields_copy["doc_metadata"] = fields.List( - fields.Nested(document_metadata_model), attribute="doc_metadata_details" -) -document_model = get_or_create_model("Document", document_fields_copy) +class DatasetResponse(ResponseModel): + id: str + name: str + description: str | None = None + permission: str | None = None + data_source_type: str | None = None + indexing_technique: str | None = None + created_by: str | None = None + created_at: int | None = None -document_with_segments_fields_copy = document_with_segments_fields.copy() -document_with_segments_fields_copy["doc_metadata"] = fields.List( - fields.Nested(document_metadata_model), attribute="doc_metadata_details" -) -document_with_segments_model = get_or_create_model("DocumentWithSegments", document_with_segments_fields_copy) + @field_validator("data_source_type", "indexing_technique", mode="before") + @classmethod + def _normalize_enum_fields(cls, value: Any) -> Any: + return _normalize_enum(value) -dataset_and_document_fields_copy = dataset_and_document_fields.copy() -dataset_and_document_fields_copy["dataset"] = fields.Nested(dataset_model) -dataset_and_document_fields_copy["documents"] = fields.List(fields.Nested(document_model)) -dataset_and_document_model = get_or_create_model("DatasetAndDocument", dataset_and_document_fields_copy) + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class DocumentMetadataResponse(ResponseModel): + id: str + name: str + type: str + value: str | None = None + + +class DocumentResponse(ResponseModel): + id: str + position: int | None = None + data_source_type: str | None = None + data_source_info: Any = Field(default=None, validation_alias="data_source_info_dict") + data_source_detail_dict: Any = None + dataset_process_rule_id: str | None = None + name: str + created_from: str | None = None + created_by: str | None = None + created_at: int | None = None + tokens: int | None = None + indexing_status: str | None = None + error: str | None = None + enabled: bool | None = None + disabled_at: int | None = None + disabled_by: str | None = None + archived: bool | None = None + display_status: str | None = None + word_count: int | None = None + hit_count: int | None = None + doc_form: str | None = None + doc_metadata: list[DocumentMetadataResponse] = Field(default_factory=list, validation_alias="doc_metadata_details") + summary_index_status: str | None = None + need_summary: bool | None = None + + @field_validator("data_source_type", "indexing_status", "display_status", "doc_form", mode="before") + @classmethod + def _normalize_enum_fields(cls, value: Any) -> Any: + return _normalize_enum(value) + + @field_validator("doc_metadata", mode="before") + @classmethod + def _normalize_doc_metadata(cls, value: Any) -> list[Any]: + if value is None: + return [] + return value + + @field_validator("created_at", "disabled_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class DocumentWithSegmentsResponse(DocumentResponse): + process_rule_dict: Any = None + completed_segments: int | None = None + total_segments: int | None = None + + +class DatasetAndDocumentResponse(ResponseModel): + dataset: DatasetResponse + documents: list[DocumentResponse] + batch: str class DocumentRetryPayload(BaseModel): @@ -110,10 +179,9 @@ class GenerateSummaryPayload(BaseModel): document_list: list[str] -class DocumentBatchDownloadZipPayload(BaseModel): - """Request payload for bulk downloading documents as a zip archive.""" - - document_ids: list[UUID] = Field(..., min_length=1, max_length=DOCUMENT_BATCH_DOWNLOAD_ZIP_MAX_DOCS) +class DocumentMetadataUpdatePayload(BaseModel): + doc_type: str | None = None + doc_metadata: Any = None class DocumentDatasetListParam(BaseModel): @@ -133,7 +201,13 @@ register_schema_models( DocumentRetryPayload, DocumentRenamePayload, GenerateSummaryPayload, + DocumentMetadataUpdatePayload, DocumentBatchDownloadZipPayload, + DatasetResponse, + DocumentMetadataResponse, + DocumentResponse, + DocumentWithSegmentsResponse, + DatasetAndDocumentResponse, ) @@ -280,7 +354,7 @@ class DatasetDocumentListApi(Resource): except services.errors.account.NoPermissionError as e: raise Forbidden(str(e)) - query = select(Document).filter_by(dataset_id=str(dataset_id), tenant_id=current_tenant_id) + query = select(Document).where(Document.dataset_id == str(dataset_id), Document.tenant_id == current_tenant_id) if status: query = DocumentService.apply_display_status_filter(query, status) @@ -366,10 +440,10 @@ class DatasetDocumentListApi(Resource): @setup_required @login_required @account_initialization_required - @marshal_with(dataset_and_document_model) @cloud_edition_billing_resource_check("vector_space") @cloud_edition_billing_rate_limit_check("knowledge") @console_ns.expect(console_ns.models[KnowledgeConfig.__name__]) + @console_ns.response(200, "Documents created successfully", console_ns.models[DatasetAndDocumentResponse.__name__]) def post(self, dataset_id): current_user, _ = current_account_with_tenant() dataset_id = str(dataset_id) @@ -407,7 +481,9 @@ class DatasetDocumentListApi(Resource): except ModelCurrentlyNotSupportError: raise ProviderModelCurrentlyNotSupportError() - return {"dataset": dataset, "documents": documents, "batch": batch} + return DatasetAndDocumentResponse.model_validate( + {"dataset": dataset, "documents": documents, "batch": batch}, from_attributes=True + ).model_dump(mode="json") @setup_required @login_required @@ -435,12 +511,13 @@ class DatasetInitApi(Resource): @console_ns.doc("init_dataset") @console_ns.doc(description="Initialize dataset with documents") @console_ns.expect(console_ns.models[KnowledgeConfig.__name__]) - @console_ns.response(201, "Dataset initialized successfully", dataset_and_document_model) + @console_ns.response( + 201, "Dataset initialized successfully", console_ns.models[DatasetAndDocumentResponse.__name__] + ) @console_ns.response(400, "Invalid request parameters") @setup_required @login_required @account_initialization_required - @marshal_with(dataset_and_document_model) @cloud_edition_billing_resource_check("vector_space") @cloud_edition_billing_rate_limit_check("knowledge") def post(self): @@ -488,9 +565,9 @@ class DatasetInitApi(Resource): except ModelCurrentlyNotSupportError: raise ProviderModelCurrentlyNotSupportError() - response = {"dataset": dataset, "documents": documents, "batch": batch} - - return response + return DatasetAndDocumentResponse.model_validate( + {"dataset": dataset, "documents": documents, "batch": batch}, from_attributes=True + ).model_dump(mode="json") @console_ns.route("/datasets//documents//indexing-estimate") @@ -997,15 +1074,7 @@ class DocumentMetadataApi(DocumentResource): @console_ns.doc("update_document_metadata") @console_ns.doc(description="Update document metadata") @console_ns.doc(params={"dataset_id": "Dataset ID", "document_id": "Document ID"}) - @console_ns.expect( - console_ns.model( - "UpdateDocumentMetadataRequest", - { - "doc_type": fields.String(description="Document type"), - "doc_metadata": fields.Raw(description="Document metadata"), - }, - ) - ) + @console_ns.expect(console_ns.models[DocumentMetadataUpdatePayload.__name__]) @console_ns.response(200, "Document metadata updated successfully") @console_ns.response(404, "Document not found") @console_ns.response(403, "Permission denied") @@ -1018,10 +1087,10 @@ class DocumentMetadataApi(DocumentResource): document_id = str(document_id) document = self.get_document(dataset_id, document_id) - req_data = request.get_json() + req_data = DocumentMetadataUpdatePayload.model_validate(request.get_json() or {}) - doc_type = req_data.get("doc_type") - doc_metadata = req_data.get("doc_metadata") + doc_type = req_data.doc_type + doc_metadata = req_data.doc_metadata # The role of the current user in the ta table must be admin, owner, dataset_operator, or editor if not current_user.is_dataset_editor: @@ -1035,7 +1104,7 @@ class DocumentMetadataApi(DocumentResource): if not isinstance(doc_metadata, dict): raise ValueError("doc_metadata must be a dictionary.") - metadata_schema: dict = cast(dict, DocumentService.DOCUMENT_METADATA_SCHEMA[doc_type]) + metadata_schema: dict[str, Any] = cast(dict[str, Any], DocumentService.DOCUMENT_METADATA_SCHEMA[doc_type]) document.doc_metadata = {} if doc_type == "others": @@ -1203,7 +1272,7 @@ class DocumentRenameApi(DocumentResource): @setup_required @login_required @account_initialization_required - @marshal_with(document_model) + @console_ns.response(200, "Document renamed successfully", console_ns.models[DocumentResponse.__name__]) @console_ns.expect(console_ns.models[DocumentRenamePayload.__name__]) def post(self, dataset_id, document_id): # The role of the current user in the ta table must be admin, owner, editor, or dataset_operator @@ -1221,7 +1290,7 @@ class DocumentRenameApi(DocumentResource): except services.errors.document.DocumentIndexingError: raise DocumentIndexingError("Cannot delete document during indexing.") - return document + return DocumentResponse.model_validate(document, from_attributes=True).model_dump(mode="json") @console_ns.route("/datasets//documents//website-sync") diff --git a/api/controllers/console/datasets/datasets_segments.py b/api/controllers/console/datasets/datasets_segments.py index c5f4e3a6e2..2647bb1f5a 100644 --- a/api/controllers/console/datasets/datasets_segments.py +++ b/api/controllers/console/datasets/datasets_segments.py @@ -2,7 +2,6 @@ import uuid from flask import request from flask_restx import Resource, marshal -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field from sqlalchemy import String, cast, func, or_, select from sqlalchemy.dialects.postgresql import JSONB @@ -10,6 +9,7 @@ from werkzeug.exceptions import Forbidden, NotFound import services from configs import dify_config +from controllers.common.controller_schemas import ChildChunkCreatePayload, ChildChunkUpdatePayload from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.error import ProviderNotInitializeError @@ -31,6 +31,7 @@ from core.rag.index_processor.constant.index_type import IndexTechniqueType from extensions.ext_database import db from extensions.ext_redis import redis_client from fields.segment_fields import child_chunk_fields, segment_fields +from graphon.model_runtime.entities.model_entities import ModelType from libs.helper import escape_like_pattern from libs.login import current_account_with_tenant, login_required from models.dataset import ChildChunk, DocumentSegment @@ -82,14 +83,6 @@ class BatchImportPayload(BaseModel): upload_file_id: str -class ChildChunkCreatePayload(BaseModel): - content: str - - -class ChildChunkUpdatePayload(BaseModel): - content: str - - class ChildChunkBatchUpdatePayload(BaseModel): chunks: list[ChildChunkUpdateArgs] diff --git a/api/controllers/console/datasets/external.py b/api/controllers/console/datasets/external.py index f3866f6aef..e513e8c8f9 100644 --- a/api/controllers/console/datasets/external.py +++ b/api/controllers/console/datasets/external.py @@ -227,10 +227,11 @@ class ExternalApiUseCheckApi(Resource): @login_required @account_initialization_required def get(self, external_knowledge_api_id): + _, current_tenant_id = current_account_with_tenant() external_knowledge_api_id = str(external_knowledge_api_id) external_knowledge_api_is_using, count = ExternalDatasetService.external_knowledge_api_use_check( - external_knowledge_api_id + external_knowledge_api_id, current_tenant_id ) return {"is_using": external_knowledge_api_is_using, "count": count}, 200 diff --git a/api/controllers/console/datasets/hit_testing.py b/api/controllers/console/datasets/hit_testing.py index e62be13c2f..36a7a4bb0e 100644 --- a/api/controllers/console/datasets/hit_testing.py +++ b/api/controllers/console/datasets/hit_testing.py @@ -1,13 +1,13 @@ -from flask_restx import Resource, fields +from __future__ import annotations -from controllers.common.schema import register_schema_model -from fields.hit_testing_fields import ( - child_chunk_fields, - document_fields, - files_fields, - hit_testing_record_fields, - segment_fields, -) +from datetime import datetime +from typing import Any + +from flask_restx import Resource +from pydantic import Field, field_validator + +from controllers.common.schema import register_schema_models +from fields.base import ResponseModel from libs.login import login_required from .. import console_ns @@ -18,39 +18,92 @@ from ..wraps import ( setup_required, ) -register_schema_model(console_ns, HitTestingPayload) + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -def _get_or_create_model(model_name: str, field_def): - """Get or create a flask_restx model to avoid dict type issues in Swagger.""" - existing = console_ns.models.get(model_name) - if existing is None: - existing = console_ns.model(model_name, field_def) - return existing +class HitTestingDocument(ResponseModel): + id: str | None = None + data_source_type: str | None = None + name: str | None = None + doc_type: str | None = None + doc_metadata: Any | None = None -# Register models for flask_restx to avoid dict type issues in Swagger -document_model = _get_or_create_model("HitTestingDocument", document_fields) +class HitTestingSegment(ResponseModel): + id: str | None = None + position: int | None = None + document_id: str | None = None + content: str | None = None + sign_content: str | None = None + answer: str | None = None + word_count: int | None = None + tokens: int | None = None + keywords: list[str] = Field(default_factory=list) + index_node_id: str | None = None + index_node_hash: str | None = None + hit_count: int | None = None + enabled: bool | None = None + disabled_at: int | None = None + disabled_by: str | None = None + status: str | None = None + created_by: str | None = None + created_at: int | None = None + indexing_at: int | None = None + completed_at: int | None = None + error: str | None = None + stopped_at: int | None = None + document: HitTestingDocument | None = None -segment_fields_copy = segment_fields.copy() -segment_fields_copy["document"] = fields.Nested(document_model) -segment_model = _get_or_create_model("HitTestingSegment", segment_fields_copy) + @field_validator("disabled_at", "created_at", "indexing_at", "completed_at", "stopped_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) -child_chunk_model = _get_or_create_model("HitTestingChildChunk", child_chunk_fields) -files_model = _get_or_create_model("HitTestingFile", files_fields) -hit_testing_record_fields_copy = hit_testing_record_fields.copy() -hit_testing_record_fields_copy["segment"] = fields.Nested(segment_model) -hit_testing_record_fields_copy["child_chunks"] = fields.List(fields.Nested(child_chunk_model)) -hit_testing_record_fields_copy["files"] = fields.List(fields.Nested(files_model)) -hit_testing_record_model = _get_or_create_model("HitTestingRecord", hit_testing_record_fields_copy) +class HitTestingChildChunk(ResponseModel): + id: str | None = None + content: str | None = None + position: int | None = None + score: float | None = None -# Response model for hit testing API -hit_testing_response_fields = { - "query": fields.String, - "records": fields.List(fields.Nested(hit_testing_record_model)), -} -hit_testing_response_model = _get_or_create_model("HitTestingResponse", hit_testing_response_fields) + +class HitTestingFile(ResponseModel): + id: str | None = None + name: str | None = None + size: int | None = None + extension: str | None = None + mime_type: str | None = None + source_url: str | None = None + + +class HitTestingRecord(ResponseModel): + segment: HitTestingSegment | None = None + child_chunks: list[HitTestingChildChunk] = Field(default_factory=list) + score: float | None = None + tsne_position: Any | None = None + files: list[HitTestingFile] = Field(default_factory=list) + summary: str | None = None + + +class HitTestingResponse(ResponseModel): + query: str + records: list[HitTestingRecord] = Field(default_factory=list) + + +register_schema_models( + console_ns, + HitTestingPayload, + HitTestingDocument, + HitTestingSegment, + HitTestingChildChunk, + HitTestingFile, + HitTestingRecord, + HitTestingResponse, +) @console_ns.route("/datasets//hit-testing") @@ -59,7 +112,11 @@ class HitTestingApi(Resource, DatasetsHitTestingBase): @console_ns.doc(description="Test dataset knowledge retrieval") @console_ns.doc(params={"dataset_id": "Dataset ID"}) @console_ns.expect(console_ns.models[HitTestingPayload.__name__]) - @console_ns.response(200, "Hit testing completed successfully", model=hit_testing_response_model) + @console_ns.response( + 200, + "Hit testing completed successfully", + model=console_ns.models[HitTestingResponse.__name__], + ) @console_ns.response(404, "Dataset not found") @console_ns.response(400, "Invalid parameters") @setup_required @@ -74,4 +131,4 @@ class HitTestingApi(Resource, DatasetsHitTestingBase): args = payload.model_dump(exclude_none=True) self.hit_testing_args_check(args) - return self.perform_hit_testing(dataset, args) + return HitTestingResponse.model_validate(self.perform_hit_testing(dataset, args)).model_dump(mode="json") diff --git a/api/controllers/console/datasets/hit_testing_base.py b/api/controllers/console/datasets/hit_testing_base.py index 8fb3699849..699fa599c8 100644 --- a/api/controllers/console/datasets/hit_testing_base.py +++ b/api/controllers/console/datasets/hit_testing_base.py @@ -2,7 +2,6 @@ import logging from typing import Any from flask_restx import marshal -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field from werkzeug.exceptions import Forbidden, InternalServerError, NotFound @@ -21,6 +20,7 @@ from core.errors.error import ( QuotaExceededError, ) from fields.hit_testing_fields import hit_testing_record_fields +from graphon.model_runtime.errors.invoke import InvokeError from libs.login import current_user from models.account import Account from services.dataset_service import DatasetService diff --git a/api/controllers/console/datasets/metadata.py b/api/controllers/console/datasets/metadata.py index 2e69ddc5ab..d966e1629e 100644 --- a/api/controllers/console/datasets/metadata.py +++ b/api/controllers/console/datasets/metadata.py @@ -1,9 +1,9 @@ from typing import Literal from flask_restx import Resource, marshal_with -from pydantic import BaseModel from werkzeug.exceptions import NotFound +from controllers.common.controller_schemas import MetadataUpdatePayload from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, enterprise_license_required, setup_required @@ -18,11 +18,6 @@ from services.entities.knowledge_entities.knowledge_entities import ( ) from services.metadata_service import MetadataService - -class MetadataUpdatePayload(BaseModel): - name: str - - register_schema_models( console_ns, MetadataArgs, MetadataOperationData, MetadataUpdatePayload, DocumentMetadataOperation, MetadataDetail ) diff --git a/api/controllers/console/datasets/rag_pipeline/datasource_auth.py b/api/controllers/console/datasets/rag_pipeline/datasource_auth.py index bdf83b991e..fd0a8b33bc 100644 --- a/api/controllers/console/datasets/rag_pipeline/datasource_auth.py +++ b/api/controllers/console/datasets/rag_pipeline/datasource_auth.py @@ -2,8 +2,6 @@ from typing import Any from flask import make_response, redirect, request from flask_restx import Resource -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field from werkzeug.exceptions import Forbidden, NotFound @@ -12,6 +10,8 @@ from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required from core.plugin.impl.oauth import OAuthHandler +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from models.provider_ids import DatasourceProviderID from services.datasource_provider_service import DatasourceProviderService diff --git a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py index 93feec0019..b31d73f27d 100644 --- a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py +++ b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py @@ -4,7 +4,6 @@ from typing import Any, NoReturn from flask import Response, request from flask_restx import Resource, marshal, marshal_with -from graphon.variables.types import SegmentType from pydantic import BaseModel, Field from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import Forbidden @@ -28,6 +27,7 @@ from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTE from extensions.ext_database import db from factories.file_factory import build_from_mapping, build_from_mappings from factories.variable_factory import build_segment_with_type +from graphon.variables.types import SegmentType from libs.login import current_user, login_required from models import Account from models.dataset import Pipeline @@ -223,24 +223,27 @@ class RagPipelineVariableApi(Resource): new_value = None if raw_value is not None: - if variable.value_type == SegmentType.FILE: - if not isinstance(raw_value, dict): - raise InvalidArgumentError(description=f"expected dict for file, got {type(raw_value)}") - raw_value = build_from_mapping( - mapping=raw_value, - tenant_id=pipeline.tenant_id, - access_controller=_file_access_controller, - ) - elif variable.value_type == SegmentType.ARRAY_FILE: - if not isinstance(raw_value, list): - raise InvalidArgumentError(description=f"expected list for files, got {type(raw_value)}") - if len(raw_value) > 0 and not isinstance(raw_value[0], dict): - raise InvalidArgumentError(description=f"expected dict for files[0], got {type(raw_value)}") - raw_value = build_from_mappings( - mappings=raw_value, - tenant_id=pipeline.tenant_id, - access_controller=_file_access_controller, - ) + match variable.value_type: + case SegmentType.FILE: + if not isinstance(raw_value, dict): + raise InvalidArgumentError(description=f"expected dict for file, got {type(raw_value)}") + raw_value = build_from_mapping( + mapping=raw_value, + tenant_id=pipeline.tenant_id, + access_controller=_file_access_controller, + ) + case SegmentType.ARRAY_FILE: + if not isinstance(raw_value, list): + raise InvalidArgumentError(description=f"expected list for files, got {type(raw_value)}") + if len(raw_value) > 0 and not isinstance(raw_value[0], dict): + raise InvalidArgumentError(description=f"expected dict for files[0], got {type(raw_value)}") + raw_value = build_from_mappings( + mappings=raw_value, + tenant_id=pipeline.tenant_id, + access_controller=_file_access_controller, + ) + case _: + pass new_value = build_segment_with_type(variable.value_type, raw_value) draft_var_srv.update_variable(variable, name=new_name, value=new_value) db.session.commit() diff --git a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_import.py b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_import.py index 732a6dc446..aa27458176 100644 --- a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_import.py +++ b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_import.py @@ -19,7 +19,7 @@ from fields.rag_pipeline_fields import ( ) from libs.login import current_account_with_tenant, login_required from models.dataset import Pipeline -from services.app_dsl_service import ImportStatus +from services.entities.dsl_entities import ImportStatus from services.rag_pipeline.rag_pipeline_dsl_service import RagPipelineDslService @@ -83,11 +83,13 @@ class RagPipelineImportApi(Resource): # Return appropriate status code based on result status = result.status - if status == ImportStatus.FAILED: - return result.model_dump(mode="json"), 400 - elif status == ImportStatus.PENDING: - return result.model_dump(mode="json"), 202 - return result.model_dump(mode="json"), 200 + match status: + case ImportStatus.FAILED: + return result.model_dump(mode="json"), 400 + case ImportStatus.PENDING: + return result.model_dump(mode="json"), 202 + case ImportStatus.COMPLETED | ImportStatus.COMPLETED_WITH_WARNINGS: + return result.model_dump(mode="json"), 200 @console_ns.route("/rag/pipelines/imports//confirm") diff --git a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py index 70dfe47d7f..ee146e8287 100644 --- a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py +++ b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py @@ -4,12 +4,12 @@ from typing import Any, Literal, cast from flask import abort, request from flask_restx import Resource, marshal_with # type: ignore -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, ValidationError from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, Forbidden, InternalServerError, NotFound import services +from controllers.common.controller_schemas import DefaultBlockConfigQuery, WorkflowListQuery, WorkflowUpdatePayload from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.error import ( @@ -40,6 +40,7 @@ from core.app.apps.pipeline.pipeline_generator import PipelineGenerator from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db from factories import variable_factory +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs import helper from libs.helper import TimestampField, UUIDStrOrEmpty from libs.login import current_account_with_tenant, current_user, login_required @@ -94,22 +95,6 @@ class PublishedWorkflowRunPayload(DraftWorkflowRunPayload): original_document_id: str | None = None -class DefaultBlockConfigQuery(BaseModel): - q: str | None = None - - -class WorkflowListQuery(BaseModel): - page: int = Field(default=1, ge=1, le=99999) - limit: int = Field(default=10, ge=1, le=100) - user_id: str | None = None - named_only: bool = False - - -class WorkflowUpdatePayload(BaseModel): - marked_name: str | None = Field(default=None, max_length=20) - marked_comment: str | None = Field(default=None, max_length=100) - - class NodeIdQuery(BaseModel): node_id: str @@ -361,89 +346,6 @@ class PublishedRagPipelineRunApi(Resource): raise InvokeRateLimitHttpError(ex.description) -# class RagPipelinePublishedDatasourceNodeRunStatusApi(Resource): -# @setup_required -# @login_required -# @account_initialization_required -# @get_rag_pipeline -# def post(self, pipeline: Pipeline, node_id: str): -# """ -# Run rag pipeline datasource -# """ -# # The role of the current user in the ta table must be admin, owner, or editor -# if not current_user.has_edit_permission: -# raise Forbidden() -# -# if not isinstance(current_user, Account): -# raise Forbidden() -# -# parser = (reqparse.RequestParser() -# .add_argument("job_id", type=str, required=True, nullable=False, location="json") -# .add_argument("datasource_type", type=str, required=True, location="json") -# ) -# args = parser.parse_args() -# -# job_id = args.get("job_id") -# if job_id == None: -# raise ValueError("missing job_id") -# datasource_type = args.get("datasource_type") -# if datasource_type == None: -# raise ValueError("missing datasource_type") -# -# rag_pipeline_service = RagPipelineService() -# result = rag_pipeline_service.run_datasource_workflow_node_status( -# pipeline=pipeline, -# node_id=node_id, -# job_id=job_id, -# account=current_user, -# datasource_type=datasource_type, -# is_published=True -# ) -# -# return result - - -# class RagPipelineDraftDatasourceNodeRunStatusApi(Resource): -# @setup_required -# @login_required -# @account_initialization_required -# @get_rag_pipeline -# def post(self, pipeline: Pipeline, node_id: str): -# """ -# Run rag pipeline datasource -# """ -# # The role of the current user in the ta table must be admin, owner, or editor -# if not current_user.has_edit_permission: -# raise Forbidden() -# -# if not isinstance(current_user, Account): -# raise Forbidden() -# -# parser = (reqparse.RequestParser() -# .add_argument("job_id", type=str, required=True, nullable=False, location="json") -# .add_argument("datasource_type", type=str, required=True, location="json") -# ) -# args = parser.parse_args() -# -# job_id = args.get("job_id") -# if job_id == None: -# raise ValueError("missing job_id") -# datasource_type = args.get("datasource_type") -# if datasource_type == None: -# raise ValueError("missing datasource_type") -# -# rag_pipeline_service = RagPipelineService() -# result = rag_pipeline_service.run_datasource_workflow_node_status( -# pipeline=pipeline, -# node_id=node_id, -# job_id=job_id, -# account=current_user, -# datasource_type=datasource_type, -# is_published=False -# ) -# -# return result -# @console_ns.route("/rag/pipelines//workflows/published/datasource/nodes//run") class RagPipelinePublishedDatasourceNodeRunApi(Resource): @console_ns.expect(console_ns.models[DatasourceNodeRunPayload.__name__]) diff --git a/api/controllers/console/explore/audio.py b/api/controllers/console/explore/audio.py index a37077af42..ab660d9dc3 100644 --- a/api/controllers/console/explore/audio.py +++ b/api/controllers/console/explore/audio.py @@ -1,7 +1,6 @@ import logging from flask import request -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError import services @@ -20,6 +19,7 @@ from controllers.console.app.error import ( ) from controllers.console.explore.wraps import InstalledAppResource from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.audio_service import AudioService from services.errors.audio import ( AudioTooLargeServiceError, diff --git a/api/controllers/console/explore/completion.py b/api/controllers/console/explore/completion.py index eacd7332fe..ccdccceaa6 100644 --- a/api/controllers/console/explore/completion.py +++ b/api/controllers/console/explore/completion.py @@ -2,7 +2,6 @@ import logging from typing import Any, Literal from uuid import UUID -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import InternalServerError, NotFound @@ -26,6 +25,7 @@ from core.errors.error import ( QuotaExceededError, ) from extensions.ext_database import db +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.datetime_utils import naive_utc_now from libs.login import current_user diff --git a/api/controllers/console/explore/installed_app.py b/api/controllers/console/explore/installed_app.py index 0740dd0e24..2d9a997fbf 100644 --- a/api/controllers/console/explore/installed_app.py +++ b/api/controllers/console/explore/installed_app.py @@ -1,21 +1,24 @@ import logging +from datetime import datetime from typing import Any from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, computed_field, field_validator from sqlalchemy import and_, select from werkzeug.exceptions import BadRequest, Forbidden, NotFound -from controllers.common.schema import get_or_create_model +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.explore.wraps import InstalledAppResource from controllers.console.wraps import account_initialization_required, cloud_edition_billing_resource_check from extensions.ext_database import db -from fields.installed_app_fields import app_fields, installed_app_fields, installed_app_list_fields +from fields.base import ResponseModel +from graphon.file import helpers as file_helpers from libs.datetime_utils import naive_utc_now from libs.login import current_account_with_tenant, login_required from models import App, InstalledApp, RecommendedApp +from models.model import IconType from services.account_service import TenantService from services.enterprise.enterprise_service import EnterpriseService from services.feature_service import FeatureService @@ -36,22 +39,97 @@ class InstalledAppsListQuery(BaseModel): logger = logging.getLogger(__name__) -app_model = get_or_create_model("InstalledAppInfo", app_fields) +def _build_icon_url(icon_type: str | IconType | None, icon: str | None) -> str | None: + if icon is None or icon_type is None: + return None + icon_type_value = icon_type.value if isinstance(icon_type, IconType) else str(icon_type) + if icon_type_value.lower() != IconType.IMAGE: + return None + return file_helpers.get_signed_file_url(icon) -installed_app_fields_copy = installed_app_fields.copy() -installed_app_fields_copy["app"] = fields.Nested(app_model) -installed_app_model = get_or_create_model("InstalledApp", installed_app_fields_copy) -installed_app_list_fields_copy = installed_app_list_fields.copy() -installed_app_list_fields_copy["installed_apps"] = fields.List(fields.Nested(installed_app_model)) -installed_app_list_model = get_or_create_model("InstalledAppList", installed_app_list_fields_copy) +def _safe_primitive(value: Any) -> Any: + if value is None or isinstance(value, (str, int, float, bool, datetime)): + return value + return None + + +class InstalledAppInfoResponse(ResponseModel): + id: str + name: str | None = None + mode: str | None = None + icon_type: str | None = None + icon: str | None = None + icon_background: str | None = None + use_icon_as_answer_icon: bool | None = None + + @field_validator("mode", "icon_type", mode="before") + @classmethod + def _normalize_enum_like(cls, value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @computed_field(return_type=str | None) # type: ignore[prop-decorator] + @property + def icon_url(self) -> str | None: + return _build_icon_url(self.icon_type, self.icon) + + +class InstalledAppResponse(ResponseModel): + id: str + app: InstalledAppInfoResponse + app_owner_tenant_id: str + is_pinned: bool + last_used_at: int | None = None + editable: bool + uninstallable: bool + + @field_validator("app", mode="before") + @classmethod + def _normalize_app(cls, value: Any) -> Any: + if isinstance(value, dict): + return value + return { + "id": _safe_primitive(getattr(value, "id", "")) or "", + "name": _safe_primitive(getattr(value, "name", None)), + "mode": _safe_primitive(getattr(value, "mode", None)), + "icon_type": _safe_primitive(getattr(value, "icon_type", None)), + "icon": _safe_primitive(getattr(value, "icon", None)), + "icon_background": _safe_primitive(getattr(value, "icon_background", None)), + "use_icon_as_answer_icon": _safe_primitive(getattr(value, "use_icon_as_answer_icon", None)), + } + + @field_validator("last_used_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class InstalledAppListResponse(ResponseModel): + installed_apps: list[InstalledAppResponse] + + +register_schema_models( + console_ns, + InstalledAppCreatePayload, + InstalledAppUpdatePayload, + InstalledAppsListQuery, + InstalledAppInfoResponse, + InstalledAppResponse, + InstalledAppListResponse, +) @console_ns.route("/installed-apps") class InstalledAppsListApi(Resource): @login_required @account_initialization_required - @marshal_with(installed_app_list_model) + @console_ns.response(200, "Success", console_ns.models[InstalledAppListResponse.__name__]) def get(self): query = InstalledAppsListQuery.model_validate(request.args.to_dict()) current_user, current_tenant_id = current_account_with_tenant() @@ -125,7 +203,9 @@ class InstalledAppsListApi(Resource): ) ) - return {"installed_apps": installed_app_list} + return InstalledAppListResponse.model_validate( + {"installed_apps": installed_app_list}, from_attributes=True + ).model_dump(mode="json") @login_required @account_initialization_required diff --git a/api/controllers/console/explore/message.py b/api/controllers/console/explore/message.py index 64d55d7ca3..209667d1d0 100644 --- a/api/controllers/console/explore/message.py +++ b/api/controllers/console/explore/message.py @@ -2,7 +2,6 @@ import logging from typing import Literal from flask import request -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, TypeAdapter from werkzeug.exceptions import InternalServerError, NotFound @@ -25,6 +24,7 @@ from core.app.entities.app_invoke_entities import InvokeFrom from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from fields.conversation_fields import ResultResponse from fields.message_fields import MessageInfiniteScrollPagination, MessageListItem, SuggestedQuestionsResponse +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.login import current_account_with_tenant from models.enums import FeedbackRating diff --git a/api/controllers/console/explore/recommended_app.py b/api/controllers/console/explore/recommended_app.py index c9920c97cf..55bd679b48 100644 --- a/api/controllers/console/explore/recommended_app.py +++ b/api/controllers/console/explore/recommended_app.py @@ -1,66 +1,83 @@ +from typing import Any + from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, computed_field, field_validator from constants.languages import languages -from controllers.common.schema import get_or_create_model +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required -from libs.helper import AppIconUrlField +from fields.base import ResponseModel +from libs.helper import build_icon_url from libs.login import current_user, login_required from services.recommended_app_service import RecommendedAppService -app_fields = { - "id": fields.String, - "name": fields.String, - "mode": fields.String, - "icon": fields.String, - "icon_type": fields.String, - "icon_url": AppIconUrlField, - "icon_background": fields.String, -} - -app_model = get_or_create_model("RecommendedAppInfo", app_fields) - -recommended_app_fields = { - "app": fields.Nested(app_model, attribute="app"), - "app_id": fields.String, - "description": fields.String(attribute="description"), - "copyright": fields.String, - "privacy_policy": fields.String, - "custom_disclaimer": fields.String, - "category": fields.String, - "position": fields.Integer, - "is_listed": fields.Boolean, - "can_trial": fields.Boolean, -} - -recommended_app_model = get_or_create_model("RecommendedApp", recommended_app_fields) - -recommended_app_list_fields = { - "recommended_apps": fields.List(fields.Nested(recommended_app_model)), - "categories": fields.List(fields.String), -} - -recommended_app_list_model = get_or_create_model("RecommendedAppList", recommended_app_list_fields) - class RecommendedAppsQuery(BaseModel): language: str | None = Field(default=None) -console_ns.schema_model( - RecommendedAppsQuery.__name__, - RecommendedAppsQuery.model_json_schema(ref_template="#/definitions/{model}"), +class RecommendedAppInfoResponse(ResponseModel): + id: str + name: str | None = None + mode: str | None = None + icon: str | None = None + icon_type: str | None = None + icon_background: str | None = None + + @staticmethod + def _normalize_enum_like(value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("mode", "icon_type", mode="before") + @classmethod + def _normalize_enum_fields(cls, value: Any) -> str | None: + return cls._normalize_enum_like(value) + + @computed_field(return_type=str | None) # type: ignore[prop-decorator] + @property + def icon_url(self) -> str | None: + return build_icon_url(self.icon_type, self.icon) + + +class RecommendedAppResponse(ResponseModel): + app: RecommendedAppInfoResponse | None = None + app_id: str + description: str | None = None + copyright: str | None = None + privacy_policy: str | None = None + custom_disclaimer: str | None = None + category: str | None = None + position: int | None = None + is_listed: bool | None = None + can_trial: bool | None = None + + +class RecommendedAppListResponse(ResponseModel): + recommended_apps: list[RecommendedAppResponse] + categories: list[str] + + +register_schema_models( + console_ns, + RecommendedAppsQuery, + RecommendedAppInfoResponse, + RecommendedAppResponse, + RecommendedAppListResponse, ) @console_ns.route("/explore/apps") class RecommendedAppListApi(Resource): @console_ns.expect(console_ns.models[RecommendedAppsQuery.__name__]) + @console_ns.response(200, "Success", console_ns.models[RecommendedAppListResponse.__name__]) @login_required @account_initialization_required - @marshal_with(recommended_app_list_model) def get(self): # language args args = RecommendedAppsQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore @@ -72,7 +89,10 @@ class RecommendedAppListApi(Resource): else: language_prefix = languages[0] - return RecommendedAppService.get_recommended_apps_and_categories(language_prefix) + return RecommendedAppListResponse.model_validate( + RecommendedAppService.get_recommended_apps_and_categories(language_prefix), + from_attributes=True, + ).model_dump(mode="json") @console_ns.route("/explore/apps/") diff --git a/api/controllers/console/explore/trial.py b/api/controllers/console/explore/trial.py index e432574434..1456301a24 100644 --- a/api/controllers/console/explore/trial.py +++ b/api/controllers/console/explore/trial.py @@ -3,8 +3,6 @@ from typing import Any, Literal, cast from flask import request from flask_restx import Resource, fields, marshal, marshal_with -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel from sqlalchemy import select from werkzeug.exceptions import Forbidden, InternalServerError, NotFound @@ -61,6 +59,8 @@ from fields.workflow_fields import ( workflow_fields, workflow_partial_fields, ) +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import uuid_value from libs.login import current_user @@ -169,6 +169,7 @@ console_ns.schema_model( class TrialAppWorkflowRunApi(TrialAppResource): + @trial_feature_enable @console_ns.expect(console_ns.models[WorkflowRunRequest.__name__]) def post(self, trial_app): """ @@ -210,6 +211,7 @@ class TrialAppWorkflowRunApi(TrialAppResource): class TrialAppWorkflowTaskStopApi(TrialAppResource): + @trial_feature_enable def post(self, trial_app, task_id: str): """ Stop workflow task @@ -290,7 +292,6 @@ class TrialChatApi(TrialAppResource): class TrialMessageSuggestedQuestionApi(TrialAppResource): - @trial_feature_enable def get(self, trial_app, message_id): app_model = trial_app app_mode = AppMode.value_of(app_model.mode) @@ -470,7 +471,6 @@ class TrialCompletionApi(TrialAppResource): class TrialSitApi(Resource): """Resource for trial app sites.""" - @trial_feature_enable @get_app_model_with_trial(None) def get(self, app_model): """Retrieve app site info. @@ -492,7 +492,6 @@ class TrialSitApi(Resource): class TrialAppParameterApi(Resource): """Resource for app variables.""" - @trial_feature_enable @get_app_model_with_trial(None) def get(self, app_model): """Retrieve app parameters.""" @@ -521,7 +520,6 @@ class TrialAppParameterApi(Resource): class AppApi(Resource): - @trial_feature_enable @get_app_model_with_trial(None) @marshal_with(app_detail_with_site_model) def get(self, app_model): @@ -534,7 +532,6 @@ class AppApi(Resource): class AppWorkflowApi(Resource): - @trial_feature_enable @get_app_model_with_trial(None) @marshal_with(workflow_model) def get(self, app_model): @@ -547,7 +544,6 @@ class AppWorkflowApi(Resource): class DatasetListApi(Resource): - @trial_feature_enable @get_app_model_with_trial(None) def get(self, app_model): page = request.args.get("page", default=1, type=int) diff --git a/api/controllers/console/explore/workflow.py b/api/controllers/console/explore/workflow.py index da88de6776..438cce4fd8 100644 --- a/api/controllers/console/explore/workflow.py +++ b/api/controllers/console/explore/workflow.py @@ -1,7 +1,5 @@ import logging -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError from controllers.common.controller_schemas import WorkflowRunPayload @@ -23,6 +21,8 @@ from core.errors.error import ( QuotaExceededError, ) from extensions.ext_redis import redis_client +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.login import current_account_with_tenant from models.model import AppMode, InstalledApp diff --git a/api/controllers/console/extension.py b/api/controllers/console/extension.py index efa46c9779..7a6356d052 100644 --- a/api/controllers/console/extension.py +++ b/api/controllers/console/extension.py @@ -1,15 +1,18 @@ +from datetime import datetime +from typing import Any + from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, TypeAdapter, field_validator from constants import HIDDEN_VALUE -from fields.api_based_extension_fields import api_based_extension_fields +from fields.base import ResponseModel from libs.login import current_account_with_tenant, login_required from models.api_based_extension import APIBasedExtension from services.api_based_extension_service import APIBasedExtensionService from services.code_based_extension_service import CodeBasedExtensionService -from ..common.schema import register_schema_models +from ..common.schema import DEFAULT_REF_TEMPLATE_SWAGGER_2_0, register_schema_models from . import console_ns from .wraps import account_initialization_required, setup_required @@ -24,12 +27,52 @@ class APIBasedExtensionPayload(BaseModel): api_key: str = Field(description="API key for authentication") -register_schema_models(console_ns, APIBasedExtensionPayload) +class CodeBasedExtensionResponse(ResponseModel): + module: str = Field(description="Module name") + data: Any = Field(description="Extension data") -api_based_extension_model = console_ns.model("ApiBasedExtensionModel", api_based_extension_fields) +def _mask_api_key(api_key: str) -> str: + if not api_key: + return api_key + if len(api_key) <= 8: + return api_key[0] + "******" + api_key[-1] + return api_key[:3] + "******" + api_key[-3:] -api_based_extension_list_model = fields.List(fields.Nested(api_based_extension_model)) + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class APIBasedExtensionResponse(ResponseModel): + id: str + name: str + api_endpoint: str + api_key: str + created_at: int | None = None + + @field_validator("api_key", mode="before") + @classmethod + def _normalize_api_key(cls, value: str) -> str: + return _mask_api_key(value) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +register_schema_models(console_ns, APIBasedExtensionPayload, CodeBasedExtensionResponse, APIBasedExtensionResponse) +console_ns.schema_model( + "APIBasedExtensionListResponse", + TypeAdapter(list[APIBasedExtensionResponse]).json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), +) + + +def _serialize_api_based_extension(extension: APIBasedExtension) -> dict[str, Any]: + return APIBasedExtensionResponse.model_validate(extension, from_attributes=True).model_dump(mode="json") @console_ns.route("/code-based-extension") @@ -40,10 +83,7 @@ class CodeBasedExtensionAPI(Resource): @console_ns.response( 200, "Success", - console_ns.model( - "CodeBasedExtensionResponse", - {"module": fields.String(description="Module name"), "data": fields.Raw(description="Extension data")}, - ), + console_ns.models[CodeBasedExtensionResponse.__name__], ) @setup_required @login_required @@ -51,30 +91,34 @@ class CodeBasedExtensionAPI(Resource): def get(self): query = CodeBasedExtensionQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore - return {"module": query.module, "data": CodeBasedExtensionService.get_code_based_extension(query.module)} + return CodeBasedExtensionResponse( + module=query.module, + data=CodeBasedExtensionService.get_code_based_extension(query.module), + ).model_dump(mode="json") @console_ns.route("/api-based-extension") class APIBasedExtensionAPI(Resource): @console_ns.doc("get_api_based_extensions") @console_ns.doc(description="Get all API-based extensions for current tenant") - @console_ns.response(200, "Success", api_based_extension_list_model) + @console_ns.response(200, "Success", console_ns.models["APIBasedExtensionListResponse"]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def get(self): _, tenant_id = current_account_with_tenant() - return APIBasedExtensionService.get_all_by_tenant_id(tenant_id) + return [ + _serialize_api_based_extension(extension) + for extension in APIBasedExtensionService.get_all_by_tenant_id(tenant_id) + ] @console_ns.doc("create_api_based_extension") @console_ns.doc(description="Create a new API-based extension") @console_ns.expect(console_ns.models[APIBasedExtensionPayload.__name__]) - @console_ns.response(201, "Extension created successfully", api_based_extension_model) + @console_ns.response(201, "Extension created successfully", console_ns.models[APIBasedExtensionResponse.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def post(self): payload = APIBasedExtensionPayload.model_validate(console_ns.payload or {}) _, current_tenant_id = current_account_with_tenant() @@ -86,7 +130,7 @@ class APIBasedExtensionAPI(Resource): api_key=payload.api_key, ) - return APIBasedExtensionService.save(extension_data) + return _serialize_api_based_extension(APIBasedExtensionService.save(extension_data)) @console_ns.route("/api-based-extension/") @@ -94,26 +138,26 @@ class APIBasedExtensionDetailAPI(Resource): @console_ns.doc("get_api_based_extension") @console_ns.doc(description="Get API-based extension by ID") @console_ns.doc(params={"id": "Extension ID"}) - @console_ns.response(200, "Success", api_based_extension_model) + @console_ns.response(200, "Success", console_ns.models[APIBasedExtensionResponse.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def get(self, id): api_based_extension_id = str(id) _, tenant_id = current_account_with_tenant() - return APIBasedExtensionService.get_with_tenant_id(tenant_id, api_based_extension_id) + return _serialize_api_based_extension( + APIBasedExtensionService.get_with_tenant_id(tenant_id, api_based_extension_id) + ) @console_ns.doc("update_api_based_extension") @console_ns.doc(description="Update API-based extension") @console_ns.doc(params={"id": "Extension ID"}) @console_ns.expect(console_ns.models[APIBasedExtensionPayload.__name__]) - @console_ns.response(200, "Extension updated successfully", api_based_extension_model) + @console_ns.response(200, "Extension updated successfully", console_ns.models[APIBasedExtensionResponse.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def post(self, id): api_based_extension_id = str(id) _, current_tenant_id = current_account_with_tenant() @@ -128,7 +172,7 @@ class APIBasedExtensionDetailAPI(Resource): if payload.api_key != HIDDEN_VALUE: extension_data_from_db.api_key = payload.api_key - return APIBasedExtensionService.save(extension_data_from_db) + return _serialize_api_based_extension(APIBasedExtensionService.save(extension_data_from_db)) @console_ns.doc("delete_api_based_extension") @console_ns.doc(description="Delete API-based extension") diff --git a/api/controllers/console/human_input_form.py b/api/controllers/console/human_input_form.py index e37e78c966..845af37365 100644 --- a/api/controllers/console/human_input_form.py +++ b/api/controllers/console/human_input_form.py @@ -7,7 +7,8 @@ import logging from collections.abc import Generator from flask import Response, jsonify, request -from flask_restx import Resource, reqparse +from flask_restx import Resource +from pydantic import BaseModel from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -33,6 +34,11 @@ from services.workflow_event_snapshot_service import build_workflow_event_stream logger = logging.getLogger(__name__) +class HumanInputFormSubmitPayload(BaseModel): + inputs: dict + action: str + + def _jsonify_form_definition(form: Form) -> Response: payload = form.get_definition().model_dump() payload["expiration_time"] = int(form.expiration_time.timestamp()) @@ -84,10 +90,7 @@ class ConsoleHumanInputFormApi(Resource): "action": "Approve" } """ - parser = reqparse.RequestParser() - parser.add_argument("inputs", type=dict, required=True, location="json") - parser.add_argument("action", type=str, required=True, location="json") - args = parser.parse_args() + payload = HumanInputFormSubmitPayload.model_validate(request.get_json()) current_user, _ = current_account_with_tenant() service = HumanInputService(db.engine) @@ -107,8 +110,8 @@ class ConsoleHumanInputFormApi(Resource): service.submit_form_by_token( recipient_type=recipient_type, form_token=form_token, - selected_action_id=args["action"], - form_data=args["inputs"], + selected_action_id=payload.action, + form_data=payload.inputs, submission_user_id=current_user.id, ) @@ -168,12 +171,13 @@ class ConsoleWorkflowEventsApi(Resource): else: msg_generator = MessageGenerator() generator: BaseAppGenerator - if app.mode == AppMode.ADVANCED_CHAT: - generator = AdvancedChatAppGenerator() - elif app.mode == AppMode.WORKFLOW: - generator = WorkflowAppGenerator() - else: - raise InvalidArgumentError(f"cannot subscribe to workflow run, workflow_run_id={workflow_run.id}") + match app.mode: + case AppMode.ADVANCED_CHAT: + generator = AdvancedChatAppGenerator() + case AppMode.WORKFLOW: + generator = WorkflowAppGenerator() + case _: + raise InvalidArgumentError(f"cannot subscribe to workflow run, workflow_run_id={workflow_run.id}") include_state_snapshot = request.args.get("include_state_snapshot", "false").lower() == "true" diff --git a/api/controllers/console/notification.py b/api/controllers/console/notification.py index 180167402a..5d46470173 100644 --- a/api/controllers/console/notification.py +++ b/api/controllers/console/notification.py @@ -1,3 +1,4 @@ +from collections.abc import Mapping from typing import TypedDict from flask import request @@ -13,6 +14,14 @@ from services.billing_service import BillingService _FALLBACK_LANG = "en-US" +class NotificationLangContent(TypedDict, total=False): + lang: str + title: str + subtitle: str + body: str + titlePicUrl: str + + class NotificationItemDict(TypedDict): notification_id: str | None frequency: str | None @@ -28,9 +37,11 @@ class NotificationResponseDict(TypedDict): notifications: list[NotificationItemDict] -def _pick_lang_content(contents: dict, lang: str) -> dict: +def _pick_lang_content(contents: Mapping[str, NotificationLangContent], lang: str) -> NotificationLangContent: """Return the single LangContent for *lang*, falling back to English.""" - return contents.get(lang) or contents.get(_FALLBACK_LANG) or next(iter(contents.values()), {}) + return ( + contents.get(lang) or contents.get(_FALLBACK_LANG) or next(iter(contents.values()), NotificationLangContent()) + ) class DismissNotificationPayload(BaseModel): @@ -71,7 +82,7 @@ class NotificationApi(Resource): notifications: list[NotificationItemDict] = [] for notification in result.get("notifications") or []: - contents: dict = notification.get("contents") or {} + contents: Mapping[str, NotificationLangContent] = notification.get("contents") or {} lang_content = _pick_lang_content(contents, lang) item: NotificationItemDict = { "notification_id": notification.get("notificationId"), diff --git a/api/controllers/console/remote_files.py b/api/controllers/console/remote_files.py index 551c86fd82..2a46d2250a 100644 --- a/api/controllers/console/remote_files.py +++ b/api/controllers/console/remote_files.py @@ -2,7 +2,6 @@ import urllib.parse import httpx from flask_restx import Resource -from graphon.file import helpers as file_helpers from pydantic import BaseModel, Field import services @@ -16,6 +15,7 @@ from controllers.console import console_ns from core.helper import ssrf_proxy from extensions.ext_database import db from fields.file_fields import FileWithSignedUrl, RemoteFileInfo +from graphon.file import helpers as file_helpers from libs.login import current_account_with_tenant, login_required from services.file_service import FileService diff --git a/api/controllers/console/socketio/__init__.py b/api/controllers/console/socketio/__init__.py new file mode 100644 index 0000000000..8b13789179 --- /dev/null +++ b/api/controllers/console/socketio/__init__.py @@ -0,0 +1 @@ + diff --git a/api/controllers/console/socketio/workflow.py b/api/controllers/console/socketio/workflow.py new file mode 100644 index 0000000000..b4f03593fd --- /dev/null +++ b/api/controllers/console/socketio/workflow.py @@ -0,0 +1,108 @@ +import logging +from collections.abc import Callable +from typing import cast + +from flask import Request as FlaskRequest + +from extensions.ext_socketio import sio +from libs.passport import PassportService +from libs.token import extract_access_token +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository +from services.account_service import AccountService +from services.workflow_collaboration_service import WorkflowCollaborationService + +repository = WorkflowCollaborationRepository() +collaboration_service = WorkflowCollaborationService(repository, sio) + + +def _sio_on(event: str) -> Callable[[Callable[..., object]], Callable[..., object]]: + return cast(Callable[[Callable[..., object]], Callable[..., object]], sio.on(event)) + + +@_sio_on("connect") +def socket_connect(sid, environ, auth): + """ + WebSocket connect event, do authentication here. + """ + try: + request_environ = FlaskRequest(environ) + token = extract_access_token(request_environ) + except Exception: + logging.exception("Failed to extract token") + token = None + + if not token: + logging.warning("Socket connect rejected: missing token (sid=%s)", sid) + return False + + try: + decoded = PassportService().verify(token) + user_id = decoded.get("user_id") + if not user_id: + logging.warning("Socket connect rejected: missing user_id (sid=%s)", sid) + return False + + with sio.app.app_context(): + user = AccountService.load_logged_in_account(account_id=user_id) + if not user: + logging.warning("Socket connect rejected: user not found (user_id=%s, sid=%s)", user_id, sid) + return False + if not user.has_edit_permission: + logging.warning("Socket connect rejected: no edit permission (user_id=%s, sid=%s)", user_id, sid) + return False + + collaboration_service.save_socket_identity(sid, user) + return True + + except Exception: + logging.exception("Socket authentication failed") + return False + + +@_sio_on("user_connect") +def handle_user_connect(sid, data): + """ + Handle user connect event. Each session (tab) is treated as an independent collaborator. + """ + workflow_id = data.get("workflow_id") + if not workflow_id: + return {"msg": "workflow_id is required"}, 400 + + result = collaboration_service.authorize_and_join_workflow_room(workflow_id, sid) + if not result: + return {"msg": "unauthorized"}, 401 + + user_id, is_leader = result + return {"msg": "connected", "user_id": user_id, "sid": sid, "isLeader": is_leader} + + +@_sio_on("disconnect") +def handle_disconnect(sid): + """ + Handle session disconnect event. Remove the specific session from online users. + """ + collaboration_service.disconnect_session(sid) + + +@_sio_on("collaboration_event") +def handle_collaboration_event(sid, data): + """ + Handle general collaboration events, include: + 1. mouse_move + 2. vars_and_features_update + 3. sync_request (ask leader to update graph) + 4. app_state_update + 5. mcp_server_update + 6. workflow_update + 7. comments_update + 8. node_panel_presence + """ + return collaboration_service.relay_collaboration_event(sid, data) + + +@_sio_on("graph_event") +def handle_graph_event(sid, data): + """ + Handle graph events - simple broadcast relay. + """ + return collaboration_service.relay_graph_event(sid, data) diff --git a/api/controllers/console/tag/tags.py b/api/controllers/console/tag/tags.py index 39b84d3869..614bf03ea5 100644 --- a/api/controllers/console/tag/tags.py +++ b/api/controllers/console/tag/tags.py @@ -1,13 +1,14 @@ from typing import Literal from flask import request -from flask_restx import Namespace, Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import Forbidden from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required +from fields.base import ResponseModel from libs.login import current_account_with_tenant, login_required from models.enums import TagType from services.tag_service import ( @@ -18,17 +19,6 @@ from services.tag_service import ( UpdateTagPayload, ) -dataset_tag_fields = { - "id": fields.String, - "name": fields.String, - "type": fields.String, - "binding_count": fields.String, -} - - -def build_dataset_tag_fields(api_or_ns: Namespace): - return api_or_ns.model("DataSetTag", dataset_tag_fields) - class TagBasePayload(BaseModel): name: str = Field(description="Tag name", min_length=1, max_length=50) @@ -52,12 +42,36 @@ class TagListQueryParam(BaseModel): keyword: str | None = Field(None, description="Search keyword") +class TagResponse(ResponseModel): + id: str + name: str + type: str | None = None + binding_count: str | None = None + + @field_validator("type", mode="before") + @classmethod + def normalize_type(cls, value: TagType | str | None) -> str | None: + if value is None: + return None + if isinstance(value, TagType): + return value.value + return value + + @field_validator("binding_count", mode="before") + @classmethod + def normalize_binding_count(cls, value: int | str | None) -> str | None: + if value is None: + return None + return str(value) + + register_schema_models( console_ns, TagBasePayload, TagBindingPayload, TagBindingRemovePayload, TagListQueryParam, + TagResponse, ) @@ -69,14 +83,18 @@ class TagListApi(Resource): @console_ns.doc( params={"type": 'Tag type filter. Can be "knowledge" or "app".', "keyword": "Search keyword for tag name."} ) - @marshal_with(dataset_tag_fields) + @console_ns.doc(responses={200: ("Success", [console_ns.models[TagResponse.__name__]])}) def get(self): _, current_tenant_id = current_account_with_tenant() raw_args = request.args.to_dict() param = TagListQueryParam.model_validate(raw_args) tags = TagService.get_tags(param.type, current_tenant_id, param.keyword) - return tags, 200 + serialized_tags = [ + TagResponse.model_validate(tag, from_attributes=True).model_dump(mode="json") for tag in tags + ] + + return serialized_tags, 200 @console_ns.expect(console_ns.models[TagBasePayload.__name__]) @setup_required @@ -91,7 +109,9 @@ class TagListApi(Resource): payload = TagBasePayload.model_validate(console_ns.payload or {}) tag = TagService.save_tags(SaveTagPayload(name=payload.name, type=payload.type)) - response = {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": 0} + response = TagResponse.model_validate( + {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": 0} + ).model_dump(mode="json") return response, 200 @@ -114,7 +134,9 @@ class TagUpdateDeleteApi(Resource): binding_count = TagService.get_tag_binding_count(tag_id) - response = {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": binding_count} + response = TagResponse.model_validate( + {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": binding_count} + ).model_dump(mode="json") return response, 200 diff --git a/api/controllers/console/workspace/__init__.py b/api/controllers/console/workspace/__init__.py index 60f712e476..59dd29fdac 100644 --- a/api/controllers/console/workspace/__init__.py +++ b/api/controllers/console/workspace/__init__.py @@ -35,22 +35,24 @@ def plugin_permission_required( return view(*args, **kwargs) if install_required: - if permission.install_permission == TenantPluginPermission.InstallPermission.NOBODY: - raise Forbidden() - if permission.install_permission == TenantPluginPermission.InstallPermission.ADMINS: - if not user.is_admin_or_owner: + match permission.install_permission: + case TenantPluginPermission.InstallPermission.NOBODY: raise Forbidden() - if permission.install_permission == TenantPluginPermission.InstallPermission.EVERYONE: - pass + case TenantPluginPermission.InstallPermission.ADMINS: + if not user.is_admin_or_owner: + raise Forbidden() + case TenantPluginPermission.InstallPermission.EVERYONE: + pass if debug_required: - if permission.debug_permission == TenantPluginPermission.DebugPermission.NOBODY: - raise Forbidden() - if permission.debug_permission == TenantPluginPermission.DebugPermission.ADMINS: - if not user.is_admin_or_owner: + match permission.debug_permission: + case TenantPluginPermission.DebugPermission.NOBODY: raise Forbidden() - if permission.debug_permission == TenantPluginPermission.DebugPermission.EVERYONE: - pass + case TenantPluginPermission.DebugPermission.ADMINS: + if not user.is_admin_or_owner: + raise Forbidden() + case TenantPluginPermission.DebugPermission.EVERYONE: + pass return view(*args, **kwargs) diff --git a/api/controllers/console/workspace/account.py b/api/controllers/console/workspace/account.py index 626d330e9d..44404005b2 100644 --- a/api/controllers/console/workspace/account.py +++ b/api/controllers/console/workspace/account.py @@ -1,14 +1,13 @@ from __future__ import annotations from datetime import datetime -from typing import Literal +from typing import Any, Literal import pytz from flask import request -from flask_restx import Resource, fields, marshal_with +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator, model_validator from sqlalchemy import select -from sqlalchemy.orm import sessionmaker from configs import dify_config from constants.languages import supported_language @@ -38,9 +37,11 @@ from controllers.console.wraps import ( setup_required, ) from extensions.ext_database import db +from fields.base import ResponseModel from fields.member_fields import Account as AccountResponse +from graphon.file import helpers as file_helpers from libs.datetime_utils import naive_utc_now -from libs.helper import EmailStr, TimestampField, extract_remote_ip, timezone +from libs.helper import EmailStr, extract_remote_ip, timezone from libs.login import current_account_with_tenant, login_required from models import AccountIntegrate, InvitationCode from models.account import AccountStatus, InvitationCodeStatus @@ -75,6 +76,10 @@ class AccountAvatarPayload(BaseModel): avatar: str +class AccountAvatarQuery(BaseModel): + avatar: str = Field(..., description="Avatar file ID") + + class AccountInterfaceLanguagePayload(BaseModel): interface_language: str @@ -160,6 +165,7 @@ def reg(cls: type[BaseModel]): reg(AccountInitPayload) reg(AccountNamePayload) reg(AccountAvatarPayload) +reg(AccountAvatarQuery) reg(AccountInterfaceLanguagePayload) reg(AccountInterfaceThemePayload) reg(AccountTimezonePayload) @@ -175,21 +181,61 @@ reg(CheckEmailUniquePayload) register_schema_models(console_ns, AccountResponse) -def _serialize_account(account) -> dict: +def _serialize_account(account) -> dict[str, Any]: return AccountResponse.model_validate(account, from_attributes=True).model_dump(mode="json") -integrate_fields = { - "provider": fields.String, - "created_at": TimestampField, - "is_bound": fields.Boolean, - "link": fields.String, -} +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -integrate_model = console_ns.model("AccountIntegrate", integrate_fields) -integrate_list_model = console_ns.model( - "AccountIntegrateList", - {"data": fields.List(fields.Nested(integrate_model))}, + +class AccountIntegrateResponse(ResponseModel): + provider: str + created_at: int | None = None + is_bound: bool + link: str | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class AccountIntegrateListResponse(ResponseModel): + data: list[AccountIntegrateResponse] + + +class EducationVerifyResponse(ResponseModel): + token: str | None = None + + +class EducationStatusResponse(ResponseModel): + result: bool | None = None + is_student: bool | None = None + expire_at: int | None = None + allow_refresh: bool | None = None + + @field_validator("expire_at", mode="before") + @classmethod + def _normalize_expire_at(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class EducationAutocompleteResponse(ResponseModel): + data: list[str] = Field(default_factory=list) + curr_page: int | None = None + has_next: bool | None = None + + +register_schema_models( + console_ns, + AccountIntegrateResponse, + AccountIntegrateListResponse, + EducationVerifyResponse, + EducationStatusResponse, + EducationAutocompleteResponse, ) @@ -269,6 +315,18 @@ class AccountNameApi(Resource): @console_ns.route("/account/avatar") class AccountAvatarApi(Resource): + @console_ns.expect(console_ns.models[AccountAvatarQuery.__name__]) + @console_ns.doc("get_account_avatar") + @console_ns.doc(description="Get account avatar url") + @setup_required + @login_required + @account_initialization_required + def get(self): + args = AccountAvatarQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore + + avatar_url = file_helpers.get_signed_file_url(args.avatar) + return {"avatar_url": avatar_url} + @console_ns.expect(console_ns.models[AccountAvatarPayload.__name__]) @setup_required @login_required @@ -360,7 +418,7 @@ class AccountIntegrateApi(Resource): @setup_required @login_required @account_initialization_required - @marshal_with(integrate_list_model) + @console_ns.response(200, "Success", console_ns.models[AccountIntegrateListResponse.__name__]) def get(self): account, _ = current_account_with_tenant() @@ -396,7 +454,9 @@ class AccountIntegrateApi(Resource): } ) - return {"data": integrate_data} + return AccountIntegrateListResponse( + data=[AccountIntegrateResponse.model_validate(item) for item in integrate_data] + ).model_dump(mode="json") @console_ns.route("/account/delete/verify") @@ -448,31 +508,22 @@ class AccountDeleteUpdateFeedbackApi(Resource): @console_ns.route("/account/education/verify") class EducationVerifyApi(Resource): - verify_fields = { - "token": fields.String, - } - @setup_required @login_required @account_initialization_required @only_edition_cloud @cloud_edition_billing_enabled - @marshal_with(verify_fields) + @console_ns.response(200, "Success", console_ns.models[EducationVerifyResponse.__name__]) def get(self): account, _ = current_account_with_tenant() - return BillingService.EducationIdentity.verify(account.id, account.email) + return EducationVerifyResponse.model_validate( + BillingService.EducationIdentity.verify(account.id, account.email) or {} + ).model_dump(mode="json") @console_ns.route("/account/education") class EducationApi(Resource): - status_fields = { - "result": fields.Boolean, - "is_student": fields.Boolean, - "expire_at": TimestampField, - "allow_refresh": fields.Boolean, - } - @console_ns.expect(console_ns.models[EducationActivatePayload.__name__]) @setup_required @login_required @@ -492,37 +543,33 @@ class EducationApi(Resource): @account_initialization_required @only_edition_cloud @cloud_edition_billing_enabled - @marshal_with(status_fields) + @console_ns.response(200, "Success", console_ns.models[EducationStatusResponse.__name__]) def get(self): account, _ = current_account_with_tenant() - res = BillingService.EducationIdentity.status(account.id) + res = BillingService.EducationIdentity.status(account.id) or {} # convert expire_at to UTC timestamp from isoformat if res and "expire_at" in res: res["expire_at"] = datetime.fromisoformat(res["expire_at"]).astimezone(pytz.utc) - return res + return EducationStatusResponse.model_validate(res).model_dump(mode="json") @console_ns.route("/account/education/autocomplete") class EducationAutoCompleteApi(Resource): - data_fields = { - "data": fields.List(fields.String), - "curr_page": fields.Integer, - "has_next": fields.Boolean, - } - @console_ns.expect(console_ns.models[EducationAutocompleteQuery.__name__]) @setup_required @login_required @account_initialization_required @only_edition_cloud @cloud_edition_billing_enabled - @marshal_with(data_fields) + @console_ns.response(200, "Success", console_ns.models[EducationAutocompleteResponse.__name__]) def get(self): payload = request.args.to_dict(flat=True) args = EducationAutocompleteQuery.model_validate(payload) - return BillingService.EducationIdentity.autocomplete(args.keywords, args.page, args.limit) + return EducationAutocompleteResponse.model_validate( + BillingService.EducationIdentity.autocomplete(args.keywords, args.page, args.limit) or {} + ).model_dump(mode="json") @console_ns.route("/account/change-email") @@ -562,8 +609,7 @@ class ChangeEmailSendEmailApi(Resource): user_email = current_user.email else: - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(args.email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(args.email) if account is None: raise AccountNotFound() email_for_sending = account.email diff --git a/api/controllers/console/workspace/agent_providers.py b/api/controllers/console/workspace/agent_providers.py index 3fdcbc4710..764f488755 100644 --- a/api/controllers/console/workspace/agent_providers.py +++ b/api/controllers/console/workspace/agent_providers.py @@ -1,8 +1,8 @@ from flask_restx import Resource, fields -from graphon.model_runtime.utils.encoders import jsonable_encoder from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, setup_required +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from services.agent_service import AgentService diff --git a/api/controllers/console/workspace/endpoint.py b/api/controllers/console/workspace/endpoint.py index b6b9deb1f9..f45b72f390 100644 --- a/api/controllers/console/workspace/endpoint.py +++ b/api/controllers/console/workspace/endpoint.py @@ -2,13 +2,13 @@ from typing import Any from flask import request from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required from core.plugin.impl.exc import PluginPermissionDeniedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from services.plugin.endpoint_service import EndpointService diff --git a/api/controllers/console/workspace/load_balancing_config.py b/api/controllers/console/workspace/load_balancing_config.py index e4cfca9fa4..2a6f37aec8 100644 --- a/api/controllers/console/workspace/load_balancing_config.py +++ b/api/controllers/console/workspace/load_balancing_config.py @@ -1,12 +1,12 @@ from flask_restx import Resource -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic import BaseModel from werkzeug.exceptions import Forbidden from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, setup_required +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from libs.login import current_account_with_tenant, login_required from models import TenantAccountRole from services.model_load_balancing_service import ModelLoadBalancingService diff --git a/api/controllers/console/workspace/model_providers.py b/api/controllers/console/workspace/model_providers.py index cbb9677309..4b10561fdb 100644 --- a/api/controllers/console/workspace/model_providers.py +++ b/api/controllers/console/workspace/model_providers.py @@ -3,13 +3,13 @@ from typing import Any, Literal from flask import request, send_file from flask_restx import Resource -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import uuid_value from libs.login import current_account_with_tenant, login_required from services.billing_service import BillingService diff --git a/api/controllers/console/workspace/models.py b/api/controllers/console/workspace/models.py index 9182dbb510..b2d07ff8f9 100644 --- a/api/controllers/console/workspace/models.py +++ b/api/controllers/console/workspace/models.py @@ -3,14 +3,14 @@ from typing import Any, cast from flask import request from flask_restx import Resource -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from controllers.common.schema import register_enum_models, register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import uuid_value from libs.login import current_account_with_tenant, login_required from services.model_load_balancing_service import ModelLoadBalancingService @@ -465,7 +465,7 @@ class ModelProviderModelDisableApi(Resource): class ParserValidate(BaseModel): model: str model_type: ModelType - credentials: dict + credentials: dict[str, Any] console_ns.schema_model( diff --git a/api/controllers/console/workspace/plugin.py b/api/controllers/console/workspace/plugin.py index aa674a63b3..b3e344ccea 100644 --- a/api/controllers/console/workspace/plugin.py +++ b/api/controllers/console/workspace/plugin.py @@ -4,7 +4,6 @@ from typing import Any, Literal from flask import request, send_file from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field from werkzeug.datastructures import FileStorage from werkzeug.exceptions import Forbidden @@ -15,6 +14,7 @@ from controllers.console import console_ns from controllers.console.workspace import plugin_permission_required from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required from core.plugin.impl.exc import PluginDaemonClientSideError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from models.account import TenantPluginAutoUpgradeStrategy, TenantPluginPermission from services.plugin.plugin_auto_upgrade_service import PluginAutoUpgradeService diff --git a/api/controllers/console/workspace/tool_providers.py b/api/controllers/console/workspace/tool_providers.py index c9956501e2..471594f349 100644 --- a/api/controllers/console/workspace/tool_providers.py +++ b/api/controllers/console/workspace/tool_providers.py @@ -5,7 +5,6 @@ from urllib.parse import urlparse from flask import make_response, redirect, request, send_file from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, HttpUrl, field_validator, model_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import Forbidden @@ -28,6 +27,7 @@ from core.plugin.entities.plugin_daemon import CredentialType from core.plugin.impl.oauth import OAuthHandler from core.tools.entities.tool_entities import ApiProviderSchemaType, WorkflowToolParameterConfiguration from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import alphanumeric, uuid_value from libs.login import current_account_with_tenant, login_required from models.provider_ids import ToolProviderID diff --git a/api/controllers/console/workspace/trigger_providers.py b/api/controllers/console/workspace/trigger_providers.py index 7a28a09861..d11b66244f 100644 --- a/api/controllers/console/workspace/trigger_providers.py +++ b/api/controllers/console/workspace/trigger_providers.py @@ -3,7 +3,6 @@ from typing import Any from flask import make_response, redirect, request from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, model_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, Forbidden @@ -16,6 +15,7 @@ from core.plugin.impl.oauth import OAuthHandler from core.trigger.entities.entities import SubscriptionBuilderUpdater from core.trigger.trigger_manager import TriggerManager from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_user, login_required from models.account import Account from models.provider_ids import TriggerProviderID diff --git a/api/controllers/console/workspace/workspace.py b/api/controllers/console/workspace/workspace.py index 42874e6033..565099db61 100644 --- a/api/controllers/console/workspace/workspace.py +++ b/api/controllers/console/workspace/workspace.py @@ -1,8 +1,9 @@ import logging +from datetime import datetime from flask import request -from flask_restx import Resource, fields, marshal, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource, fields, marshal +from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from werkzeug.exceptions import Unauthorized @@ -26,6 +27,7 @@ from controllers.console.wraps import ( ) from enums.cloud_plan import CloudPlan from extensions.ext_database import db +from fields.base import ResponseModel from libs.helper import TimestampField from libs.login import current_account_with_tenant, login_required from models.account import Tenant, TenantCustomConfigDict, TenantStatus @@ -58,6 +60,37 @@ class WorkspaceInfoPayload(BaseModel): name: str +class TenantInfoResponse(ResponseModel): + id: str + name: str | None = None + plan: str | None = None + status: str | None = None + created_at: int | None = None + role: str | None = None + in_trial: bool | None = None + trial_end_reason: str | None = None + custom_config: dict | None = None + trial_credits: int | None = None + trial_credits_used: int | None = None + next_credit_reset_date: int | None = None + + @field_validator("plan", "status", "trial_end_reason", mode="before") + @classmethod + def _normalize_enum_like(cls, value): + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None): + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + def reg(cls: type[BaseModel]): console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) @@ -66,6 +99,7 @@ reg(WorkspaceListQuery) reg(SwitchWorkspacePayload) reg(WorkspaceCustomConfigPayload) reg(WorkspaceInfoPayload) +reg(TenantInfoResponse) provider_fields = { "provider_name": fields.String, @@ -180,7 +214,7 @@ class TenantApi(Resource): @setup_required @login_required @account_initialization_required - @marshal_with(tenant_fields) + @console_ns.response(200, "Success", console_ns.models[TenantInfoResponse.__name__]) def post(self): if request.path == "/info": logger.warning("Deprecated URL /info was used.") @@ -200,7 +234,13 @@ class TenantApi(Resource): else: raise Unauthorized("workspace is archived") - return WorkspaceService.get_tenant_info(tenant), 200 + return ( + TenantInfoResponse.model_validate( + WorkspaceService.get_tenant_info(tenant), + from_attributes=True, + ).model_dump(mode="json"), + 200, + ) @console_ns.route("/workspaces/switch") diff --git a/api/controllers/console/wraps.py b/api/controllers/console/wraps.py index 4b5fb7ca5b..ef2931ce9b 100644 --- a/api/controllers/console/wraps.py +++ b/api/controllers/console/wraps.py @@ -20,7 +20,7 @@ from models.account import AccountStatus from models.dataset import RateLimitLog from models.model import DifySetup from services.feature_service import FeatureService, LicenseStatus -from services.operation_service import OperationService +from services.operation_service import OperationService, UtmInfo from .error import NotInitValidateError, NotSetupError, UnauthorizedAndForceLogout @@ -205,7 +205,7 @@ def cloud_utm_record[**P, R](view: Callable[P, R]) -> Callable[P, R]: utm_info = request.cookies.get("utm_info") if utm_info: - utm_info_dict: dict = json.loads(utm_info) + utm_info_dict: UtmInfo = json.loads(utm_info) OperationService.record_utm(current_tenant_id, utm_info_dict) return view(*args, **kwargs) diff --git a/api/controllers/inner_api/app/dsl.py b/api/controllers/inner_api/app/dsl.py index b1986b2557..915a11dcdd 100644 --- a/api/controllers/inner_api/app/dsl.py +++ b/api/controllers/inner_api/app/dsl.py @@ -9,7 +9,7 @@ from flask import request from flask_restx import Resource from pydantic import BaseModel, Field from sqlalchemy import select -from sqlalchemy.orm import sessionmaker +from sqlalchemy.orm import Session from controllers.common.schema import register_schema_model from controllers.console.wraps import setup_required @@ -18,7 +18,8 @@ from controllers.inner_api.wraps import enterprise_inner_api_only from extensions.ext_database import db from models import Account, App from models.account import AccountStatus -from services.app_dsl_service import AppDslService, ImportMode, ImportStatus +from services.app_dsl_service import AppDslService +from services.entities.dsl_entities import ImportMode, ImportStatus class InnerAppDSLImportPayload(BaseModel): @@ -55,7 +56,7 @@ class EnterpriseAppDSLImport(Resource): account.set_tenant_id(workspace_id) - with sessionmaker(db.engine).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: dsl_service = AppDslService(session) result = dsl_service.import_app( account=account, @@ -64,6 +65,10 @@ class EnterpriseAppDSLImport(Resource): name=args.name, description=args.description, ) + if result.status == ImportStatus.FAILED: + session.rollback() + else: + session.commit() if result.status == ImportStatus.FAILED: return result.model_dump(mode="json"), 400 diff --git a/api/controllers/inner_api/plugin/plugin.py b/api/controllers/inner_api/plugin/plugin.py index 83c8fa02fe..72cab3de73 100644 --- a/api/controllers/inner_api/plugin/plugin.py +++ b/api/controllers/inner_api/plugin/plugin.py @@ -1,5 +1,4 @@ from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from controllers.console.wraps import setup_required from controllers.inner_api import inner_api_ns @@ -30,6 +29,7 @@ from core.plugin.entities.request import ( ) from core.tools.entities.tool_entities import ToolProviderType from core.tools.signature import get_signed_file_url_for_plugin +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import length_prefixed_response from models import Account, Tenant from models.model import EndUser diff --git a/api/controllers/inner_api/plugin/wraps.py b/api/controllers/inner_api/plugin/wraps.py index 1d378c754c..a5846e2815 100644 --- a/api/controllers/inner_api/plugin/wraps.py +++ b/api/controllers/inner_api/plugin/wraps.py @@ -94,10 +94,9 @@ def get_user_tenant[**P, R](view_func: Callable[P, R]) -> Callable[P, R]: def plugin_data[**P, R]( - view: Callable[P, R] | None = None, *, payload_type: type[BaseModel], -) -> Callable[P, R] | Callable[[Callable[P, R]], Callable[P, R]]: +) -> Callable[[Callable[P, R]], Callable[P, R]]: def decorator(view_func: Callable[P, R]) -> Callable[P, R]: @wraps(view_func) def decorated_view(*args: P.args, **kwargs: P.kwargs) -> R: @@ -116,7 +115,4 @@ def plugin_data[**P, R]( return decorated_view - if view is None: - return decorator - else: - return decorator(view) + return decorator diff --git a/api/controllers/mcp/mcp.py b/api/controllers/mcp/mcp.py index d2ce0ea543..f652bbc581 100644 --- a/api/controllers/mcp/mcp.py +++ b/api/controllers/mcp/mcp.py @@ -2,7 +2,6 @@ from typing import Any, Union from flask import Response from flask_restx import Resource -from graphon.variables.input_entities import VariableEntity from pydantic import BaseModel, Field, ValidationError from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -12,6 +11,7 @@ from controllers.mcp import mcp_ns from core.mcp import types as mcp_types from core.mcp.server.streamable_http import handle_mcp_request from extensions.ext_database import db +from graphon.variables.input_entities import VariableEntity, VariableEntityType from libs import helper from models.enums import AppMCPServerStatus from models.model import App, AppMCPServer, AppMode, EndUser @@ -158,14 +158,20 @@ class MCPAppApi(Resource): except ValidationError as e: raise MCPRequestError(mcp_types.INVALID_PARAMS, f"Invalid user_input_form: {str(e)}") - def _convert_user_input_form(self, raw_form: list[dict]) -> list[VariableEntity]: + def _convert_user_input_form(self, raw_form: list[dict[str, Any]]) -> list[VariableEntity]: """Convert raw user input form to VariableEntity objects""" return [self._create_variable_entity(item) for item in raw_form] - def _create_variable_entity(self, item: dict) -> VariableEntity: + def _create_variable_entity(self, item: dict[str, Any]) -> VariableEntity: """Create a single VariableEntity from raw form item""" - variable_type = item.get("type", "") or list(item.keys())[0] - variable = item[variable_type] + variable_type_raw: str = item.get("type", "") or list(item.keys())[0] + try: + variable_type = VariableEntityType(variable_type_raw) + except ValueError as e: + raise MCPRequestError( + mcp_types.INVALID_PARAMS, f"Invalid user_input_form variable type: {variable_type_raw}" + ) from e + variable = item[variable_type_raw] return VariableEntity( type=variable_type, @@ -178,7 +184,7 @@ class MCPAppApi(Resource): json_schema=variable.get("json_schema"), ) - def _parse_mcp_request(self, args: dict) -> mcp_types.ClientRequest | mcp_types.ClientNotification: + def _parse_mcp_request(self, args: dict[str, Any]) -> mcp_types.ClientRequest | mcp_types.ClientNotification: """Parse and validate MCP request""" try: return mcp_types.ClientRequest.model_validate(args) diff --git a/api/controllers/service_api/app/annotation.py b/api/controllers/service_api/app/annotation.py index c22190cbc9..00bb9aa463 100644 --- a/api/controllers/service_api/app/annotation.py +++ b/api/controllers/service_api/app/annotation.py @@ -12,7 +12,12 @@ from controllers.service_api.wraps import validate_app_token from extensions.ext_redis import redis_client from fields.annotation_fields import Annotation, AnnotationList from models.model import App -from services.annotation_service import AppAnnotationService +from services.annotation_service import ( + AppAnnotationService, + EnableAnnotationArgs, + InsertAnnotationArgs, + UpdateAnnotationArgs, +) class AnnotationCreatePayload(BaseModel): @@ -46,10 +51,15 @@ class AnnotationReplyActionApi(Resource): @validate_app_token def post(self, app_model: App, action: Literal["enable", "disable"]): """Enable or disable annotation reply feature.""" - args = AnnotationReplyActionPayload.model_validate(service_api_ns.payload or {}).model_dump() + payload = AnnotationReplyActionPayload.model_validate(service_api_ns.payload or {}) match action: case "enable": - result = AppAnnotationService.enable_app_annotation(args, app_model.id) + enable_args: EnableAnnotationArgs = { + "score_threshold": payload.score_threshold, + "embedding_provider_name": payload.embedding_provider_name, + "embedding_model_name": payload.embedding_model_name, + } + result = AppAnnotationService.enable_app_annotation(enable_args, app_model.id) case "disable": result = AppAnnotationService.disable_app_annotation(app_model.id) return result, 200 @@ -135,8 +145,9 @@ class AnnotationListApi(Resource): @validate_app_token def post(self, app_model: App): """Create a new annotation.""" - args = AnnotationCreatePayload.model_validate(service_api_ns.payload or {}).model_dump() - annotation = AppAnnotationService.insert_app_annotation_directly(args, app_model.id) + payload = AnnotationCreatePayload.model_validate(service_api_ns.payload or {}) + insert_args: InsertAnnotationArgs = {"question": payload.question, "answer": payload.answer} + annotation = AppAnnotationService.insert_app_annotation_directly(insert_args, app_model.id) response = Annotation.model_validate(annotation, from_attributes=True) return response.model_dump(mode="json"), HTTPStatus.CREATED @@ -164,8 +175,9 @@ class AnnotationUpdateDeleteApi(Resource): @edit_permission_required def put(self, app_model: App, annotation_id: str): """Update an existing annotation.""" - args = AnnotationCreatePayload.model_validate(service_api_ns.payload or {}).model_dump() - annotation = AppAnnotationService.update_app_annotation_directly(args, app_model.id, annotation_id) + payload = AnnotationCreatePayload.model_validate(service_api_ns.payload or {}) + update_args: UpdateAnnotationArgs = {"question": payload.question, "answer": payload.answer} + annotation = AppAnnotationService.update_app_annotation_directly(update_args, app_model.id, annotation_id) response = Annotation.model_validate(annotation, from_attributes=True) return response.model_dump(mode="json") diff --git a/api/controllers/service_api/app/audio.py b/api/controllers/service_api/app/audio.py index 6228cfc25b..e818573b8f 100644 --- a/api/controllers/service_api/app/audio.py +++ b/api/controllers/service_api/app/audio.py @@ -2,11 +2,10 @@ import logging from flask import request from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError -from pydantic import BaseModel, Field from werkzeug.exceptions import InternalServerError import services +from controllers.common.controller_schemas import TextToAudioPayload from controllers.common.schema import register_schema_model from controllers.service_api import service_api_ns from controllers.service_api.app.error import ( @@ -22,6 +21,7 @@ from controllers.service_api.app.error import ( ) from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from models.model import App, EndUser from services.audio_service import AudioService from services.errors.audio import ( @@ -86,13 +86,6 @@ class AudioApi(Resource): raise InternalServerError() -class TextToAudioPayload(BaseModel): - message_id: str | None = Field(default=None, description="Message ID") - voice: str | None = Field(default=None, description="Voice to use for TTS") - text: str | None = Field(default=None, description="Text to convert to audio") - streaming: bool | None = Field(default=None, description="Enable streaming response") - - register_schema_model(service_api_ns, TextToAudioPayload) diff --git a/api/controllers/service_api/app/completion.py b/api/controllers/service_api/app/completion.py index 3142e5118e..31f2797d66 100644 --- a/api/controllers/service_api/app/completion.py +++ b/api/controllers/service_api/app/completion.py @@ -4,7 +4,6 @@ from uuid import UUID from flask import request from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import BadRequest, InternalServerError, NotFound @@ -29,6 +28,7 @@ from core.errors.error import ( QuotaExceededError, ) from core.helper.trace_id_helper import get_external_trace_id +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import UUIDStrOrEmpty from models.model import App, AppMode, EndUser diff --git a/api/controllers/service_api/app/conversation.py b/api/controllers/service_api/app/conversation.py index 1ec289e2a2..c4353ca7b8 100644 --- a/api/controllers/service_api/app/conversation.py +++ b/api/controllers/service_api/app/conversation.py @@ -1,3 +1,4 @@ +from datetime import datetime from typing import Any, Literal from flask import request @@ -14,14 +15,13 @@ from controllers.service_api.app.error import NotChatAppError from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db +from fields._value_type_serializer import serialize_value_type +from fields.base import ResponseModel from fields.conversation_fields import ( ConversationInfiniteScrollPagination, SimpleConversation, ) -from fields.conversation_variable_fields import ( - build_conversation_variable_infinite_scroll_pagination_model, - build_conversation_variable_model, -) +from graphon.variables.types import SegmentType from libs.helper import UUIDStrOrEmpty from models.model import App, AppMode, EndUser from services.conversation_service import ConversationService @@ -70,12 +70,70 @@ class ConversationVariableUpdatePayload(BaseModel): value: Any +class ConversationVariableResponse(ResponseModel): + id: str + name: str + value_type: str + value: str | None = None + description: str | None = None + created_at: int | None = None + updated_at: int | None = None + + @field_validator("value_type", mode="before") + @classmethod + def normalize_value_type(cls, value: Any) -> str: + exposed_type = getattr(value, "exposed_type", None) + if callable(exposed_type): + return str(exposed_type().value) + if isinstance(value, str): + try: + return str(SegmentType(value).exposed_type().value) + except ValueError: + return value + try: + return serialize_value_type(value) + except (AttributeError, TypeError, ValueError): + pass + + try: + return serialize_value_type({"value_type": value}) + except (AttributeError, TypeError, ValueError): + value_attr = getattr(value, "value", None) + if value_attr is not None: + return str(value_attr) + return str(value) + + @field_validator("value", mode="before") + @classmethod + def normalize_value(cls, value: Any | None) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(value) + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class ConversationVariableInfiniteScrollPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[ConversationVariableResponse] + + register_schema_models( service_api_ns, ConversationListQuery, ConversationRenamePayload, ConversationVariablesQuery, ConversationVariableUpdatePayload, + ConversationVariableResponse, + ConversationVariableInfiniteScrollPaginationResponse, ) @@ -204,8 +262,12 @@ class ConversationVariablesApi(Resource): 404: "Conversation not found", } ) + @service_api_ns.response( + 200, + "Variables retrieved successfully", + service_api_ns.models[ConversationVariableInfiniteScrollPaginationResponse.__name__], + ) @validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY)) - @service_api_ns.marshal_with(build_conversation_variable_infinite_scroll_pagination_model(service_api_ns)) def get(self, app_model: App, end_user: EndUser, c_id): """List all variables for a conversation. @@ -222,9 +284,12 @@ class ConversationVariablesApi(Resource): last_id = str(query_args.last_id) if query_args.last_id else None try: - return ConversationService.get_conversational_variable( + pagination = ConversationService.get_conversational_variable( app_model, conversation_id, end_user, query_args.limit, last_id, query_args.variable_name ) + return ConversationVariableInfiniteScrollPaginationResponse.model_validate( + pagination, from_attributes=True + ).model_dump(mode="json") except services.errors.conversation.ConversationNotExistsError: raise NotFound("Conversation Not Exists.") @@ -243,8 +308,12 @@ class ConversationVariableDetailApi(Resource): 404: "Conversation or variable not found", } ) + @service_api_ns.response( + 200, + "Variable updated successfully", + service_api_ns.models[ConversationVariableResponse.__name__], + ) @validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON)) - @service_api_ns.marshal_with(build_conversation_variable_model(service_api_ns)) def put(self, app_model: App, end_user: EndUser, c_id, variable_id): """Update a conversation variable's value. @@ -261,9 +330,10 @@ class ConversationVariableDetailApi(Resource): payload = ConversationVariableUpdatePayload.model_validate(service_api_ns.payload or {}) try: - return ConversationService.update_conversation_variable( + variable = ConversationService.update_conversation_variable( app_model, conversation_id, variable_id, end_user, payload.value ) + return ConversationVariableResponse.model_validate(variable, from_attributes=True).model_dump(mode="json") except services.errors.conversation.ConversationNotExistsError: raise NotFound("Conversation Not Exists.") except services.errors.conversation.ConversationVariableNotExistsError: diff --git a/api/controllers/service_api/app/workflow.py b/api/controllers/service_api/app/workflow.py index e0a64ffe26..cc763fa89c 100644 --- a/api/controllers/service_api/app/workflow.py +++ b/api/controllers/service_api/app/workflow.py @@ -1,13 +1,12 @@ import logging +from collections.abc import Mapping +from datetime import datetime from typing import Literal from dateutil.parser import isoparse from flask import request -from flask_restx import Namespace, Resource, fields -from graphon.enums import WorkflowExecutionStatus -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError -from pydantic import BaseModel, Field +from flask_restx import Resource, fields +from pydantic import BaseModel, Field, field_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, InternalServerError, NotFound @@ -33,9 +32,13 @@ from core.errors.error import ( from core.helper.trace_id_helper import get_external_trace_id from extensions.ext_database import db from extensions.ext_redis import redis_client -from fields.workflow_app_log_fields import build_workflow_app_log_pagination_model +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser +from fields.member_fields import SimpleAccount +from graphon.enums import WorkflowExecutionStatus +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper -from libs.helper import OptionalTimestampField, TimestampField from models.model import App, AppMode, EndUser from models.workflow import WorkflowRun from repositories.factory import DifyAPIRepositoryFactory @@ -65,38 +68,142 @@ class WorkflowLogQuery(BaseModel): register_schema_models(service_api_ns, WorkflowRunPayload, WorkflowLogQuery) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +def _enum_value(value): + return getattr(value, "value", value) + + class WorkflowRunStatusField(fields.Raw): def output(self, key, obj: WorkflowRun, **kwargs): - return obj.status.value + return _enum_value(obj.status) class WorkflowRunOutputsField(fields.Raw): def output(self, key, obj: WorkflowRun, **kwargs): - if obj.status == WorkflowExecutionStatus.PAUSED: + status = _enum_value(obj.status) + if status == WorkflowExecutionStatus.PAUSED.value: return {} outputs = obj.outputs_dict return outputs or {} -workflow_run_fields = { - "id": fields.String, - "workflow_id": fields.String, - "status": WorkflowRunStatusField, - "inputs": fields.Raw, - "outputs": WorkflowRunOutputsField, - "error": fields.String, - "total_steps": fields.Integer, - "total_tokens": fields.Integer, - "created_at": TimestampField, - "finished_at": OptionalTimestampField, - "elapsed_time": fields.Float, -} +class WorkflowRunResponse(ResponseModel): + id: str + workflow_id: str + status: str + inputs: dict | list | str | int | float | bool | None = None + outputs: dict = Field(default_factory=dict) + error: str | None = None + total_steps: int | None = None + total_tokens: int | None = None + created_at: int | None = None + finished_at: int | None = None + elapsed_time: float | int | None = None + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) -def build_workflow_run_model(api_or_ns: Namespace): - """Build the workflow run model for the API or Namespace.""" - return api_or_ns.model("WorkflowRun", workflow_run_fields) +class WorkflowRunForLogResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + triggered_from: str | None = None + error: str | None = None + elapsed_time: float | int | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + + @field_validator("status", "triggered_from", mode="before") + @classmethod + def _normalize_enum(cls, value): + return _enum_value(value) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowAppLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForLogResponse | None = None + details: dict | list | str | int | float | bool | None = None + created_from: str | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_from", "created_by_role", mode="before") + @classmethod + def _normalize_enum(cls, value): + return _enum_value(value) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowAppLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowAppLogPartialResponse] + + +register_schema_models( + service_api_ns, + WorkflowRunResponse, + WorkflowRunForLogResponse, + WorkflowAppLogPartialResponse, + WorkflowAppLogPaginationResponse, +) + + +def _serialize_workflow_run(workflow_run: WorkflowRun) -> dict: + status = _enum_value(workflow_run.status) + raw_outputs = workflow_run.outputs_dict + if status == WorkflowExecutionStatus.PAUSED.value or raw_outputs is None: + outputs: dict = {} + elif isinstance(raw_outputs, dict): + outputs = raw_outputs + elif isinstance(raw_outputs, Mapping): + outputs = dict(raw_outputs) + else: + outputs = {} + return WorkflowRunResponse.model_validate( + { + "id": workflow_run.id, + "workflow_id": workflow_run.workflow_id, + "status": status, + "inputs": workflow_run.inputs, + "outputs": outputs, + "error": workflow_run.error, + "total_steps": workflow_run.total_steps, + "total_tokens": workflow_run.total_tokens, + "created_at": workflow_run.created_at, + "finished_at": workflow_run.finished_at, + "elapsed_time": workflow_run.elapsed_time, + } + ).model_dump(mode="json") + + +def _serialize_workflow_log_pagination(pagination) -> dict: + return WorkflowAppLogPaginationResponse.model_validate(pagination, from_attributes=True).model_dump(mode="json") @service_api_ns.route("/workflows/run/") @@ -112,7 +219,11 @@ class WorkflowRunDetailApi(Resource): } ) @validate_app_token - @service_api_ns.marshal_with(build_workflow_run_model(service_api_ns)) + @service_api_ns.response( + 200, + "Workflow run details retrieved successfully", + service_api_ns.models[WorkflowRunResponse.__name__], + ) def get(self, app_model: App, workflow_run_id: str): """Get a workflow task running detail. @@ -133,7 +244,7 @@ class WorkflowRunDetailApi(Resource): ) if not workflow_run: raise NotFound("Workflow run not found.") - return workflow_run + return _serialize_workflow_run(workflow_run) @service_api_ns.route("/workflows/run") @@ -299,7 +410,11 @@ class WorkflowAppLogApi(Resource): } ) @validate_app_token - @service_api_ns.marshal_with(build_workflow_app_log_pagination_model(service_api_ns)) + @service_api_ns.response( + 200, + "Logs retrieved successfully", + service_api_ns.models[WorkflowAppLogPaginationResponse.__name__], + ) def get(self, app_model: App): """Get workflow app logs. @@ -327,4 +442,4 @@ class WorkflowAppLogApi(Resource): created_by_account=args.created_by_account, ) - return workflow_app_log_pagination + return _serialize_workflow_log_pagination(workflow_app_log_pagination) diff --git a/api/controllers/service_api/dataset/dataset.py b/api/controllers/service_api/dataset/dataset.py index fd954be6b1..76519cad0a 100644 --- a/api/controllers/service_api/dataset/dataset.py +++ b/api/controllers/service_api/dataset/dataset.py @@ -2,7 +2,6 @@ from typing import Any, Literal, cast from flask import request from flask_restx import marshal -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field, TypeAdapter, field_validator from werkzeug.exceptions import Forbidden, NotFound @@ -19,6 +18,7 @@ from core.plugin.impl.model_runtime_factory import create_plugin_provider_manage from core.rag.index_processor.constant.index_type import IndexTechniqueType from fields.dataset_fields import dataset_detail_fields from fields.tag_fields import DataSetTag +from graphon.model_runtime.entities.model_entities import ModelType from libs.login import current_user from models.account import Account from models.dataset import DatasetPermissionEnum diff --git a/api/controllers/service_api/dataset/document.py b/api/controllers/service_api/dataset/document.py index 9f1ce17ed9..6db047567f 100644 --- a/api/controllers/service_api/dataset/document.py +++ b/api/controllers/service_api/dataset/document.py @@ -10,6 +10,7 @@ from sqlalchemy import desc, func, select from werkzeug.exceptions import Forbidden, NotFound import services +from controllers.common.controller_schemas import DocumentBatchDownloadZipPayload from controllers.common.errors import ( FilenameNotExistsError, FileTooLargeError, @@ -100,15 +101,6 @@ class DocumentListQuery(BaseModel): status: str | None = Field(default=None, description="Document status filter") -DOCUMENT_BATCH_DOWNLOAD_ZIP_MAX_DOCS = 100 - - -class DocumentBatchDownloadZipPayload(BaseModel): - """Request payload for bulk downloading uploaded documents as a ZIP archive.""" - - document_ids: list[UUID] = Field(..., min_length=1, max_length=DOCUMENT_BATCH_DOWNLOAD_ZIP_MAX_DOCS) - - register_enum_models(service_api_ns, RetrievalMethod) register_schema_models( @@ -527,7 +519,7 @@ class DocumentListApi(DatasetApiResource): if not dataset: raise NotFound("Dataset not found.") - query = select(Document).filter_by(dataset_id=str(dataset_id), tenant_id=tenant_id) + query = select(Document).where(Document.dataset_id == dataset_id, Document.tenant_id == tenant_id) if query_params.status: query = DocumentService.apply_display_status_filter(query, query_params.status) diff --git a/api/controllers/service_api/dataset/metadata.py b/api/controllers/service_api/dataset/metadata.py index 52166f7fcc..21db7d0cb8 100644 --- a/api/controllers/service_api/dataset/metadata.py +++ b/api/controllers/service_api/dataset/metadata.py @@ -2,9 +2,9 @@ from typing import Literal from flask_login import current_user from flask_restx import marshal -from pydantic import BaseModel from werkzeug.exceptions import NotFound +from controllers.common.controller_schemas import MetadataUpdatePayload from controllers.common.schema import register_schema_model, register_schema_models from controllers.service_api import service_api_ns from controllers.service_api.wraps import DatasetApiResource, cloud_edition_billing_rate_limit_check @@ -18,11 +18,6 @@ from services.entities.knowledge_entities.knowledge_entities import ( ) from services.metadata_service import MetadataService - -class MetadataUpdatePayload(BaseModel): - name: str - - register_schema_model(service_api_ns, MetadataUpdatePayload) register_schema_models( service_api_ns, diff --git a/api/controllers/service_api/dataset/segment.py b/api/controllers/service_api/dataset/segment.py index 5b16da81e0..5992fa7410 100644 --- a/api/controllers/service_api/dataset/segment.py +++ b/api/controllers/service_api/dataset/segment.py @@ -2,12 +2,12 @@ from typing import Any from flask import request from flask_restx import marshal -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field from sqlalchemy import select from werkzeug.exceptions import NotFound from configs import dify_config +from controllers.common.controller_schemas import ChildChunkCreatePayload, ChildChunkUpdatePayload from controllers.common.schema import register_schema_models from controllers.service_api import service_api_ns from controllers.service_api.app.error import ProviderNotInitializeError @@ -22,6 +22,7 @@ from core.model_manager import ModelManager from core.rag.index_processor.constant.index_type import IndexTechniqueType from extensions.ext_database import db from fields.segment_fields import child_chunk_fields, segment_fields +from graphon.model_runtime.entities.model_entities import ModelType from libs.login import current_account_with_tenant from models.dataset import Dataset from services.dataset_service import DatasetService, DocumentService, SegmentService @@ -32,25 +33,25 @@ from services.errors.chunk import ChildChunkIndexingError as ChildChunkIndexingS from services.summary_index_service import SummaryIndexService -def _marshal_segment_with_summary(segment, dataset_id: str) -> dict: +def _marshal_segment_with_summary(segment, dataset_id: str) -> dict[str, Any]: """Marshal a single segment and enrich it with summary content.""" - segment_dict = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] + segment_dict: dict[str, Any] = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] summary = SummaryIndexService.get_segment_summary(segment_id=segment.id, dataset_id=dataset_id) segment_dict["summary"] = summary.summary_content if summary else None return segment_dict -def _marshal_segments_with_summary(segments, dataset_id: str) -> list[dict]: +def _marshal_segments_with_summary(segments, dataset_id: str) -> list[dict[str, Any]]: """Marshal multiple segments and enrich them with summary content (batch query).""" segment_ids = [segment.id for segment in segments] - summaries: dict = {} + summaries: dict[str, str | None] = {} if segment_ids: summary_records = SummaryIndexService.get_segments_summaries(segment_ids=segment_ids, dataset_id=dataset_id) summaries = {chunk_id: record.summary_content for chunk_id, record in summary_records.items()} - result = [] + result: list[dict[str, Any]] = [] for segment in segments: - segment_dict = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] + segment_dict: dict[str, Any] = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] segment_dict["summary"] = summaries.get(segment.id) result.append(segment_dict) return result @@ -69,20 +70,12 @@ class SegmentUpdatePayload(BaseModel): segment: SegmentUpdateArgs -class ChildChunkCreatePayload(BaseModel): - content: str - - class ChildChunkListQuery(BaseModel): limit: int = Field(default=20, ge=1) keyword: str | None = None page: int = Field(default=1, ge=1) -class ChildChunkUpdatePayload(BaseModel): - content: str - - register_schema_models( service_api_ns, SegmentCreatePayload, diff --git a/api/controllers/service_api/workspace/models.py b/api/controllers/service_api/workspace/models.py index c0a6cb0a76..5ac65fc4e6 100644 --- a/api/controllers/service_api/workspace/models.py +++ b/api/controllers/service_api/workspace/models.py @@ -1,9 +1,9 @@ from flask_login import current_user from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from controllers.service_api import service_api_ns from controllers.service_api.wraps import validate_dataset_token +from graphon.model_runtime.utils.encoders import jsonable_encoder from services.model_provider_service import ModelProviderService diff --git a/api/controllers/web/audio.py b/api/controllers/web/audio.py index 0ef4471018..3ad595f1f4 100644 --- a/api/controllers/web/audio.py +++ b/api/controllers/web/audio.py @@ -2,7 +2,6 @@ import logging from flask import request from flask_restx import fields, marshal_with -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import field_validator from werkzeug.exceptions import InternalServerError @@ -22,6 +21,7 @@ from controllers.web.error import ( ) from controllers.web.wraps import WebApiResource from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from libs.helper import uuid_value from models.model import App from services.audio_service import AudioService diff --git a/api/controllers/web/completion.py b/api/controllers/web/completion.py index e37f9af5f0..0528184d79 100644 --- a/api/controllers/web/completion.py +++ b/api/controllers/web/completion.py @@ -1,7 +1,6 @@ import logging from typing import Any, Literal -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import InternalServerError, NotFound @@ -26,6 +25,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import uuid_value from models.model import AppMode diff --git a/api/controllers/web/forgot_password.py b/api/controllers/web/forgot_password.py index 80c3289fb4..61fd794c22 100644 --- a/api/controllers/web/forgot_password.py +++ b/api/controllers/web/forgot_password.py @@ -3,7 +3,6 @@ import secrets from flask import request from flask_restx import Resource -from sqlalchemy.orm import sessionmaker from controllers.common.schema import register_schema_models from controllers.console.auth.error import ( @@ -62,9 +61,7 @@ class ForgotPasswordSendEmailApi(Resource): else: language = "en-US" - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(request_email, session=session) - token = None + account = AccountService.get_account_by_email_with_case_fallback(request_email) if account is None: raise AuthenticationFailedError() else: @@ -161,13 +158,14 @@ class ForgotPasswordResetApi(Resource): email = reset_data.get("email", "") - with sessionmaker(db.engine).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(email) - if account: - self._update_existing_account(account, password_hashed, salt) - else: - raise AuthenticationFailedError() + if account: + account = db.session.merge(account) + self._update_existing_account(account, password_hashed, salt) + db.session.commit() + else: + raise AuthenticationFailedError() return {"result": "success"} diff --git a/api/controllers/web/human_input_form.py b/api/controllers/web/human_input_form.py index 36728a47d1..44876f8303 100644 --- a/api/controllers/web/human_input_form.py +++ b/api/controllers/web/human_input_form.py @@ -5,9 +5,11 @@ Web App Human Input Form APIs. import json import logging from datetime import datetime +from typing import Any, NotRequired, TypedDict from flask import Response, request -from flask_restx import Resource, reqparse +from flask_restx import Resource +from pydantic import BaseModel from sqlalchemy import select from werkzeug.exceptions import Forbidden @@ -23,6 +25,12 @@ from services.human_input_service import Form, FormNotFoundError, HumanInputServ logger = logging.getLogger(__name__) + +class HumanInputFormSubmitPayload(BaseModel): + inputs: dict + action: str + + _FORM_SUBMIT_RATE_LIMITER = RateLimiter( prefix="web_form_submit_rate_limit", max_attempts=dify_config.WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS, @@ -51,10 +59,19 @@ def _to_timestamp(value: datetime) -> int: return int(value.timestamp()) +class FormDefinitionPayload(TypedDict): + form_content: Any + inputs: Any + resolved_default_values: dict[str, str] + user_actions: Any + expiration_time: int + site: NotRequired[dict] + + def _jsonify_form_definition(form: Form, site_payload: dict | None = None) -> Response: """Return the form payload (optionally with site) as a JSON response.""" definition_payload = form.get_definition().model_dump() - payload = { + payload: FormDefinitionPayload = { "form_content": definition_payload["rendered_content"], "inputs": definition_payload["inputs"], "resolved_default_values": _stringify_default_values(definition_payload["default_values"]), @@ -85,7 +102,7 @@ class HumanInputFormApi(Resource): _FORM_ACCESS_RATE_LIMITER.increment_rate_limit(ip_address) service = HumanInputService(db.engine) - # TODO(QuantumGhost): forbid submision for form tokens + # TODO(QuantumGhost): forbid submission for form tokens # that are only for console. form = service.get_form_by_token(form_token) @@ -112,10 +129,7 @@ class HumanInputFormApi(Resource): "action": "Approve" } """ - parser = reqparse.RequestParser() - parser.add_argument("inputs", type=dict, required=True, location="json") - parser.add_argument("action", type=str, required=True, location="json") - args = parser.parse_args() + payload = HumanInputFormSubmitPayload.model_validate(request.get_json()) ip_address = extract_remote_ip(request) if _FORM_SUBMIT_RATE_LIMITER.is_rate_limited(ip_address): @@ -135,8 +149,8 @@ class HumanInputFormApi(Resource): service.submit_form_by_token( recipient_type=recipient_type, form_token=form_token, - selected_action_id=args["action"], - form_data=args["inputs"], + selected_action_id=payload.action, + form_data=payload.inputs, submission_end_user_id=None, # submission_end_user_id=_end_user.id, ) diff --git a/api/controllers/web/login.py b/api/controllers/web/login.py index ae0e6789ef..2255dd0332 100644 --- a/api/controllers/web/login.py +++ b/api/controllers/web/login.py @@ -1,7 +1,10 @@ +import logging + from flask import make_response, request from flask_restx import Resource from jwt import InvalidTokenError from pydantic import BaseModel, Field, field_validator +from werkzeug.exceptions import Unauthorized import services from configs import dify_config @@ -20,7 +23,7 @@ from controllers.console.wraps import ( ) from controllers.web import web_ns from controllers.web.wraps import decode_jwt_token -from libs.helper import EmailStr +from libs.helper import EmailStr, extract_remote_ip from libs.passport import PassportService from libs.password import valid_password from libs.token import ( @@ -29,9 +32,11 @@ from libs.token import ( ) from services.account_service import AccountService from services.app_service import AppService -from services.entities.auth_entities import LoginPayloadBase +from services.entities.auth_entities import LoginFailureReason, LoginPayloadBase from services.webapp_auth_service import WebAppAuthService +logger = logging.getLogger(__name__) + class LoginPayload(LoginPayloadBase): @field_validator("password") @@ -76,14 +81,18 @@ class LoginApi(Resource): def post(self): """Authenticate user and login.""" payload = LoginPayload.model_validate(web_ns.payload or {}) + normalized_email = payload.email.lower() try: account = WebAppAuthService.authenticate(payload.email, payload.password) except services.errors.account.AccountLoginError: + _log_web_login_failure(email=normalized_email, reason=LoginFailureReason.ACCOUNT_BANNED) raise AccountBannedError() except services.errors.account.AccountPasswordError: + _log_web_login_failure(email=normalized_email, reason=LoginFailureReason.INVALID_CREDENTIALS) raise AuthenticationFailedError() except services.errors.account.AccountNotFoundError: + _log_web_login_failure(email=normalized_email, reason=LoginFailureReason.ACCOUNT_NOT_FOUND) raise AuthenticationFailedError() token = WebAppAuthService.login(account=account) @@ -212,21 +221,30 @@ class EmailCodeLoginApi(Resource): token_data = WebAppAuthService.get_email_code_login_data(payload.token) if token_data is None: + _log_web_login_failure(email=user_email, reason=LoginFailureReason.INVALID_EMAIL_CODE_TOKEN) raise InvalidTokenError() token_email = token_data.get("email") if not isinstance(token_email, str): + _log_web_login_failure(email=user_email, reason=LoginFailureReason.EMAIL_CODE_EMAIL_MISMATCH) raise InvalidEmailError() normalized_token_email = token_email.lower() if normalized_token_email != user_email: + _log_web_login_failure(email=user_email, reason=LoginFailureReason.EMAIL_CODE_EMAIL_MISMATCH) raise InvalidEmailError() if token_data["code"] != payload.code: + _log_web_login_failure(email=user_email, reason=LoginFailureReason.INVALID_EMAIL_CODE) raise EmailCodeError() WebAppAuthService.revoke_email_code_login_token(payload.token) - account = WebAppAuthService.get_user_through_email(token_email) + try: + account = WebAppAuthService.get_user_through_email(token_email) + except Unauthorized as exc: + _log_web_login_failure(email=user_email, reason=LoginFailureReason.ACCOUNT_BANNED) + raise AccountBannedError() from exc if not account: + _log_web_login_failure(email=user_email, reason=LoginFailureReason.ACCOUNT_NOT_FOUND) raise AuthenticationFailedError() token = WebAppAuthService.login(account=account) @@ -234,3 +252,12 @@ class EmailCodeLoginApi(Resource): response = make_response({"result": "success", "data": {"access_token": token}}) # set_access_token_to_cookie(request, response, token, samesite="None", httponly=False) return response + + +def _log_web_login_failure(*, email: str, reason: LoginFailureReason) -> None: + logger.warning( + "Web login failed: email=%s reason=%s ip_address=%s", + email, + reason, + extract_remote_ip(request), + ) diff --git a/api/controllers/web/message.py b/api/controllers/web/message.py index 25cb6b2b9e..07ecf8035b 100644 --- a/api/controllers/web/message.py +++ b/api/controllers/web/message.py @@ -2,11 +2,10 @@ import logging from typing import Literal from flask import request -from graphon.model_runtime.errors.invoke import InvokeError -from pydantic import BaseModel, Field, TypeAdapter, field_validator +from pydantic import BaseModel, Field, TypeAdapter from werkzeug.exceptions import InternalServerError, NotFound -from controllers.common.controller_schemas import MessageFeedbackPayload +from controllers.common.controller_schemas import MessageFeedbackPayload, MessageListQuery from controllers.common.schema import register_schema_models from controllers.web import web_ns from controllers.web.error import ( @@ -24,8 +23,8 @@ from core.app.entities.app_invoke_entities import InvokeFrom from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from fields.conversation_fields import ResultResponse from fields.message_fields import SuggestedQuestionsResponse, WebMessageInfiniteScrollPagination, WebMessageListItem +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper -from libs.helper import uuid_value from models.enums import FeedbackRating from models.model import AppMode from services.app_generate_service import AppGenerateService @@ -41,19 +40,6 @@ from services.message_service import MessageService logger = logging.getLogger(__name__) -class MessageListQuery(BaseModel): - conversation_id: str = Field(description="Conversation UUID") - first_id: str | None = Field(default=None, description="First message ID for pagination") - limit: int = Field(default=20, ge=1, le=100, description="Number of messages to return (1-100)") - - @field_validator("conversation_id", "first_id") - @classmethod - def validate_uuid(cls, value: str | None) -> str | None: - if value is None: - return value - return uuid_value(value) - - class MessageMoreLikeThisQuery(BaseModel): response_mode: Literal["blocking", "streaming"] = Field( description="Response mode", diff --git a/api/controllers/web/passport.py b/api/controllers/web/passport.py index 6a2e0b65fb..0293df74b0 100644 --- a/api/controllers/web/passport.py +++ b/api/controllers/web/passport.py @@ -1,5 +1,6 @@ import uuid from datetime import UTC, datetime, timedelta +from typing import Any from flask import make_response, request from flask_restx import Resource @@ -103,21 +104,23 @@ class PassportResource(Resource): return response -def decode_enterprise_webapp_user_id(jwt_token: str | None): +def decode_enterprise_webapp_user_id(jwt_token: str | None) -> dict[str, Any] | None: """ Decode the enterprise user session from the Authorization header. """ if not jwt_token: return None - decoded = PassportService().verify(jwt_token) + decoded: dict[str, Any] = PassportService().verify(jwt_token) source = decoded.get("token_source") if not source or source != "webapp_login_token": raise Unauthorized("Invalid token source. Expected 'webapp_login_token'.") return decoded -def exchange_token_for_existing_web_user(app_code: str, enterprise_user_decoded: dict, auth_type: WebAppAuthType): +def exchange_token_for_existing_web_user( + app_code: str, enterprise_user_decoded: dict[str, Any], auth_type: WebAppAuthType +): """ Exchange a token for an existing web user session. """ @@ -138,12 +141,15 @@ def exchange_token_for_existing_web_user(app_code: str, enterprise_user_decoded: if not app_model or app_model.status != "normal" or not app_model.enable_site: raise NotFound() - if auth_type == WebAppAuthType.PUBLIC: - return _exchange_for_public_app_token(app_model, site, enterprise_user_decoded) - elif auth_type == WebAppAuthType.EXTERNAL and user_auth_type != "external": - raise WebAppAuthRequiredError("Please login as external user.") - elif auth_type == WebAppAuthType.INTERNAL and user_auth_type != "internal": - raise WebAppAuthRequiredError("Please login as internal user.") + match auth_type: + case WebAppAuthType.PUBLIC: + return _exchange_for_public_app_token(app_model, site, enterprise_user_decoded) + case WebAppAuthType.EXTERNAL: + if user_auth_type != "external": + raise WebAppAuthRequiredError("Please login as external user.") + case WebAppAuthType.INTERNAL: + if user_auth_type != "internal": + raise WebAppAuthRequiredError("Please login as internal user.") end_user = None if end_user_id: diff --git a/api/controllers/web/remote_files.py b/api/controllers/web/remote_files.py index 38aeccc642..fe31e9d4ac 100644 --- a/api/controllers/web/remote_files.py +++ b/api/controllers/web/remote_files.py @@ -1,7 +1,6 @@ import urllib.parse import httpx -from graphon.file import helpers as file_helpers from pydantic import BaseModel, Field, HttpUrl import services @@ -14,6 +13,7 @@ from controllers.common.errors import ( from core.helper import ssrf_proxy from extensions.ext_database import db from fields.file_fields import FileWithSignedUrl, RemoteFileInfo +from graphon.file import helpers as file_helpers from services.file_service import FileService from ..common.schema import register_schema_models diff --git a/api/controllers/web/site.py b/api/controllers/web/site.py index 1a0c6d4252..7d2080dd91 100644 --- a/api/controllers/web/site.py +++ b/api/controllers/web/site.py @@ -1,4 +1,4 @@ -from typing import cast +from typing import Any, cast from flask_restx import fields, marshal, marshal_with from sqlalchemy import select @@ -113,12 +113,12 @@ class AppSiteInfo: } -def serialize_site(site: Site) -> dict: +def serialize_site(site: Site) -> dict[str, Any]: """Serialize Site model using the same schema as AppSiteApi.""" - return cast(dict, marshal(site, AppSiteApi.site_fields)) + return cast(dict[str, Any], marshal(site, AppSiteApi.site_fields)) -def serialize_app_site_payload(app_model: App, site: Site, end_user_id: str | None) -> dict: +def serialize_app_site_payload(app_model: App, site: Site, end_user_id: str | None) -> dict[str, Any]: can_replace_logo = FeatureService.get_features(app_model.tenant_id).can_replace_logo app_site_info = AppSiteInfo(app_model.tenant, app_model, site, end_user_id, can_replace_logo) - return cast(dict, marshal(app_site_info, AppSiteApi.app_fields)) + return cast(dict[str, Any], marshal(app_site_info, AppSiteApi.app_fields)) diff --git a/api/controllers/web/workflow.py b/api/controllers/web/workflow.py index 796e090976..98211193a0 100644 --- a/api/controllers/web/workflow.py +++ b/api/controllers/web/workflow.py @@ -1,7 +1,5 @@ import logging -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError from controllers.common.controller_schemas import WorkflowRunPayload @@ -24,6 +22,8 @@ from core.errors.error import ( QuotaExceededError, ) from extensions.ext_redis import redis_client +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from models.model import App, AppMode, EndUser from services.app_generate_service import AppGenerateService diff --git a/api/controllers/web/workflow_events.py b/api/controllers/web/workflow_events.py index 61568e70e6..474f9c0957 100644 --- a/api/controllers/web/workflow_events.py +++ b/api/controllers/web/workflow_events.py @@ -72,12 +72,13 @@ class WorkflowEventsApi(WebApiResource): app_mode = AppMode.value_of(app_model.mode) msg_generator = MessageGenerator() generator: BaseAppGenerator - if app_mode == AppMode.ADVANCED_CHAT: - generator = AdvancedChatAppGenerator() - elif app_mode == AppMode.WORKFLOW: - generator = WorkflowAppGenerator() - else: - raise InvalidArgumentError(f"cannot subscribe to workflow run, workflow_run_id={workflow_run.id}") + match app_mode: + case AppMode.ADVANCED_CHAT: + generator = AdvancedChatAppGenerator() + case AppMode.WORKFLOW: + generator = WorkflowAppGenerator() + case _: + raise InvalidArgumentError(f"cannot subscribe to workflow run, workflow_run_id={workflow_run.id}") include_state_snapshot = request.args.get("include_state_snapshot", "false").lower() == "true" diff --git a/api/core/agent/base_agent_runner.py b/api/core/agent/base_agent_runner.py index 06c746990d..790602ef5d 100644 --- a/api/core/agent/base_agent_runner.py +++ b/api/core/agent/base_agent_runner.py @@ -4,20 +4,6 @@ import uuid from decimal import Decimal from typing import Union, cast -from graphon.file import file_manager -from graphon.model_runtime.entities import ( - AssistantPromptMessage, - LLMUsage, - PromptMessage, - PromptMessageTool, - SystemPromptMessage, - TextPromptMessageContent, - ToolPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes -from graphon.model_runtime.entities.model_entities import ModelFeature -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import func, select from core.agent.entities import AgentEntity, AgentToolEntity @@ -43,6 +29,20 @@ from core.tools.tool_manager import ToolManager from core.tools.utils.dataset_retriever_tool import DatasetRetrieverTool from extensions.ext_database import db from factories import file_factory +from graphon.file import file_manager +from graphon.model_runtime.entities import ( + AssistantPromptMessage, + LLMUsage, + PromptMessage, + PromptMessageTool, + SystemPromptMessage, + TextPromptMessageContent, + ToolPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes +from graphon.model_runtime.entities.model_entities import ModelFeature +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from models.enums import CreatorUserRole from models.model import Conversation, Message, MessageAgentThought, MessageFile diff --git a/api/core/agent/cot_agent_runner.py b/api/core/agent/cot_agent_runner.py index 11e2aa062d..0bc93ad34d 100644 --- a/api/core/agent/cot_agent_runner.py +++ b/api/core/agent/cot_agent_runner.py @@ -2,16 +2,7 @@ import json import logging from abc import ABC, abstractmethod from collections.abc import Generator, Mapping, Sequence -from typing import Any - -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - PromptMessage, - PromptMessageTool, - ToolPromptMessage, - UserPromptMessage, -) +from typing import Any, TypedDict from core.agent.base_agent_runner import BaseAgentRunner from core.agent.entities import AgentScratchpadUnit @@ -24,11 +15,26 @@ from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransfo from core.tools.__base.tool import Tool from core.tools.entities.tool_entities import ToolInvokeMeta from core.tools.tool_engine import ToolEngine +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + PromptMessage, + PromptMessageTool, + ToolPromptMessage, + UserPromptMessage, +) from models.model import Message logger = logging.getLogger(__name__) +class ActionDict(TypedDict): + """Shape produced by AgentScratchpadUnit.Action.to_dict().""" + + action: str + action_input: dict[str, Any] | str + + class CotAgentRunner(BaseAgentRunner, ABC): _is_first_iteration = True _ignore_observation_providers = ["wenxin"] @@ -331,7 +337,7 @@ class CotAgentRunner(BaseAgentRunner, ABC): return tool_invoke_response, tool_invoke_meta - def _convert_dict_to_action(self, action: dict) -> AgentScratchpadUnit.Action: + def _convert_dict_to_action(self, action: ActionDict) -> AgentScratchpadUnit.Action: """ convert dict to action """ diff --git a/api/core/agent/cot_chat_agent_runner.py b/api/core/agent/cot_chat_agent_runner.py index 2b2e26987e..a2186be100 100644 --- a/api/core/agent/cot_chat_agent_runner.py +++ b/api/core/agent/cot_chat_agent_runner.py @@ -1,5 +1,6 @@ import json +from core.agent.cot_agent_runner import CotAgentRunner from graphon.file import file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -11,8 +12,6 @@ from graphon.model_runtime.entities import ( from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes from graphon.model_runtime.utils.encoders import jsonable_encoder -from core.agent.cot_agent_runner import CotAgentRunner - class CotChatAgentRunner(CotAgentRunner): def _organize_system_prompt(self) -> SystemPromptMessage: diff --git a/api/core/agent/cot_completion_agent_runner.py b/api/core/agent/cot_completion_agent_runner.py index d4c52a8eb1..51a30998ae 100644 --- a/api/core/agent/cot_completion_agent_runner.py +++ b/api/core/agent/cot_completion_agent_runner.py @@ -1,5 +1,6 @@ import json +from core.agent.cot_agent_runner import CotAgentRunner from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, PromptMessage, @@ -8,8 +9,6 @@ from graphon.model_runtime.entities.message_entities import ( ) from graphon.model_runtime.utils.encoders import jsonable_encoder -from core.agent.cot_agent_runner import CotAgentRunner - class CotCompletionAgentRunner(CotAgentRunner): def _organize_instruction_prompt(self) -> str: diff --git a/api/core/agent/fc_agent_runner.py b/api/core/agent/fc_agent_runner.py index fdffde85d0..d38d24d1e7 100644 --- a/api/core/agent/fc_agent_runner.py +++ b/api/core/agent/fc_agent_runner.py @@ -4,6 +4,13 @@ from collections.abc import Generator from copy import deepcopy from typing import Any, Union +from core.agent.base_agent_runner import BaseAgentRunner +from core.agent.errors import AgentMaxIterationError +from core.app.apps.base_app_queue_manager import PublishFrom +from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent +from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform +from core.tools.entities.tool_entities import ToolInvokeMeta +from core.tools.tool_engine import ToolEngine from graphon.file import file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -19,14 +26,6 @@ from graphon.model_runtime.entities import ( UserPromptMessage, ) from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes - -from core.agent.base_agent_runner import BaseAgentRunner -from core.agent.errors import AgentMaxIterationError -from core.app.apps.base_app_queue_manager import PublishFrom -from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent -from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform -from core.tools.entities.tool_entities import ToolInvokeMeta -from core.tools.tool_engine import ToolEngine from models.model import Message logger = logging.getLogger(__name__) diff --git a/api/core/agent/output_parser/cot_output_parser.py b/api/core/agent/output_parser/cot_output_parser.py index 46c1f1230d..f341ca5a0b 100644 --- a/api/core/agent/output_parser/cot_output_parser.py +++ b/api/core/agent/output_parser/cot_output_parser.py @@ -1,17 +1,16 @@ import json import re from collections.abc import Generator -from typing import Union - -from graphon.model_runtime.entities.llm_entities import LLMResultChunk +from typing import Any, Union from core.agent.entities import AgentScratchpadUnit +from graphon.model_runtime.entities.llm_entities import LLMResultChunk class CotAgentOutputParser: @classmethod def handle_react_stream_output( - cls, llm_response: Generator[LLMResultChunk, None, None], usage_dict: dict + cls, llm_response: Generator[LLMResultChunk, None, None], usage_dict: dict[str, Any] ) -> Generator[Union[str, AgentScratchpadUnit.Action], None, None]: def parse_action(action) -> Union[str, AgentScratchpadUnit.Action]: action_name = None diff --git a/api/core/agent/plugin_entities.py b/api/core/agent/plugin_entities.py index 90aa7b5fd4..8d25863a91 100644 --- a/api/core/agent/plugin_entities.py +++ b/api/core/agent/plugin_entities.py @@ -84,7 +84,7 @@ class AgentStrategyEntity(BaseModel): identity: AgentStrategyIdentity parameters: list[AgentStrategyParameter] = Field(default_factory=list) description: I18nObject = Field(..., description="The description of the agent strategy") - output_schema: dict | None = None + output_schema: dict[str, Any] | None = None features: list[AgentFeature] | None = None meta_version: str | None = None # pydantic configs diff --git a/api/core/app/app_config/common/sensitive_word_avoidance/manager.py b/api/core/app/app_config/common/sensitive_word_avoidance/manager.py index 7d1b11c008..c8ec7cb44d 100644 --- a/api/core/app/app_config/common/sensitive_word_avoidance/manager.py +++ b/api/core/app/app_config/common/sensitive_word_avoidance/manager.py @@ -22,8 +22,8 @@ class SensitiveWordAvoidanceConfigManager: @classmethod def validate_and_set_defaults( - cls, tenant_id: str, config: dict, only_structure_validate: bool = False - ) -> tuple[dict, list[str]]: + cls, tenant_id: str, config: dict[str, Any], only_structure_validate: bool = False + ) -> tuple[dict[str, Any], list[str]]: if not config.get("sensitive_word_avoidance"): config["sensitive_word_avoidance"] = {"enabled": False} diff --git a/api/core/app/app_config/easy_ui_based_app/dataset/manager.py b/api/core/app/app_config/easy_ui_based_app/dataset/manager.py index f04a8df119..3d857a4e9c 100644 --- a/api/core/app/app_config/easy_ui_based_app/dataset/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/dataset/manager.py @@ -138,7 +138,9 @@ class DatasetConfigManager: ) @classmethod - def validate_and_set_defaults(cls, tenant_id: str, app_mode: AppMode, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults( + cls, tenant_id: str, app_mode: AppMode, config: dict[str, Any] + ) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for dataset feature @@ -172,7 +174,7 @@ class DatasetConfigManager: return config, ["agent_mode", "dataset_configs", "dataset_query_variable"] @classmethod - def extract_dataset_config_for_legacy_compatibility(cls, tenant_id: str, app_mode: AppMode, config: dict): + def extract_dataset_config_for_legacy_compatibility(cls, tenant_id: str, app_mode: AppMode, config: dict[str, Any]): """ Extract dataset config for legacy compatibility diff --git a/api/core/app/app_config/easy_ui_based_app/model_config/converter.py b/api/core/app/app_config/easy_ui_based_app/model_config/converter.py index b7dd55632e..dbd7527fc6 100644 --- a/api/core/app/app_config/easy_ui_based_app/model_config/converter.py +++ b/api/core/app/app_config/easy_ui_based_app/model_config/converter.py @@ -1,14 +1,13 @@ from typing import cast -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel - from core.app.app_config.entities import EasyUIBasedAppConfig from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.entities.model_entities import ModelStatus from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel class ModelConfigConverter: diff --git a/api/core/app/app_config/easy_ui_based_app/model_config/manager.py b/api/core/app/app_config/easy_ui_based_app/model_config/manager.py index 5cc385c378..02498c23e1 100644 --- a/api/core/app/app_config/easy_ui_based_app/model_config/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/model_config/manager.py @@ -1,10 +1,9 @@ from collections.abc import Mapping from typing import Any -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType - from core.app.app_config.entities import ModelConfigEntity from core.plugin.impl.model_runtime_factory import create_plugin_model_assembly +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType from models.model import AppModelConfigDict from models.provider_ids import ModelProviderID @@ -41,7 +40,7 @@ class ModelConfigManager: ) @classmethod - def validate_and_set_defaults(cls, tenant_id: str, config: Mapping[str, Any]) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, tenant_id: str, config: Mapping[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for model config @@ -108,7 +107,7 @@ class ModelConfigManager: return dict(config), ["model"] @classmethod - def validate_model_completion_params(cls, cp: dict): + def validate_model_completion_params(cls, cp: dict[str, Any]): # model.completion_params if not isinstance(cp, dict): raise ValueError("model.completion_params must be of object type") diff --git a/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py b/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py index 76196e7034..4c07445df3 100644 --- a/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py @@ -1,7 +1,5 @@ from typing import Any -from graphon.model_runtime.entities.message_entities import PromptMessageRole - from core.app.app_config.entities import ( AdvancedChatMessageEntity, AdvancedChatPromptTemplateEntity, @@ -9,6 +7,7 @@ from core.app.app_config.entities import ( PromptTemplateEntity, ) from core.prompt.simple_prompt_transform import ModelMode +from graphon.model_runtime.entities.message_entities import PromptMessageRole from models.model import AppMode, AppModelConfigDict @@ -65,7 +64,7 @@ class PromptTemplateConfigManager: ) @classmethod - def validate_and_set_defaults(cls, app_mode: AppMode, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, app_mode: AppMode, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate pre_prompt and set defaults for prompt feature depending on the config['model'] @@ -130,7 +129,7 @@ class PromptTemplateConfigManager: return config, ["prompt_type", "pre_prompt", "chat_prompt_config", "completion_prompt_config"] @classmethod - def validate_post_prompt_and_set_defaults(cls, config: dict): + def validate_post_prompt_and_set_defaults(cls, config: dict[str, Any]): """ Validate post_prompt and set defaults for prompt feature diff --git a/api/core/app/app_config/easy_ui_based_app/variables/manager.py b/api/core/app/app_config/easy_ui_based_app/variables/manager.py index f0b71c5801..ddb500cccf 100644 --- a/api/core/app/app_config/easy_ui_based_app/variables/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/variables/manager.py @@ -1,10 +1,9 @@ import re -from typing import cast - -from graphon.variables.input_entities import VariableEntity, VariableEntityType +from typing import Any, cast from core.app.app_config.entities import ExternalDataVariableEntity from core.external_data_tool.factory import ExternalDataToolFactory +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.model import AppModelConfigDict _ALLOWED_VARIABLE_ENTITY_TYPE = frozenset( @@ -82,7 +81,7 @@ class BasicVariablesConfigManager: return variable_entities, external_data_variables @classmethod - def validate_and_set_defaults(cls, tenant_id: str, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, tenant_id: str, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for user input form @@ -99,7 +98,7 @@ class BasicVariablesConfigManager: return config, related_config_keys @classmethod - def validate_variables_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_variables_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for user input form @@ -164,7 +163,9 @@ class BasicVariablesConfigManager: return config, ["user_input_form"] @classmethod - def validate_external_data_tools_and_set_defaults(cls, tenant_id: str, config: dict) -> tuple[dict, list[str]]: + def validate_external_data_tools_and_set_defaults( + cls, tenant_id: str, config: dict[str, Any] + ) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for external data fetch feature diff --git a/api/core/app/app_config/entities.py b/api/core/app/app_config/entities.py index 819aca864c..53563dc5da 100644 --- a/api/core/app/app_config/entities.py +++ b/api/core/app/app_config/entities.py @@ -1,14 +1,14 @@ from enum import StrEnum, auto from typing import Any, Literal -from graphon.file import FileUploadConfig -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.message_entities import PromptMessageRole -from graphon.variables.input_entities import VariableEntity as WorkflowVariableEntity from pydantic import BaseModel, Field from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict from core.rag.entities import MetadataFilteringCondition +from graphon.file import FileUploadConfig +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.message_entities import PromptMessageRole +from graphon.variables.input_entities import VariableEntity as WorkflowVariableEntity from models.model import AppMode diff --git a/api/core/app/app_config/features/file_upload/manager.py b/api/core/app/app_config/features/file_upload/manager.py index e96517c426..8f20ef2ff9 100644 --- a/api/core/app/app_config/features/file_upload/manager.py +++ b/api/core/app/app_config/features/file_upload/manager.py @@ -1,9 +1,8 @@ from collections.abc import Mapping from typing import Any -from graphon.file import FileUploadConfig - from constants import DEFAULT_FILE_NUMBER_LIMITS +from graphon.file import FileUploadConfig class FileUploadConfigManager: @@ -30,7 +29,7 @@ class FileUploadConfigManager: return FileUploadConfig.model_validate(file_upload_dict) @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for file upload feature diff --git a/api/core/app/app_config/features/more_like_this/manager.py b/api/core/app/app_config/features/more_like_this/manager.py index ef71bb348a..b167c04ab5 100644 --- a/api/core/app/app_config/features/more_like_this/manager.py +++ b/api/core/app/app_config/features/more_like_this/manager.py @@ -1,3 +1,5 @@ +from typing import Any + from pydantic import BaseModel, ConfigDict, Field, ValidationError @@ -13,7 +15,7 @@ class AppConfigModel(BaseModel): class MoreLikeThisConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: """ Convert model config to model config @@ -23,7 +25,7 @@ class MoreLikeThisConfigManager: return AppConfigModel.model_validate(validated_config).more_like_this.enabled @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: try: return AppConfigModel.model_validate(config).model_dump(), ["more_like_this"] except ValidationError: diff --git a/api/core/app/app_config/features/opening_statement/manager.py b/api/core/app/app_config/features/opening_statement/manager.py index 92b4185abf..33f5aec183 100644 --- a/api/core/app/app_config/features/opening_statement/manager.py +++ b/api/core/app/app_config/features/opening_statement/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class OpeningStatementConfigManager: @classmethod - def convert(cls, config: dict) -> tuple[str, list]: + def convert(cls, config: dict[str, Any]) -> tuple[str, list[str]]: """ Convert model config to model config @@ -15,7 +18,7 @@ class OpeningStatementConfigManager: return opening_statement, suggested_questions_list @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for opening statement feature diff --git a/api/core/app/app_config/features/retrieval_resource/manager.py b/api/core/app/app_config/features/retrieval_resource/manager.py index d098abac2f..8157fb41db 100644 --- a/api/core/app/app_config/features/retrieval_resource/manager.py +++ b/api/core/app/app_config/features/retrieval_resource/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class RetrievalResourceConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: show_retrieve_source = False retriever_resource_dict = config.get("retriever_resource") if retriever_resource_dict: @@ -10,7 +13,7 @@ class RetrievalResourceConfigManager: return show_retrieve_source @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for retriever resource feature diff --git a/api/core/app/app_config/features/speech_to_text/manager.py b/api/core/app/app_config/features/speech_to_text/manager.py index e10ae03e04..679b8c343b 100644 --- a/api/core/app/app_config/features/speech_to_text/manager.py +++ b/api/core/app/app_config/features/speech_to_text/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class SpeechToTextConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: """ Convert model config to model config @@ -15,7 +18,7 @@ class SpeechToTextConfigManager: return speech_to_text @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for speech to text feature diff --git a/api/core/app/app_config/features/suggested_questions_after_answer/manager.py b/api/core/app/app_config/features/suggested_questions_after_answer/manager.py index 9ac5114d12..2dddce349c 100644 --- a/api/core/app/app_config/features/suggested_questions_after_answer/manager.py +++ b/api/core/app/app_config/features/suggested_questions_after_answer/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class SuggestedQuestionsAfterAnswerConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: """ Convert model config to model config @@ -15,7 +18,7 @@ class SuggestedQuestionsAfterAnswerConfigManager: return suggested_questions_after_answer @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for suggested questions feature diff --git a/api/core/app/app_config/features/text_to_speech/manager.py b/api/core/app/app_config/features/text_to_speech/manager.py index 1c75981785..ca84ec9c3b 100644 --- a/api/core/app/app_config/features/text_to_speech/manager.py +++ b/api/core/app/app_config/features/text_to_speech/manager.py @@ -1,9 +1,11 @@ +from typing import Any + from core.app.app_config.entities import TextToSpeechEntity class TextToSpeechConfigManager: @classmethod - def convert(cls, config: dict): + def convert(cls, config: dict[str, Any]): """ Convert model config to model config @@ -22,7 +24,7 @@ class TextToSpeechConfigManager: return text_to_speech @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for text to speech feature diff --git a/api/core/app/app_config/workflow_ui_based_app/variables/manager.py b/api/core/app/app_config/workflow_ui_based_app/variables/manager.py index 62e0c31d1a..13ace32fd6 100644 --- a/api/core/app/app_config/workflow_ui_based_app/variables/manager.py +++ b/api/core/app/app_config/workflow_ui_based_app/variables/manager.py @@ -1,8 +1,7 @@ import re -from graphon.variables.input_entities import VariableEntity - from core.app.app_config.entities import RagPipelineVariableEntity +from graphon.variables.input_entities import VariableEntity from models.workflow import Workflow diff --git a/api/core/app/apps/advanced_chat/app_generator.py b/api/core/app/apps/advanced_chat/app_generator.py index 985ded0f74..9e64b471cb 100644 --- a/api/core/app/apps/advanced_chat/app_generator.py +++ b/api/core/app/apps/advanced_chat/app_generator.py @@ -18,11 +18,6 @@ from constants import UUID_NIL if TYPE_CHECKING: from controllers.console.app.workflow import LoopNodeRunPayload -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from graphon.runtime import GraphRuntimeState -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader - from core.app.app_config.features.file_upload.manager import FileUploadConfigManager from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner @@ -48,6 +43,10 @@ from core.repositories import DifyCoreRepositoryFactory from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository from extensions.ext_database import db from factories import file_factory +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError +from graphon.runtime import GraphRuntimeState +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from libs.flask_utils import preserve_flask_contexts from models import Account, App, Conversation, EndUser, Message, Workflow, WorkflowNodeExecutionTriggeredFrom from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/app/apps/advanced_chat/app_runner.py b/api/core/app/apps/advanced_chat/app_runner.py index a884a1c7f9..4e57b4dedc 100644 --- a/api/core/app/apps/advanced_chat/app_runner.py +++ b/api/core/app/apps/advanced_chat/app_runner.py @@ -3,14 +3,8 @@ import time from collections.abc import Mapping, Sequence from typing import Any, cast -from graphon.enums import WorkflowType -from graphon.graph_engine.command_channels import RedisChannel -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import VariableLoader -from graphon.variables.variables import Variable from sqlalchemy import select -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfig from core.app.apps.base_app_queue_manager import AppQueueManager @@ -43,6 +37,12 @@ from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_database import db from extensions.ext_redis import redis_client from extensions.otel import WorkflowAppRunnerHandler, trace_span +from graphon.enums import WorkflowType +from graphon.graph_engine.command_channels import RedisChannel +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import VariableLoader +from graphon.variables.variables import Variable from models import Workflow from models.model import App, Conversation, Message, MessageAnnotation from models.workflow import ConversationVariable @@ -363,7 +363,7 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner): :return: List of conversation variables ready for use """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: existing_variables = self._load_existing_conversation_variables(session) if not existing_variables: @@ -376,7 +376,6 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner): # Convert to Variable objects for use in the workflow conversation_variables = [var.to_variable() for var in existing_variables] - session.commit() return cast(list[Variable], conversation_variables) def _load_existing_conversation_variables(self, session: Session) -> list[ConversationVariable]: diff --git a/api/core/app/apps/advanced_chat/generate_response_converter.py b/api/core/app/apps/advanced_chat/generate_response_converter.py index 5c9bc43992..fe2702ed69 100644 --- a/api/core/app/apps/advanced_chat/generate_response_converter.py +++ b/api/core/app/apps/advanced_chat/generate_response_converter.py @@ -57,7 +57,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, Any, None]: + ) -> Generator[dict[str, Any] | str, Any, None]: """ Convert stream full response. :param stream_response: stream response @@ -88,7 +88,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, Any, None]: + ) -> Generator[dict[str, Any] | str, Any, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/advanced_chat/generate_task_pipeline.py b/api/core/app/apps/advanced_chat/generate_task_pipeline.py index 5203de225c..78b582bdf5 100644 --- a/api/core/app/apps/advanced_chat/generate_task_pipeline.py +++ b/api/core/app/apps/advanced_chat/generate_task_pipeline.py @@ -9,14 +9,8 @@ from datetime import datetime from threading import Thread from typing import Any, Union -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes import BuiltinNodeTypes -from graphon.runtime import GraphRuntimeState from sqlalchemy import select -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from constants.tts_auto_play_timeout import TTS_AUTO_PLAY_TIMEOUT, TTS_AUTO_PLAY_YIELD_CPU_TIME from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom @@ -77,6 +71,12 @@ from core.repositories.human_input_repository import HumanInputFormRepositoryImp from core.workflow.file_reference import resolve_file_record_id from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes import BuiltinNodeTypes +from graphon.runtime import GraphRuntimeState from libs.datetime_utils import naive_utc_now from models import Account, Conversation, EndUser, Message, MessageFile from models.enums import CreatorUserRole, MessageFileBelongsTo, MessageStatus @@ -328,13 +328,8 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport): @contextmanager def _database_session(self): """Context manager for database sessions.""" - with Session(db.engine, expire_on_commit=False) as session: - try: - yield session - session.commit() - except Exception: - session.rollback() - raise + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: + yield session def _ensure_workflow_initialized(self): """Fluent validation for workflow state.""" diff --git a/api/core/app/apps/agent_chat/app_generator.py b/api/core/app/apps/agent_chat/app_generator.py index 5872f6b264..5cdc477028 100644 --- a/api/core/app/apps/agent_chat/app_generator.py +++ b/api/core/app/apps/agent_chat/app_generator.py @@ -6,7 +6,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, overload from flask import Flask, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from configs import dify_config @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import AgentChatAppGenerateEntity, In from core.ops.ops_trace_manager import TraceQueueManager from extensions.ext_database import db from factories import file_factory +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from libs.flask_utils import preserve_flask_contexts from models import Account, App, EndUser from services.conversation_service import ConversationService diff --git a/api/core/app/apps/agent_chat/app_runner.py b/api/core/app/apps/agent_chat/app_runner.py index a20d3f3c38..09ddce327e 100644 --- a/api/core/app/apps/agent_chat/app_runner.py +++ b/api/core/app/apps/agent_chat/app_runner.py @@ -1,9 +1,6 @@ import logging from typing import cast -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import select from core.agent.cot_chat_agent_runner import CotChatAgentRunner @@ -19,6 +16,9 @@ from core.memory.token_buffer_memory import TokenBufferMemory from core.model_manager import ModelInstance from core.moderation.base import ModerationError from extensions.ext_database import db +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from models.model import App, Conversation, Message logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/agent_chat/generate_response_converter.py b/api/core/app/apps/agent_chat/generate_response_converter.py index 0c146c388f..731c6ee12e 100644 --- a/api/core/app/apps/agent_chat/generate_response_converter.py +++ b/api/core/app/apps/agent_chat/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -56,7 +56,7 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -87,7 +87,7 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/base_app_generate_response_converter.py b/api/core/app/apps/base_app_generate_response_converter.py index 6e5a86505c..d5edfaeb25 100644 --- a/api/core/app/apps/base_app_generate_response_converter.py +++ b/api/core/app/apps/base_app_generate_response_converter.py @@ -3,11 +3,10 @@ from abc import ABC, abstractmethod from collections.abc import Generator, Mapping from typing import Any, Union -from graphon.model_runtime.errors.invoke import InvokeError - from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.task_entities import AppBlockingResponse, AppStreamResponse from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError logger = logging.getLogger(__name__) @@ -24,7 +23,7 @@ class AppGenerateResponseConverter(ABC): return cls.convert_blocking_full_response(response) else: - def _generate_full_response() -> Generator[dict | str, Any, None]: + def _generate_full_response() -> Generator[dict[str, Any] | str, Any, None]: yield from cls.convert_stream_full_response(response) return _generate_full_response() @@ -33,7 +32,7 @@ class AppGenerateResponseConverter(ABC): return cls.convert_blocking_simple_response(response) else: - def _generate_simple_response() -> Generator[dict | str, Any, None]: + def _generate_simple_response() -> Generator[dict[str, Any] | str, Any, None]: yield from cls.convert_stream_simple_response(response) return _generate_simple_response() @@ -52,14 +51,14 @@ class AppGenerateResponseConverter(ABC): @abstractmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: raise NotImplementedError @classmethod @abstractmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: raise NotImplementedError @classmethod diff --git a/api/core/app/apps/base_app_generator.py b/api/core/app/apps/base_app_generator.py index 7eccd59d17..8e8ccf2b90 100644 --- a/api/core/app/apps/base_app_generator.py +++ b/api/core/app/apps/base_app_generator.py @@ -2,9 +2,6 @@ from collections.abc import Generator, Mapping, Sequence from contextlib import AbstractContextManager, nullcontext from typing import TYPE_CHECKING, Any, Union, final -from graphon.enums import NodeType -from graphon.file import File, FileUploadConfig -from graphon.variables.input_entities import VariableEntityType from sqlalchemy.orm import Session from core.app.apps.draft_variable_saver import ( @@ -16,6 +13,9 @@ from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.app.file_access import DatabaseFileAccessController, FileAccessScope, bind_file_access_scope from extensions.ext_database import db from factories import file_factory +from graphon.enums import NodeType +from graphon.file import File, FileUploadConfig +from graphon.variables.input_entities import VariableEntityType from libs.orjson import orjson_dumps from models import Account, EndUser from services.workflow_draft_variable_service import DraftVariableSaver as DraftVariableSaverImpl diff --git a/api/core/app/apps/base_app_queue_manager.py b/api/core/app/apps/base_app_queue_manager.py index 20bf81aeec..d1771452c5 100644 --- a/api/core/app/apps/base_app_queue_manager.py +++ b/api/core/app/apps/base_app_queue_manager.py @@ -7,7 +7,6 @@ from enum import IntEnum, auto from typing import Any from cachetools import TTLCache, cachedmethod -from graphon.runtime import GraphRuntimeState from redis.exceptions import RedisError from sqlalchemy.orm import DeclarativeMeta @@ -22,6 +21,7 @@ from core.app.entities.queue_entities import ( WorkflowQueueMessage, ) from extensions.ext_redis import redis_client +from graphon.runtime import GraphRuntimeState logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/base_app_runner.py b/api/core/app/apps/base_app_runner.py index 4aebc0cb30..1251b397e2 100644 --- a/api/core/app/apps/base_app_runner.py +++ b/api/core/app/apps/base_app_runner.py @@ -5,17 +5,6 @@ from collections.abc import Generator, Mapping, Sequence from mimetypes import guess_extension from typing import TYPE_CHECKING, Any, Union -from graphon.file import FileTransferMethod, FileType -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - ImagePromptMessageContent, - PromptMessage, - TextPromptMessageContent, -) -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.errors.invoke import InvokeBadRequestError - from core.app.app_config.entities import ExternalDataVariableEntity, PromptTemplateEntity from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom from core.app.entities.app_invoke_entities import ( @@ -41,6 +30,16 @@ from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, Comp from core.prompt.simple_prompt_transform import ModelMode, SimplePromptTransform from core.tools.tool_file_manager import ToolFileManager from extensions.ext_database import db +from graphon.file import FileTransferMethod, FileType +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + ImagePromptMessageContent, + PromptMessage, + TextPromptMessageContent, +) +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.errors.invoke import InvokeBadRequestError from models.enums import CreatorUserRole, MessageFileBelongsTo from models.model import App, AppMode, Message, MessageAnnotation, MessageFile diff --git a/api/core/app/apps/chat/app_generator.py b/api/core/app/apps/chat/app_generator.py index 891dcece73..58afefe296 100644 --- a/api/core/app/apps/chat/app_generator.py +++ b/api/core/app/apps/chat/app_generator.py @@ -6,7 +6,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, overload from flask import Flask, copy_current_request_context, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from configs import dify_config @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import ChatAppGenerateEntity, InvokeF from core.ops.ops_trace_manager import TraceQueueManager from extensions.ext_database import db from factories import file_factory +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from models import Account from models.model import App, EndUser from services.conversation_service import ConversationService diff --git a/api/core/app/apps/chat/app_runner.py b/api/core/app/apps/chat/app_runner.py index 050f763e95..077c5239f3 100644 --- a/api/core/app/apps/chat/app_runner.py +++ b/api/core/app/apps/chat/app_runner.py @@ -1,8 +1,6 @@ import logging from typing import cast -from graphon.file import File -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from sqlalchemy import select from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom @@ -18,6 +16,8 @@ from core.model_manager import ModelInstance from core.moderation.base import ModerationError from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from extensions.ext_database import db +from graphon.file import File +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from models.model import App, Conversation, Message logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/chat/generate_response_converter.py b/api/core/app/apps/chat/generate_response_converter.py index f23ee7f89f..3d0375151d 100644 --- a/api/core/app/apps/chat/generate_response_converter.py +++ b/api/core/app/apps/chat/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -56,7 +56,7 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -87,7 +87,7 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/common/graph_runtime_state_support.py b/api/core/app/apps/common/graph_runtime_state_support.py index ab277857fe..2a90fbdad0 100644 --- a/api/core/app/apps/common/graph_runtime_state_support.py +++ b/api/core/app/apps/common/graph_runtime_state_support.py @@ -4,9 +4,8 @@ from __future__ import annotations from typing import TYPE_CHECKING -from graphon.runtime import GraphRuntimeState - from core.workflow.system_variables import SystemVariableKey, get_system_text +from graphon.runtime import GraphRuntimeState if TYPE_CHECKING: from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline diff --git a/api/core/app/apps/common/workflow_response_converter.py b/api/core/app/apps/common/workflow_response_converter.py index a515531616..bd685d5189 100644 --- a/api/core/app/apps/common/workflow_response_converter.py +++ b/api/core/app/apps/common/workflow_response_converter.py @@ -6,19 +6,6 @@ from dataclasses import dataclass from datetime import datetime from typing import Any, NewType, TypedDict, Union -from graphon.entities import WorkflowStartReason -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import ( - BuiltinNodeTypes, - WorkflowExecutionStatus, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) -from graphon.file import FILE_MODEL_IDENTITY, File -from graphon.runtime import GraphRuntimeState -from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment -from graphon.variables.variables import Variable -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import select from sqlalchemy.orm import Session @@ -68,6 +55,19 @@ from core.workflow.human_input_forms import load_form_tokens_by_form_id from core.workflow.system_variables import SystemVariableKey, system_variables_to_mapping from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_database import db +from graphon.entities import WorkflowStartReason +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import ( + BuiltinNodeTypes, + WorkflowExecutionStatus, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) +from graphon.file import FILE_MODEL_IDENTITY, File +from graphon.runtime import GraphRuntimeState +from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment +from graphon.variables.variables import Variable +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.datetime_utils import naive_utc_now from models import Account, EndUser from models.human_input import HumanInputForm diff --git a/api/core/app/apps/completion/app_generator.py b/api/core/app/apps/completion/app_generator.py index 61339b316a..423bfdac51 100644 --- a/api/core/app/apps/completion/app_generator.py +++ b/api/core/app/apps/completion/app_generator.py @@ -6,7 +6,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, overload from flask import Flask, copy_current_request_context, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from sqlalchemy import select @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import CompletionAppGenerateEntity, I from core.ops.ops_trace_manager import TraceQueueManager from extensions.ext_database import db from factories import file_factory +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from models import Account, App, EndUser, Message from services.errors.app import MoreLikeThisDisabledError from services.errors.message import MessageNotExistsError diff --git a/api/core/app/apps/completion/app_runner.py b/api/core/app/apps/completion/app_runner.py index b216f7cf7b..6bb1ecdcb1 100644 --- a/api/core/app/apps/completion/app_runner.py +++ b/api/core/app/apps/completion/app_runner.py @@ -1,8 +1,6 @@ import logging from typing import cast -from graphon.file import File -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from sqlalchemy import select from core.app.apps.base_app_queue_manager import AppQueueManager @@ -16,6 +14,8 @@ from core.model_manager import ModelInstance from core.moderation.base import ModerationError from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from extensions.ext_database import db +from graphon.file import File +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from models.model import App, Message logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/completion/generate_response_converter.py b/api/core/app/apps/completion/generate_response_converter.py index a4f574642d..71886b39ba 100644 --- a/api/core/app/apps/completion/generate_response_converter.py +++ b/api/core/app/apps/completion/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -55,7 +55,7 @@ class CompletionAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -85,7 +85,7 @@ class CompletionAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/pipeline/generate_response_converter.py b/api/core/app/apps/pipeline/generate_response_converter.py index cfacd8640d..02b3160b7c 100644 --- a/api/core/app/apps/pipeline/generate_response_converter.py +++ b/api/core/app/apps/pipeline/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -17,7 +17,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): _blocking_response_type = WorkflowAppBlockingResponse @classmethod - def convert_blocking_full_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict: # type: ignore[override] + def convert_blocking_full_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict[str, Any]: # type: ignore[override] """ Convert blocking full response. :param blocking_response: blocking response @@ -26,7 +26,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): return dict(blocking_response.model_dump()) @classmethod - def convert_blocking_simple_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict: # type: ignore[override] + def convert_blocking_simple_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict[str, Any]: # type: ignore[override] """ Convert blocking simple response. :param blocking_response: blocking response @@ -37,7 +37,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -66,7 +66,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/pipeline/pipeline_config_manager.py b/api/core/app/apps/pipeline/pipeline_config_manager.py index 72b7f4bef6..8bbd745538 100644 --- a/api/core/app/apps/pipeline/pipeline_config_manager.py +++ b/api/core/app/apps/pipeline/pipeline_config_manager.py @@ -1,3 +1,5 @@ +from typing import Any + from core.app.app_config.base_app_config_manager import BaseAppConfigManager from core.app.app_config.common.sensitive_word_avoidance.manager import SensitiveWordAvoidanceConfigManager from core.app.app_config.entities import RagPipelineVariableEntity, WorkflowUIBasedAppConfig @@ -34,7 +36,9 @@ class PipelineConfigManager(BaseAppConfigManager): return pipeline_config @classmethod - def config_validate(cls, tenant_id: str, config: dict, only_structure_validate: bool = False) -> dict: + def config_validate( + cls, tenant_id: str, config: dict[str, Any], only_structure_validate: bool = False + ) -> dict[str, Any]: """ Validate for pipeline config diff --git a/api/core/app/apps/pipeline/pipeline_generator.py b/api/core/app/apps/pipeline/pipeline_generator.py index 139c7e73e0..4b2f17189b 100644 --- a/api/core/app/apps/pipeline/pipeline_generator.py +++ b/api/core/app/apps/pipeline/pipeline_generator.py @@ -10,8 +10,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, cast, overload from flask import Flask, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from pydantic import ValidationError from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -43,6 +41,8 @@ from core.repositories.factory import ( WorkflowNodeExecutionRepository, ) from extensions.ext_database import db +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from libs.flask_utils import preserve_flask_contexts from models import Account, EndUser, Workflow, WorkflowNodeExecutionTriggeredFrom from models.dataset import Document, DocumentPipelineExecutionLog, Pipeline @@ -782,7 +782,7 @@ class PipelineGenerator(BaseAppGenerator): user_id: str, all_files: list, datasource_info: Mapping[str, Any], - next_page_parameters: dict | None = None, + next_page_parameters: dict[str, Any] | None = None, ): """ Get files in a folder. diff --git a/api/core/app/apps/pipeline/pipeline_runner.py b/api/core/app/apps/pipeline/pipeline_runner.py index b4d2310da8..2ee0ae27eb 100644 --- a/api/core/app/apps/pipeline/pipeline_runner.py +++ b/api/core/app/apps/pipeline/pipeline_runner.py @@ -2,13 +2,6 @@ import logging import time from typing import cast -from graphon.entities import GraphInitParams -from graphon.enums import WorkflowType -from graphon.graph import Graph -from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import VariableLoader -from graphon.variables.variables import RAGPipelineVariable, RAGPipelineVariableInput from sqlalchemy import select from core.app.apps.base_app_queue_manager import AppQueueManager @@ -22,11 +15,17 @@ from core.app.entities.app_invoke_entities import ( ) from core.app.workflow.layers.persistence import PersistenceWorkflowInfo, WorkflowPersistenceLayer from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository -from core.workflow.node_factory import DifyNodeFactory, get_default_root_node_id +from core.workflow.node_factory import DifyGraphInitContext, DifyNodeFactory, get_default_root_node_id from core.workflow.system_variables import build_bootstrap_variables, build_system_variables from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_database import db +from graphon.enums import WorkflowType +from graphon.graph import Graph +from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import VariableLoader +from graphon.variables.variables import RAGPipelineVariable, RAGPipelineVariableInput from models.dataset import Document, Pipeline from models.model import EndUser from models.workflow import Workflow @@ -265,22 +264,23 @@ class PipelineRunner(WorkflowBasedAppRunner): # graph_config["nodes"] = real_run_nodes # graph_config["edges"] = real_edges # init graph - # Create required parameters for Graph.init - graph_init_params = GraphInitParams( + # Create explicit graph init context for Graph.init. + run_context = build_dify_run_context( + tenant_id=workflow.tenant_id, + app_id=self._app_id, + user_id=self.application_generate_entity.user_id, + user_from=user_from, + invoke_from=invoke_from, + ) + graph_init_context = DifyGraphInitContext( workflow_id=workflow.id, graph_config=graph_config, - run_context=build_dify_run_context( - tenant_id=workflow.tenant_id, - app_id=self._app_id, - user_id=self.application_generate_entity.user_id, - user_from=user_from, - invoke_from=invoke_from, - ), + run_context=run_context, call_depth=0, ) - node_factory = DifyNodeFactory( - graph_init_params=graph_init_params, + node_factory = DifyNodeFactory.from_graph_init_context( + graph_init_context=graph_init_context, graph_runtime_state=graph_runtime_state, ) if start_node_id is None: diff --git a/api/core/app/apps/workflow/app_generator.py b/api/core/app/apps/workflow/app_generator.py index 6074e81d1e..6937014a06 100644 --- a/api/core/app/apps/workflow/app_generator.py +++ b/api/core/app/apps/workflow/app_generator.py @@ -8,10 +8,6 @@ from collections.abc import Generator, Mapping, Sequence from typing import TYPE_CHECKING, Any, Literal, overload from flask import Flask, current_app -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from graphon.runtime import GraphRuntimeState -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from pydantic import ValidationError from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -38,6 +34,10 @@ from core.repositories import DifyCoreRepositoryFactory from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository from extensions.ext_database import db from factories import file_factory +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError +from graphon.runtime import GraphRuntimeState +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from libs.flask_utils import preserve_flask_contexts from models.account import Account from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/app/apps/workflow/app_runner.py b/api/core/app/apps/workflow/app_runner.py index 2cb8088971..cfb9208486 100644 --- a/api/core/app/apps/workflow/app_runner.py +++ b/api/core/app/apps/workflow/app_runner.py @@ -3,12 +3,6 @@ import time from collections.abc import Sequence from typing import cast -from graphon.enums import WorkflowType -from graphon.graph_engine.command_channels import RedisChannel -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import VariableLoader - from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.apps.workflow.app_config_manager import WorkflowAppConfig from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner @@ -21,6 +15,11 @@ from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_redis import redis_client from extensions.otel import WorkflowAppRunnerHandler, trace_span +from graphon.enums import WorkflowType +from graphon.graph_engine.command_channels import RedisChannel +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import VariableLoader from libs.datetime_utils import naive_utc_now from models.workflow import Workflow diff --git a/api/core/app/apps/workflow/generate_response_converter.py b/api/core/app/apps/workflow/generate_response_converter.py index c64f44a603..c69826cbef 100644 --- a/api/core/app/apps/workflow/generate_response_converter.py +++ b/api/core/app/apps/workflow/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -37,7 +37,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -66,7 +66,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/workflow/generate_task_pipeline.py b/api/core/app/apps/workflow/generate_task_pipeline.py index 49af169e88..15645add57 100644 --- a/api/core/app/apps/workflow/generate_task_pipeline.py +++ b/api/core/app/apps/workflow/generate_task_pipeline.py @@ -4,10 +4,7 @@ from collections.abc import Callable, Generator from contextlib import contextmanager from typing import Union -from graphon.entities import WorkflowStartReason -from graphon.enums import WorkflowExecutionStatus -from graphon.runtime import GraphRuntimeState -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from constants.tts_auto_play_timeout import TTS_AUTO_PLAY_TIMEOUT, TTS_AUTO_PLAY_YIELD_CPU_TIME from core.app.apps.base_app_queue_manager import AppQueueManager @@ -61,6 +58,9 @@ from core.base.tts import AppGeneratorTTSPublisher, AudioTrunk from core.ops.ops_trace_manager import TraceQueueManager from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.entities import WorkflowStartReason +from graphon.enums import WorkflowExecutionStatus +from graphon.runtime import GraphRuntimeState from models import Account from models.enums import CreatorUserRole from models.model import EndUser @@ -252,13 +252,8 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport): @contextmanager def _database_session(self): """Context manager for database sessions.""" - with Session(db.engine, expire_on_commit=False) as session: - try: - yield session - session.commit() - except Exception: - session.rollback() - raise + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: + yield session def _ensure_workflow_initialized(self): """Fluent validation for workflow state.""" @@ -687,15 +682,16 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport): def _save_workflow_app_log(self, *, session: Session, workflow_run_id: str | None): invoke_from = self._application_generate_entity.invoke_from - if invoke_from == InvokeFrom.SERVICE_API: - created_from = WorkflowAppLogCreatedFrom.SERVICE_API - elif invoke_from == InvokeFrom.EXPLORE: - created_from = WorkflowAppLogCreatedFrom.INSTALLED_APP - elif invoke_from == InvokeFrom.WEB_APP: - created_from = WorkflowAppLogCreatedFrom.WEB_APP - else: - # not save log for debugging - return + match invoke_from: + case InvokeFrom.SERVICE_API: + created_from = WorkflowAppLogCreatedFrom.SERVICE_API + case InvokeFrom.EXPLORE: + created_from = WorkflowAppLogCreatedFrom.INSTALLED_APP + case InvokeFrom.WEB_APP: + created_from = WorkflowAppLogCreatedFrom.WEB_APP + case InvokeFrom.DEBUGGER | InvokeFrom.TRIGGER | InvokeFrom.PUBLISHED_PIPELINE | InvokeFrom.VALIDATION: + # not save log for debugging + return if not workflow_run_id: return diff --git a/api/core/app/apps/workflow_app_runner.py b/api/core/app/apps/workflow_app_runner.py index caa6b82bab..047b54c86c 100644 --- a/api/core/app/apps/workflow_app_runner.py +++ b/api/core/app/apps/workflow_app_runner.py @@ -3,7 +3,52 @@ import time from collections.abc import Mapping, Sequence from typing import Any, cast -from graphon.entities import GraphInitParams +from pydantic import ValidationError + +from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom +from core.app.entities.agent_strategy import AgentStrategyInfo +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context +from core.app.entities.queue_entities import ( + AppQueueEvent, + QueueAgentLogEvent, + QueueHumanInputFormFilledEvent, + QueueHumanInputFormTimeoutEvent, + QueueIterationCompletedEvent, + QueueIterationNextEvent, + QueueIterationStartEvent, + QueueLoopCompletedEvent, + QueueLoopNextEvent, + QueueLoopStartEvent, + QueueNodeExceptionEvent, + QueueNodeFailedEvent, + QueueNodeRetryEvent, + QueueNodeStartedEvent, + QueueNodeSucceededEvent, + QueueRetrieverResourcesEvent, + QueueTextChunkEvent, + QueueWorkflowFailedEvent, + QueueWorkflowPartialSuccessEvent, + QueueWorkflowPausedEvent, + QueueWorkflowStartedEvent, + QueueWorkflowSucceededEvent, +) +from core.rag.entities import RetrievalSourceMetadata +from core.workflow.node_factory import ( + DifyGraphInitContext, + DifyNodeFactory, + get_default_root_node_id, + resolve_workflow_node_class, +) +from core.workflow.system_variables import ( + build_bootstrap_variables, + default_system_variables, + get_node_creation_preload_selectors, + inject_default_system_variable_mappings, + preload_node_creation_variables, +) +from core.workflow.variable_pool_initializer import add_variables_to_pool +from core.workflow.workflow_entry import WorkflowEntry +from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run from graphon.entities.graph_config import NodeConfigDictAdapter from graphon.entities.pause_reason import HumanInputRequired from graphon.graph import Graph @@ -37,47 +82,6 @@ from graphon.graph_events import ( ) from graphon.runtime import GraphRuntimeState, VariablePool from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader, load_into_variable_pool -from pydantic import ValidationError - -from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom -from core.app.entities.agent_strategy import AgentStrategyInfo -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context -from core.app.entities.queue_entities import ( - AppQueueEvent, - QueueAgentLogEvent, - QueueHumanInputFormFilledEvent, - QueueHumanInputFormTimeoutEvent, - QueueIterationCompletedEvent, - QueueIterationNextEvent, - QueueIterationStartEvent, - QueueLoopCompletedEvent, - QueueLoopNextEvent, - QueueLoopStartEvent, - QueueNodeExceptionEvent, - QueueNodeFailedEvent, - QueueNodeRetryEvent, - QueueNodeStartedEvent, - QueueNodeSucceededEvent, - QueueRetrieverResourcesEvent, - QueueTextChunkEvent, - QueueWorkflowFailedEvent, - QueueWorkflowPartialSuccessEvent, - QueueWorkflowPausedEvent, - QueueWorkflowStartedEvent, - QueueWorkflowSucceededEvent, -) -from core.rag.entities import RetrievalSourceMetadata -from core.workflow.node_factory import DifyNodeFactory, get_default_root_node_id, resolve_workflow_node_class -from core.workflow.system_variables import ( - build_bootstrap_variables, - default_system_variables, - get_node_creation_preload_selectors, - inject_default_system_variable_mappings, - preload_node_creation_variables, -) -from core.workflow.variable_pool_initializer import add_variables_to_pool -from core.workflow.workflow_entry import WorkflowEntry -from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run from models.workflow import Workflow from tasks.mail_human_input_delivery_task import dispatch_human_input_email_task @@ -127,24 +131,25 @@ class WorkflowBasedAppRunner: if not isinstance(graph_config.get("edges"), list): raise ValueError("edges in workflow graph must be a list") - # Create required parameters for Graph.init - graph_init_params = GraphInitParams( + # Create explicit graph init context for Graph.init. + run_context = build_dify_run_context( + tenant_id=tenant_id or "", + app_id=self._app_id, + user_id=user_id, + user_from=user_from, + invoke_from=invoke_from, + ) + graph_init_context = DifyGraphInitContext( workflow_id=workflow_id, graph_config=graph_config, - run_context=build_dify_run_context( - tenant_id=tenant_id or "", - app_id=self._app_id, - user_id=user_id, - user_from=user_from, - invoke_from=invoke_from, - ), + run_context=run_context, call_depth=0, ) # Use the provided graph_runtime_state for consistent state management - node_factory = DifyNodeFactory( - graph_init_params=graph_init_params, + node_factory = DifyNodeFactory.from_graph_init_context( + graph_init_context=graph_init_context, graph_runtime_state=graph_runtime_state, ) @@ -289,22 +294,23 @@ class WorkflowBasedAppRunner: typed_node_configs = [NodeConfigDictAdapter.validate_python(node) for node in node_configs] - # Create required parameters for Graph.init - graph_init_params = GraphInitParams( + # Create explicit graph init context for Graph.init. + run_context = build_dify_run_context( + tenant_id=workflow.tenant_id, + app_id=self._app_id, + user_id=user_id, + user_from=UserFrom.ACCOUNT, + invoke_from=InvokeFrom.DEBUGGER, + ) + graph_init_context = DifyGraphInitContext( workflow_id=workflow.id, graph_config=graph_config, - run_context=build_dify_run_context( - tenant_id=workflow.tenant_id, - app_id=self._app_id, - user_id=user_id, - user_from=UserFrom.ACCOUNT, - invoke_from=InvokeFrom.DEBUGGER, - ), + run_context=run_context, call_depth=0, ) - node_factory = DifyNodeFactory( - graph_init_params=graph_init_params, + node_factory = DifyNodeFactory.from_graph_init_context( + graph_init_context=graph_init_context, graph_runtime_state=graph_runtime_state, ) diff --git a/api/core/app/entities/app_invoke_entities.py b/api/core/app/entities/app_invoke_entities.py index 0cdbb5f50a..09992f4bbf 100644 --- a/api/core/app/entities/app_invoke_entities.py +++ b/api/core/app/entities/app_invoke_entities.py @@ -1,14 +1,14 @@ from collections.abc import Mapping, Sequence from enum import StrEnum -from typing import TYPE_CHECKING, Any, Optional +from typing import TYPE_CHECKING, Any -from graphon.file import File, FileUploadConfig -from graphon.model_runtime.entities.model_entities import AIModelEntity from pydantic import BaseModel, ConfigDict, Field, ValidationInfo, field_validator from constants import UUID_NIL from core.app.app_config.entities import EasyUIBasedAppConfig, WorkflowUIBasedAppConfig from core.entities.provider_configuration import ProviderModelBundle +from graphon.file import File, FileUploadConfig +from graphon.model_runtime.entities.model_entities import AIModelEntity if TYPE_CHECKING: from core.ops.ops_trace_manager import TraceQueueManager @@ -131,7 +131,7 @@ class AppGenerateEntity(BaseModel): extras: dict[str, Any] = Field(default_factory=dict) # tracing instance - trace_manager: Optional["TraceQueueManager"] = Field(default=None, exclude=True, repr=False) + trace_manager: "TraceQueueManager | None" = Field(default=None, exclude=True, repr=False) class EasyUIBasedAppGenerateEntity(AppGenerateEntity): diff --git a/api/core/app/entities/queue_entities.py b/api/core/app/entities/queue_entities.py index 482f995d8e..221b7fb058 100644 --- a/api/core/app/entities/queue_entities.py +++ b/api/core/app/entities/queue_entities.py @@ -3,14 +3,14 @@ from datetime import datetime from enum import StrEnum, auto from typing import Any -from graphon.entities import WorkflowStartReason -from graphon.entities.pause_reason import PauseReason -from graphon.enums import NodeType, WorkflowNodeExecutionMetadataKey -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk from pydantic import BaseModel, ConfigDict, Field from core.app.entities.agent_strategy import AgentStrategyInfo from core.rag.entities import RetrievalSourceMetadata +from graphon.entities import WorkflowStartReason +from graphon.entities.pause_reason import PauseReason +from graphon.enums import NodeType, WorkflowNodeExecutionMetadataKey +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk class QueueEvent(StrEnum): diff --git a/api/core/app/entities/task_entities.py b/api/core/app/entities/task_entities.py index 62df85b13f..6e4ca69cf0 100644 --- a/api/core/app/entities/task_entities.py +++ b/api/core/app/entities/task_entities.py @@ -2,14 +2,14 @@ from collections.abc import Mapping, Sequence from enum import StrEnum from typing import Any -from graphon.entities import WorkflowStartReason -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.nodes.human_input.entities import FormInput, UserAction from pydantic import BaseModel, ConfigDict, Field from core.app.entities.agent_strategy import AgentStrategyInfo from core.rag.entities import RetrievalSourceMetadata +from graphon.entities import WorkflowStartReason +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.nodes.human_input.entities import FormInput, UserAction class AnnotationReplyAccount(BaseModel): @@ -521,7 +521,7 @@ class IterationNodeStartStreamResponse(StreamResponse): node_type: str title: str created_at: int - extras: dict = Field(default_factory=dict) + extras: dict[str, Any] = Field(default_factory=dict) metadata: Mapping = {} inputs: Mapping = {} inputs_truncated: bool = False @@ -547,7 +547,7 @@ class IterationNodeNextStreamResponse(StreamResponse): title: str index: int created_at: int - extras: dict = Field(default_factory=dict) + extras: dict[str, Any] = Field(default_factory=dict) event: StreamEvent = StreamEvent.ITERATION_NEXT workflow_run_id: str @@ -571,7 +571,7 @@ class IterationNodeCompletedStreamResponse(StreamResponse): outputs: Mapping | None = None outputs_truncated: bool = False created_at: int - extras: dict | None = None + extras: dict[str, Any] | None = None inputs: Mapping | None = None inputs_truncated: bool = False status: WorkflowNodeExecutionStatus @@ -602,7 +602,7 @@ class LoopNodeStartStreamResponse(StreamResponse): node_type: str title: str created_at: int - extras: dict = Field(default_factory=dict) + extras: dict[str, Any] = Field(default_factory=dict) metadata: Mapping = {} inputs: Mapping = {} inputs_truncated: bool = False @@ -653,7 +653,7 @@ class LoopNodeCompletedStreamResponse(StreamResponse): outputs: Mapping | None = None outputs_truncated: bool = False created_at: int - extras: dict | None = None + extras: dict[str, Any] | None = None inputs: Mapping | None = None inputs_truncated: bool = False status: WorkflowNodeExecutionStatus diff --git a/api/core/app/features/hosting_moderation/hosting_moderation.py b/api/core/app/features/hosting_moderation/hosting_moderation.py index d2d2fea4fb..d59f5125e3 100644 --- a/api/core/app/features/hosting_moderation/hosting_moderation.py +++ b/api/core/app/features/hosting_moderation/hosting_moderation.py @@ -1,9 +1,8 @@ import logging -from graphon.model_runtime.entities.message_entities import PromptMessage - from core.app.entities.app_invoke_entities import EasyUIBasedAppGenerateEntity from core.helper import moderation +from graphon.model_runtime.entities.message_entities import PromptMessage logger = logging.getLogger(__name__) diff --git a/api/core/app/layers/conversation_variable_persist_layer.py b/api/core/app/layers/conversation_variable_persist_layer.py index e09869f5f8..d5e6b04a4a 100644 --- a/api/core/app/layers/conversation_variable_persist_layer.py +++ b/api/core/app/layers/conversation_variable_persist_layer.py @@ -9,11 +9,10 @@ scope updates that matter to chat applications. import logging -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphEngineEvent, NodeRunVariableUpdatedEvent - from core.workflow.system_variables import SystemVariableKey, get_system_text from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphEngineEvent, NodeRunVariableUpdatedEvent from services.conversation_variable_updater import ConversationVariableUpdater logger = logging.getLogger(__name__) diff --git a/api/core/app/layers/pause_state_persist_layer.py b/api/core/app/layers/pause_state_persist_layer.py index c027f42788..9811f9f830 100644 --- a/api/core/app/layers/pause_state_persist_layer.py +++ b/api/core/app/layers/pause_state_persist_layer.py @@ -1,14 +1,14 @@ from dataclasses import dataclass from typing import Annotated, Literal, Self -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphEngineEvent, GraphRunPausedEvent from pydantic import BaseModel, Field from sqlalchemy import Engine from sqlalchemy.orm import Session, sessionmaker from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity from core.workflow.system_variables import SystemVariableKey, get_system_text +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphEngineEvent, GraphRunPausedEvent from models.model import AppMode from repositories.api_workflow_run_repository import APIWorkflowRunRepository from repositories.factory import DifyAPIRepositoryFactory diff --git a/api/core/app/layers/timeslice_layer.py b/api/core/app/layers/timeslice_layer.py index 8c8daf8712..bb9fc1b6fa 100644 --- a/api/core/app/layers/timeslice_layer.py +++ b/api/core/app/layers/timeslice_layer.py @@ -3,10 +3,10 @@ import uuid from typing import ClassVar from apscheduler.schedulers.background import BackgroundScheduler # type: ignore + from graphon.graph_engine.entities.commands import CommandType, GraphEngineCommand from graphon.graph_engine.layers import GraphEngineLayer from graphon.graph_events import GraphEngineEvent - from services.workflow.entities import WorkflowScheduleCFSPlanEntity from services.workflow.scheduler import CFSPlanScheduler, SchedulerCommand diff --git a/api/core/app/layers/trigger_post_layer.py b/api/core/app/layers/trigger_post_layer.py index 77c7bec67e..b60fe82ffe 100644 --- a/api/core/app/layers/trigger_post_layer.py +++ b/api/core/app/layers/trigger_post_layer.py @@ -2,12 +2,12 @@ import logging from datetime import UTC, datetime from typing import Any, ClassVar -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent, GraphRunPausedEvent, GraphRunSucceededEvent from pydantic import TypeAdapter from core.db.session_factory import session_factory from core.workflow.system_variables import SystemVariableKey, get_system_text +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent, GraphRunPausedEvent, GraphRunSucceededEvent from models.enums import WorkflowTriggerStatus from repositories.sqlalchemy_workflow_trigger_log_repository import SQLAlchemyWorkflowTriggerLogRepository from tasks.workflow_cfs_scheduler.cfs_scheduler import AsyncWorkflowCFSPlanEntity diff --git a/api/core/app/llm/model_access.py b/api/core/app/llm/model_access.py index 278d0cb30b..c49c4eb0ac 100644 --- a/api/core/app/llm/model_access.py +++ b/api/core/app/llm/model_access.py @@ -2,16 +2,15 @@ from __future__ import annotations from typing import Any -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.nodes.llm.entities import ModelConfig -from graphon.nodes.llm.exc import LLMModeRequiredError, ModelNotExistError -from graphon.nodes.llm.protocols import CredentialsProvider - from core.app.entities.app_invoke_entities import DifyRunContext, ModelConfigWithCredentialsEntity from core.errors.error import ProviderTokenNotInitError from core.model_manager import ModelInstance, ModelManager from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager from core.provider_manager import ProviderManager +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.nodes.llm.entities import ModelConfig +from graphon.nodes.llm.exc import LLMModeRequiredError, ModelNotExistError +from graphon.nodes.llm.protocols import CredentialsProvider class DifyCredentialsProvider: diff --git a/api/core/app/llm/quota.py b/api/core/app/llm/quota.py index 182f1b767d..b6039e1e4e 100644 --- a/api/core/app/llm/quota.py +++ b/api/core/app/llm/quota.py @@ -1,6 +1,5 @@ -from graphon.model_runtime.entities.llm_entities import LLMUsage from sqlalchemy import update -from sqlalchemy.orm import Session +from sqlalchemy.orm import sessionmaker from configs import dify_config from core.entities.model_entities import ModelStatus @@ -8,6 +7,7 @@ from core.entities.provider_entities import ProviderQuotaType, QuotaUnit from core.errors.error import QuotaExceededError from core.model_manager import ModelInstance from extensions.ext_database import db +from graphon.model_runtime.entities.llm_entities import LLMUsage from libs.datetime_utils import naive_utc_now from models.provider import Provider, ProviderType from models.provider_ids import ModelProviderID @@ -57,37 +57,37 @@ def deduct_llm_quota(*, tenant_id: str, model_instance: ModelInstance, usage: LL used_quota = 1 if used_quota is not None and system_configuration.current_quota_type is not None: - if system_configuration.current_quota_type == ProviderQuotaType.TRIAL: - from services.credit_pool_service import CreditPoolService + match system_configuration.current_quota_type: + case ProviderQuotaType.TRIAL: + from services.credit_pool_service import CreditPoolService - CreditPoolService.check_and_deduct_credits( - tenant_id=tenant_id, - credits_required=used_quota, - ) - elif system_configuration.current_quota_type == ProviderQuotaType.PAID: - from services.credit_pool_service import CreditPoolService - - CreditPoolService.check_and_deduct_credits( - tenant_id=tenant_id, - credits_required=used_quota, - pool_type="paid", - ) - else: - with Session(db.engine) as session: - stmt = ( - update(Provider) - .where( - Provider.tenant_id == tenant_id, - # TODO: Use provider name with prefix after the data migration. - Provider.provider_name == ModelProviderID(model_instance.provider).provider_name, - Provider.provider_type == ProviderType.SYSTEM.value, - Provider.quota_type == system_configuration.current_quota_type, - Provider.quota_limit > Provider.quota_used, - ) - .values( - quota_used=Provider.quota_used + used_quota, - last_used=naive_utc_now(), - ) + CreditPoolService.check_and_deduct_credits( + tenant_id=tenant_id, + credits_required=used_quota, ) - session.execute(stmt) - session.commit() + case ProviderQuotaType.PAID: + from services.credit_pool_service import CreditPoolService + + CreditPoolService.check_and_deduct_credits( + tenant_id=tenant_id, + credits_required=used_quota, + pool_type="paid", + ) + case ProviderQuotaType.FREE: + with sessionmaker(bind=db.engine).begin() as session: + stmt = ( + update(Provider) + .where( + Provider.tenant_id == tenant_id, + # TODO: Use provider name with prefix after the data migration. + Provider.provider_name == ModelProviderID(model_instance.provider).provider_name, + Provider.provider_type == ProviderType.SYSTEM.value, + Provider.quota_type == system_configuration.current_quota_type, + Provider.quota_limit > Provider.quota_used, + ) + .values( + quota_used=Provider.quota_used + used_quota, + last_used=naive_utc_now(), + ) + ) + session.execute(stmt) diff --git a/api/core/app/task_pipeline/based_generate_task_pipeline.py b/api/core/app/task_pipeline/based_generate_task_pipeline.py index 10b9c36d3e..9e688589db 100644 --- a/api/core/app/task_pipeline/based_generate_task_pipeline.py +++ b/api/core/app/task_pipeline/based_generate_task_pipeline.py @@ -1,7 +1,6 @@ import logging import time -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from sqlalchemy import select from sqlalchemy.orm import Session @@ -18,6 +17,7 @@ from core.app.entities.task_entities import ( ) from core.errors.error import QuotaExceededError from core.moderation.output_moderation import ModerationRule, OutputModeration +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from models.enums import MessageStatus from models.model import Message diff --git a/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py b/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py index 9df78a7830..dfe6133cb6 100644 --- a/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py +++ b/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py @@ -4,15 +4,8 @@ from collections.abc import Generator from threading import Thread from typing import Any, cast -from graphon.file import FileTransferMethod -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - TextPromptMessageContent, -) -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import select -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from constants.tts_auto_play_timeout import TTS_AUTO_PLAY_TIMEOUT, TTS_AUTO_PLAY_YIELD_CPU_TIME from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom @@ -60,6 +53,13 @@ from core.prompt.utils.prompt_message_util import PromptMessageUtil from core.prompt.utils.prompt_template_parser import PromptTemplateParser from events.message_event import message_was_created from extensions.ext_database import db +from graphon.file import FileTransferMethod +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + TextPromptMessageContent, +) +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from libs.datetime_utils import naive_utc_now from models.model import AppMode, Conversation, Message, MessageAgentThought, MessageFile, UploadFile @@ -266,9 +266,8 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline): event = message.event if isinstance(event, QueueErrorEvent): - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: err = self.handle_error(event=event, session=session, message_id=self._message_id) - session.commit() yield self.error_to_stream_response(err) break elif isinstance(event, QueueStopEvent | QueueMessageEndEvent): @@ -288,10 +287,9 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline): answer=output_moderation_answer ) - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: # Save message self._save_message(session=session, trace_manager=trace_manager) - session.commit() message_end_resp = self._message_end_to_stream_response() yield message_end_resp elif isinstance(event, QueueRetrieverResourcesEvent): diff --git a/api/core/app/task_pipeline/message_file_utils.py b/api/core/app/task_pipeline/message_file_utils.py index b23a33923b..1dd713821f 100644 --- a/api/core/app/task_pipeline/message_file_utils.py +++ b/api/core/app/task_pipeline/message_file_utils.py @@ -1,9 +1,8 @@ from typing import TypedDict +from core.tools.signature import sign_tool_file from graphon.file import FileTransferMethod from graphon.file import helpers as file_helpers - -from core.tools.signature import sign_tool_file from models.model import MessageFile, UploadFile MAX_TOOL_FILE_EXTENSION_LENGTH = 10 @@ -40,41 +39,44 @@ def prepare_file_dict(message_file: MessageFile, upload_files_map: dict[str, Upl size = 0 extension = "" - if message_file.transfer_method == FileTransferMethod.REMOTE_URL: - url = message_file.url - if message_file.url: - filename = message_file.url.split("/")[-1].split("?")[0] - if "." in filename: - extension = "." + filename.rsplit(".", 1)[1] - elif message_file.transfer_method == FileTransferMethod.LOCAL_FILE: - if upload_file: - url = file_helpers.get_signed_file_url(upload_file_id=str(upload_file.id)) - filename = upload_file.name - mime_type = upload_file.mime_type or "application/octet-stream" - size = upload_file.size or 0 - extension = f".{upload_file.extension}" if upload_file.extension else "" - elif message_file.upload_file_id: - url = file_helpers.get_signed_file_url(upload_file_id=str(message_file.upload_file_id)) - elif message_file.transfer_method == FileTransferMethod.TOOL_FILE and message_file.url: - if message_file.url.startswith(("http://", "https://")): + match message_file.transfer_method: + case FileTransferMethod.REMOTE_URL: url = message_file.url - filename = message_file.url.split("/")[-1].split("?")[0] - if "." in filename: - extension = "." + filename.rsplit(".", 1)[1] - else: - url_parts = message_file.url.split("/") - if url_parts: - file_part = url_parts[-1].split("?")[0] - if "." in file_part: - tool_file_id, ext = file_part.rsplit(".", 1) - extension = f".{ext}" - if len(extension) > MAX_TOOL_FILE_EXTENSION_LENGTH: + if message_file.url: + filename = message_file.url.split("/")[-1].split("?")[0] + if "." in filename: + extension = "." + filename.rsplit(".", 1)[1] + case FileTransferMethod.LOCAL_FILE: + if upload_file: + url = file_helpers.get_signed_file_url(upload_file_id=str(upload_file.id)) + filename = upload_file.name + mime_type = upload_file.mime_type or "application/octet-stream" + size = upload_file.size or 0 + extension = f".{upload_file.extension}" if upload_file.extension else "" + elif message_file.upload_file_id: + url = file_helpers.get_signed_file_url(upload_file_id=str(message_file.upload_file_id)) + case FileTransferMethod.TOOL_FILE if message_file.url: + if message_file.url.startswith(("http://", "https://")): + url = message_file.url + filename = message_file.url.split("/")[-1].split("?")[0] + if "." in filename: + extension = "." + filename.rsplit(".", 1)[1] + else: + url_parts = message_file.url.split("/") + if url_parts: + file_part = url_parts[-1].split("?")[0] + if "." in file_part: + tool_file_id, ext = file_part.rsplit(".", 1) + extension = f".{ext}" + if len(extension) > MAX_TOOL_FILE_EXTENSION_LENGTH: + extension = ".bin" + else: + tool_file_id = file_part extension = ".bin" - else: - tool_file_id = file_part - extension = ".bin" - url = sign_tool_file(tool_file_id=tool_file_id, extension=extension) - filename = file_part + url = sign_tool_file(tool_file_id=tool_file_id, extension=extension) + filename = file_part + case FileTransferMethod.TOOL_FILE | FileTransferMethod.DATASOURCE_FILE: + pass transfer_method_value = message_file.transfer_method.value remote_url = message_file.url if message_file.transfer_method == FileTransferMethod.REMOTE_URL else "" diff --git a/api/core/app/workflow/file_runtime.py b/api/core/app/workflow/file_runtime.py index 8604235ef2..68e5e5f0c8 100644 --- a/api/core/app/workflow/file_runtime.py +++ b/api/core/app/workflow/file_runtime.py @@ -9,10 +9,6 @@ import urllib.parse from collections.abc import Generator from typing import TYPE_CHECKING, Literal -from graphon.file import FileTransferMethod -from graphon.file.protocols import HttpResponseProtocol, WorkflowFileRuntimeProtocol -from graphon.file.runtime import set_workflow_file_runtime - from configs import dify_config from core.app.file_access import DatabaseFileAccessController, FileAccessControllerProtocol from core.db.session_factory import session_factory @@ -20,6 +16,9 @@ from core.helper.ssrf_proxy import ssrf_proxy from core.tools.signature import sign_tool_file from core.workflow.file_reference import parse_file_reference from extensions.ext_storage import storage +from graphon.file import FileTransferMethod +from graphon.file.protocols import HttpResponseProtocol, WorkflowFileRuntimeProtocol +from graphon.file.runtime import set_workflow_file_runtime if TYPE_CHECKING: from graphon.file import File diff --git a/api/core/app/workflow/layers/llm_quota.py b/api/core/app/workflow/layers/llm_quota.py index c577ce0754..4a7918032e 100644 --- a/api/core/app/workflow/layers/llm_quota.py +++ b/api/core/app/workflow/layers/llm_quota.py @@ -7,17 +7,16 @@ This layer centralizes model-quota deduction outside node implementations. import logging from typing import TYPE_CHECKING, cast, final, override +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.app.llm import deduct_llm_quota, ensure_llm_quota_available +from core.errors.error import QuotaExceededError +from core.model_manager import ModelInstance from graphon.enums import BuiltinNodeTypes from graphon.graph_engine.entities.commands import AbortCommand, CommandType from graphon.graph_engine.layers import GraphEngineLayer from graphon.graph_events import GraphEngineEvent, GraphNodeEventBase, NodeRunSucceededEvent from graphon.nodes.base.node import Node -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.app.llm import deduct_llm_quota, ensure_llm_quota_available -from core.errors.error import QuotaExceededError -from core.model_manager import ModelInstance - if TYPE_CHECKING: from graphon.nodes.llm.node import LLMNode from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode diff --git a/api/core/app/workflow/layers/observability.py b/api/core/app/workflow/layers/observability.py index 99e8015c0b..8b5a5b9d7f 100644 --- a/api/core/app/workflow/layers/observability.py +++ b/api/core/app/workflow/layers/observability.py @@ -12,10 +12,6 @@ from contextvars import Token from dataclasses import dataclass from typing import cast, final, override -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node from opentelemetry import context as context_api from opentelemetry.trace import Span, SpanKind, Tracer, get_tracer, set_span_in_context @@ -28,6 +24,10 @@ from extensions.otel.parser import ( ToolNodeOTelParser, ) from extensions.otel.runtime import is_instrument_flag_enabled +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node logger = logging.getLogger(__name__) diff --git a/api/core/app/workflow/layers/persistence.py b/api/core/app/workflow/layers/persistence.py index ada065a943..87f005a250 100644 --- a/api/core/app/workflow/layers/persistence.py +++ b/api/core/app/workflow/layers/persistence.py @@ -14,6 +14,13 @@ from dataclasses import dataclass from datetime import datetime from typing import Any, Union +from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity +from core.ops.entities.trace_entity import TraceTaskName +from core.ops.ops_trace_manager import TraceQueueManager, TraceTask +from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository +from core.workflow.system_variables import SystemVariableKey +from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run from graphon.entities import WorkflowExecution, WorkflowNodeExecution from graphon.enums import ( WorkflowExecutionStatus, @@ -38,14 +45,6 @@ from graphon.graph_events import ( NodeRunSucceededEvent, ) from graphon.node_events import NodeRunResult - -from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity -from core.ops.entities.trace_entity import TraceTaskName -from core.ops.ops_trace_manager import TraceQueueManager, TraceTask -from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository -from core.workflow.system_variables import SystemVariableKey -from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID -from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run from libs.datetime_utils import naive_utc_now diff --git a/api/core/base/tts/app_generator_tts_publisher.py b/api/core/base/tts/app_generator_tts_publisher.py index 3d8a7a54f3..9e3c187210 100644 --- a/api/core/base/tts/app_generator_tts_publisher.py +++ b/api/core/base/tts/app_generator_tts_publisher.py @@ -6,9 +6,6 @@ import re import threading from collections.abc import Iterable -from graphon.model_runtime.entities.message_entities import TextPromptMessageContent -from graphon.model_runtime.entities.model_entities import ModelType - from core.app.entities.queue_entities import ( MessageQueueMessage, QueueAgentMessageEvent, @@ -18,6 +15,8 @@ from core.app.entities.queue_entities import ( WorkflowQueueMessage, ) from core.model_manager import ModelInstance, ModelManager +from graphon.model_runtime.entities.message_entities import TextPromptMessageContent +from graphon.model_runtime.entities.model_entities import ModelType class AudioTrunk: diff --git a/api/core/datasource/datasource_manager.py b/api/core/datasource/datasource_manager.py index a5297fa33a..dc831e5cac 100644 --- a/api/core/datasource/datasource_manager.py +++ b/api/core/datasource/datasource_manager.py @@ -3,9 +3,6 @@ from collections.abc import Generator from threading import Lock from typing import Any, cast -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.file import File, FileTransferMethod, FileType, get_file_type_by_mime_type -from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent from sqlalchemy import select import contexts @@ -31,6 +28,9 @@ from core.plugin.impl.datasource import PluginDatasourceManager from core.workflow.file_reference import build_file_reference from core.workflow.nodes.datasource.entities import DatasourceParameter, OnlineDriveDownloadFileParam from factories import file_factory +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.file import File, FileTransferMethod, FileType, get_file_type_by_mime_type +from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent from models.model import UploadFile from models.tools import ToolFile from services.datasource_provider_service import DatasourceProviderService diff --git a/api/core/datasource/entities/api_entities.py b/api/core/datasource/entities/api_entities.py index 14d1af2e8b..352e6bfd49 100644 --- a/api/core/datasource/entities/api_entities.py +++ b/api/core/datasource/entities/api_entities.py @@ -1,10 +1,10 @@ -from typing import Literal, Optional +from typing import Any, Literal, TypedDict -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from core.datasource.entities.datasource_entities import DatasourceParameter -from core.tools.entities.common_entities import I18nObject +from core.tools.entities.common_entities import I18nObject, I18nObjectDict +from graphon.model_runtime.utils.encoders import jsonable_encoder class DatasourceApiEntity(BaseModel): @@ -14,10 +14,27 @@ class DatasourceApiEntity(BaseModel): description: I18nObject parameters: list[DatasourceParameter] | None = None labels: list[str] = Field(default_factory=list) - output_schema: dict | None = None + output_schema: dict[str, Any] | None = None -ToolProviderTypeApiLiteral = Optional[Literal["builtin", "api", "workflow"]] +ToolProviderTypeApiLiteral = Literal["builtin", "api", "workflow"] | None + + +class DatasourceProviderApiEntityDict(TypedDict): + id: str + author: str + name: str + plugin_id: str | None + plugin_unique_identifier: str | None + description: I18nObjectDict + icon: str | dict + label: I18nObjectDict + type: str + team_credentials: dict[str, Any] | None + is_team_authorization: bool + allow_delete: bool + datasources: list[Any] + labels: list[str] class DatasourceProviderApiEntity(BaseModel): @@ -28,8 +45,8 @@ class DatasourceProviderApiEntity(BaseModel): icon: str | dict label: I18nObject # label type: str - masked_credentials: dict | None = None - original_credentials: dict | None = None + masked_credentials: dict[str, Any] | None = None + original_credentials: dict[str, Any] | None = None is_team_authorization: bool = False allow_delete: bool = True plugin_id: str | None = Field(default="", description="The plugin id of the datasource") @@ -42,7 +59,7 @@ class DatasourceProviderApiEntity(BaseModel): def convert_none_to_empty_list(cls, v): return v if v is not None else [] - def to_dict(self) -> dict: + def to_dict(self) -> DatasourceProviderApiEntityDict: # ------------- # overwrite datasource parameter types for temp fix datasources = jsonable_encoder(self.datasources) @@ -53,7 +70,7 @@ class DatasourceProviderApiEntity(BaseModel): parameter["type"] = "files" # ------------- - return { + result: DatasourceProviderApiEntityDict = { "id": self.id, "author": self.author, "name": self.name, @@ -69,3 +86,4 @@ class DatasourceProviderApiEntity(BaseModel): "datasources": datasources, "labels": self.labels, } + return result diff --git a/api/core/datasource/entities/datasource_entities.py b/api/core/datasource/entities/datasource_entities.py index f20bab53f0..443b503a69 100644 --- a/api/core/datasource/entities/datasource_entities.py +++ b/api/core/datasource/entities/datasource_entities.py @@ -2,7 +2,7 @@ from __future__ import annotations import enum from enum import StrEnum -from typing import Any +from typing import Any, TypedDict from pydantic import BaseModel, Field, ValidationInfo, field_validator from yarl import URL @@ -129,7 +129,7 @@ class DatasourceEntity(BaseModel): identity: DatasourceIdentity parameters: list[DatasourceParameter] = Field(default_factory=list) description: I18nObject = Field(..., description="The label of the datasource") - output_schema: dict | None = None + output_schema: dict[str, Any] | None = None @field_validator("parameters", mode="before") @classmethod @@ -179,6 +179,12 @@ class DatasourceProviderEntityWithPlugin(DatasourceProviderEntity): datasources: list[DatasourceEntity] = Field(default_factory=list) +class DatasourceInvokeMetaDict(TypedDict): + time_cost: float + error: str | None + tool_config: dict[str, Any] | None + + class DatasourceInvokeMeta(BaseModel): """ Datasource invoke meta @@ -186,7 +192,7 @@ class DatasourceInvokeMeta(BaseModel): time_cost: float = Field(..., description="The time cost of the tool invoke") error: str | None = None - tool_config: dict | None = None + tool_config: dict[str, Any] | None = None @classmethod def empty(cls) -> DatasourceInvokeMeta: @@ -202,12 +208,13 @@ class DatasourceInvokeMeta(BaseModel): """ return cls(time_cost=0.0, error=error, tool_config={}) - def to_dict(self) -> dict: - return { + def to_dict(self) -> DatasourceInvokeMetaDict: + result: DatasourceInvokeMetaDict = { "time_cost": self.time_cost, "error": self.error, "tool_config": self.tool_config, } + return result class DatasourceLabel(BaseModel): @@ -235,7 +242,7 @@ class OnlineDocumentPage(BaseModel): page_id: str = Field(..., description="The page id") page_name: str = Field(..., description="The page title") - page_icon: dict | None = Field(None, description="The page icon") + page_icon: dict[str, Any] | None = Field(None, description="The page icon") type: str = Field(..., description="The type of the page") last_edited_time: str = Field(..., description="The last edited time") parent_id: str | None = Field(None, description="The parent page id") @@ -294,7 +301,7 @@ class GetWebsiteCrawlRequest(BaseModel): Get website crawl request """ - crawl_parameters: dict = Field(..., description="The crawl parameters") + crawl_parameters: dict[str, Any] = Field(..., description="The crawl parameters") class WebSiteInfoDetail(BaseModel): @@ -351,7 +358,7 @@ class OnlineDriveFileBucket(BaseModel): bucket: str | None = Field(None, description="The file bucket") files: list[OnlineDriveFile] = Field(..., description="The file list") is_truncated: bool = Field(False, description="Whether the result is truncated") - next_page_parameters: dict | None = Field(None, description="Parameters for fetching the next page") + next_page_parameters: dict[str, Any] | None = Field(None, description="Parameters for fetching the next page") class OnlineDriveBrowseFilesRequest(BaseModel): @@ -362,7 +369,7 @@ class OnlineDriveBrowseFilesRequest(BaseModel): bucket: str | None = Field(None, description="The file bucket") prefix: str = Field(..., description="The parent folder ID") max_keys: int = Field(20, description="Page size for pagination") - next_page_parameters: dict | None = Field(None, description="Parameters for fetching the next page") + next_page_parameters: dict[str, Any] | None = Field(None, description="Parameters for fetching the next page") class OnlineDriveBrowseFilesResponse(BaseModel): diff --git a/api/core/datasource/utils/message_transformer.py b/api/core/datasource/utils/message_transformer.py index 04f15dee31..6a3f9e684a 100644 --- a/api/core/datasource/utils/message_transformer.py +++ b/api/core/datasource/utils/message_transformer.py @@ -2,11 +2,10 @@ import logging from collections.abc import Generator from mimetypes import guess_extension, guess_type -from graphon.file import File, FileTransferMethod, FileType - from core.datasource.entities.datasource_entities import DatasourceMessage from core.tools.tool_file_manager import ToolFileManager from core.workflow.file_reference import parse_file_reference +from graphon.file import File, FileTransferMethod, FileType from models.tools import ToolFile logger = logging.getLogger(__name__) @@ -71,8 +70,8 @@ class DatasourceFileMessageTransformer: if not isinstance(message.message, DatasourceMessage.BlobMessage): raise ValueError("unexpected message type") - # FIXME: should do a type check here. - assert isinstance(message.message.blob, bytes) + if not isinstance(message.message.blob, bytes): + raise TypeError(f"Expected blob to be bytes, got {type(message.message.blob).__name__}") tool_file_manager = ToolFileManager() blob_tool_file: ToolFile | None = tool_file_manager.create_file_by_raw( user_id=user_id, diff --git a/api/core/entities/execution_extra_content.py b/api/core/entities/execution_extra_content.py index d304c982cd..04ae193396 100644 --- a/api/core/entities/execution_extra_content.py +++ b/api/core/entities/execution_extra_content.py @@ -3,9 +3,9 @@ from __future__ import annotations from collections.abc import Mapping, Sequence from typing import Any, TypeAlias -from graphon.nodes.human_input.entities import FormInput, UserAction from pydantic import BaseModel, ConfigDict, Field +from graphon.nodes.human_input.entities import FormInput, UserAction from models.execution_extra_content import ExecutionContentType diff --git a/api/core/entities/knowledge_entities.py b/api/core/entities/knowledge_entities.py index b1ba3c3e2a..a13938f3fb 100644 --- a/api/core/entities/knowledge_entities.py +++ b/api/core/entities/knowledge_entities.py @@ -1,3 +1,5 @@ +from typing import Any + from pydantic import BaseModel, Field, field_validator @@ -37,7 +39,7 @@ class PipelineDocument(BaseModel): id: str position: int data_source_type: str - data_source_info: dict | None = None + data_source_info: dict[str, Any] | None = None name: str indexing_status: str error: str | None = None diff --git a/api/core/entities/mcp_provider.py b/api/core/entities/mcp_provider.py index a440829b46..bfa4f56915 100644 --- a/api/core/entities/mcp_provider.py +++ b/api/core/entities/mcp_provider.py @@ -6,7 +6,6 @@ from enum import StrEnum from typing import TYPE_CHECKING, Any from urllib.parse import urlparse -from graphon.file import helpers as file_helpers from pydantic import BaseModel from configs import dify_config @@ -16,6 +15,7 @@ from core.helper.provider_cache import NoOpProviderCredentialCache from core.mcp.types import OAuthClientInformation, OAuthClientMetadata, OAuthTokens from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderType +from graphon.file import helpers as file_helpers if TYPE_CHECKING: from models.tools import MCPToolProvider diff --git a/api/core/entities/model_entities.py b/api/core/entities/model_entities.py index 84d95c38c6..e99a131500 100644 --- a/api/core/entities/model_entities.py +++ b/api/core/entities/model_entities.py @@ -1,10 +1,11 @@ from collections.abc import Sequence from enum import StrEnum, auto +from pydantic import BaseModel, ConfigDict + from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import ModelType, ProviderModel from graphon.model_runtime.entities.provider_entities import ProviderEntity -from pydantic import BaseModel, ConfigDict class ModelStatus(StrEnum): diff --git a/api/core/entities/provider_configuration.py b/api/core/entities/provider_configuration.py index f3b2c31465..1ab66cceee 100644 --- a/api/core/entities/provider_configuration.py +++ b/api/core/entities/provider_configuration.py @@ -6,17 +6,8 @@ import re from collections import defaultdict from collections.abc import Iterator, Sequence from json import JSONDecodeError +from typing import Any -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FormType, - ProviderEntity, -) -from graphon.model_runtime.model_providers.__base.ai_model import AIModel -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory -from graphon.model_runtime.runtime import ModelRuntime from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator from sqlalchemy import func, select from sqlalchemy.orm import Session @@ -33,6 +24,16 @@ from core.entities.provider_entities import ( from core.helper import encrypter from core.helper.model_provider_cache import ProviderCredentialsCache, ProviderCredentialsCacheType from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FormType, + ProviderEntity, +) +from graphon.model_runtime.model_providers.__base.ai_model import AIModel +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory +from graphon.model_runtime.runtime import ModelRuntime from libs.datetime_utils import naive_utc_now from models.engine import db from models.enums import CredentialSourceType @@ -111,7 +112,7 @@ class ProviderConfiguration(BaseModel): return ModelProviderFactory(model_runtime=self._bound_model_runtime) return create_plugin_model_provider_factory(tenant_id=self.tenant_id) - def get_current_credentials(self, model_type: ModelType, model: str) -> dict | None: + def get_current_credentials(self, model_type: ModelType, model: str) -> dict[str, Any] | None: """ Get current credentials. @@ -233,7 +234,7 @@ class ProviderConfiguration(BaseModel): return session.execute(stmt).scalar_one_or_none() - def _get_specific_provider_credential(self, credential_id: str) -> dict | None: + def _get_specific_provider_credential(self, credential_id: str) -> dict[str, Any] | None: """ Get a specific provider credential by ID. :param credential_id: Credential ID @@ -297,7 +298,7 @@ class ProviderConfiguration(BaseModel): stmt = stmt.where(ProviderCredential.id != exclude_id) return session.execute(stmt).scalar_one_or_none() is not None - def get_provider_credential(self, credential_id: str | None = None) -> dict | None: + def get_provider_credential(self, credential_id: str | None = None) -> dict[str, Any] | None: """ Get provider credentials. @@ -317,7 +318,9 @@ class ProviderConfiguration(BaseModel): else [], ) - def validate_provider_credentials(self, credentials: dict, credential_id: str = "", session: Session | None = None): + def validate_provider_credentials( + self, credentials: dict[str, Any], credential_id: str = "", session: Session | None = None + ): """ Validate custom credentials. :param credentials: provider credentials @@ -447,7 +450,7 @@ class ProviderConfiguration(BaseModel): provider_names.append(model_provider_id.provider_name) return provider_names - def create_provider_credential(self, credentials: dict, credential_name: str | None): + def create_provider_credential(self, credentials: dict[str, Any], credential_name: str | None): """ Add custom provider credentials. :param credentials: provider credentials @@ -515,7 +518,7 @@ class ProviderConfiguration(BaseModel): def update_provider_credential( self, - credentials: dict, + credentials: dict[str, Any], credential_id: str, credential_name: str | None, ): @@ -760,7 +763,7 @@ class ProviderConfiguration(BaseModel): def _get_specific_custom_model_credential( self, model_type: ModelType, model: str, credential_id: str - ) -> dict | None: + ) -> dict[str, Any] | None: """ Get a specific provider credential by ID. :param credential_id: Credential ID @@ -832,7 +835,9 @@ class ProviderConfiguration(BaseModel): stmt = stmt.where(ProviderModelCredential.id != exclude_id) return session.execute(stmt).scalar_one_or_none() is not None - def get_custom_model_credential(self, model_type: ModelType, model: str, credential_id: str | None) -> dict | None: + def get_custom_model_credential( + self, model_type: ModelType, model: str, credential_id: str | None + ) -> dict[str, Any] | None: """ Get custom model credentials. @@ -872,7 +877,7 @@ class ProviderConfiguration(BaseModel): self, model_type: ModelType, model: str, - credentials: dict, + credentials: dict[str, Any], credential_id: str = "", session: Session | None = None, ): @@ -939,7 +944,7 @@ class ProviderConfiguration(BaseModel): return _validate(new_session) def create_custom_model_credential( - self, model_type: ModelType, model: str, credentials: dict, credential_name: str | None + self, model_type: ModelType, model: str, credentials: dict[str, Any], credential_name: str | None ) -> None: """ Create a custom model credential. @@ -1002,7 +1007,12 @@ class ProviderConfiguration(BaseModel): raise def update_custom_model_credential( - self, model_type: ModelType, model: str, credentials: dict, credential_name: str | None, credential_id: str + self, + model_type: ModelType, + model: str, + credentials: dict[str, Any], + credential_name: str | None, + credential_id: str, ) -> None: """ Update a custom model credential. @@ -1412,7 +1422,9 @@ class ProviderConfiguration(BaseModel): # Get model instance of LLM return model_provider_factory.get_model_type_instance(provider=self.provider.provider, model_type=model_type) - def get_model_schema(self, model_type: ModelType, model: str, credentials: dict | None) -> AIModelEntity | None: + def get_model_schema( + self, model_type: ModelType, model: str, credentials: dict[str, Any] | None + ) -> AIModelEntity | None: """ Get model schema """ @@ -1471,7 +1483,7 @@ class ProviderConfiguration(BaseModel): return secret_input_form_variables - def obfuscated_credentials(self, credentials: dict, credential_form_schemas: list[CredentialFormSchema]): + def obfuscated_credentials(self, credentials: dict[str, Any], credential_form_schemas: list[CredentialFormSchema]): """ Obfuscated credentials. diff --git a/api/core/entities/provider_entities.py b/api/core/entities/provider_entities.py index 2c8767a32b..72b29c2277 100644 --- a/api/core/entities/provider_entities.py +++ b/api/core/entities/provider_entities.py @@ -1,9 +1,8 @@ from __future__ import annotations from enum import StrEnum, auto -from typing import Union +from typing import Any, Union -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, ConfigDict, Field from core.entities.parameter_entities import ( @@ -13,6 +12,7 @@ from core.entities.parameter_entities import ( ToolSelectorScope, ) from core.tools.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType class ProviderQuotaType(StrEnum): @@ -88,7 +88,7 @@ class SystemConfiguration(BaseModel): enabled: bool current_quota_type: ProviderQuotaType | None = None quota_configurations: list[QuotaConfiguration] = [] - credentials: dict | None = None + credentials: dict[str, Any] | None = None class CustomProviderConfiguration(BaseModel): @@ -96,7 +96,7 @@ class CustomProviderConfiguration(BaseModel): Model class for provider custom configuration. """ - credentials: dict + credentials: dict[str, Any] current_credential_id: str | None = None current_credential_name: str | None = None available_credentials: list[CredentialConfiguration] = [] @@ -109,7 +109,7 @@ class CustomModelConfiguration(BaseModel): model: str model_type: ModelType - credentials: dict | None + credentials: dict[str, Any] | None current_credential_id: str | None = None current_credential_name: str | None = None available_model_credentials: list[CredentialConfiguration] = [] @@ -145,7 +145,7 @@ class ModelLoadBalancingConfiguration(BaseModel): id: str name: str - credentials: dict + credentials: dict[str, Any] credential_source_type: str | None = None credential_id: str | None = None diff --git a/api/core/extension/api_based_extension_requestor.py b/api/core/extension/api_based_extension_requestor.py index f9e6099049..01139d07e2 100644 --- a/api/core/extension/api_based_extension_requestor.py +++ b/api/core/extension/api_based_extension_requestor.py @@ -1,4 +1,4 @@ -from typing import cast +from typing import Any, cast import httpx @@ -14,7 +14,7 @@ class APIBasedExtensionRequestor: self.api_endpoint = api_endpoint self.api_key = api_key - def request(self, point: APIBasedExtensionPoint, params: dict): + def request(self, point: APIBasedExtensionPoint, params: dict[str, Any]) -> dict[str, Any]: """ Request the api. @@ -49,4 +49,4 @@ class APIBasedExtensionRequestor: if response.status_code != 200: raise ValueError(f"request error, status_code: {response.status_code}, content: {response.text[:100]}") - return cast(dict, response.json()) + return cast(dict[str, Any], response.json()) diff --git a/api/core/extension/extensible.py b/api/core/extension/extensible.py index c2789a7a35..c08e319aac 100644 --- a/api/core/extension/extensible.py +++ b/api/core/extension/extensible.py @@ -21,8 +21,8 @@ class ExtensionModule(StrEnum): class ModuleExtension(BaseModel): extension_class: Any | None = None name: str - label: dict | None = None - form_schema: list | None = None + label: dict[str, Any] | None = None + form_schema: list[dict[str, Any]] | None = None builtin: bool = True position: int | None = None @@ -32,9 +32,9 @@ class Extensible: name: str tenant_id: str - config: dict | None = None + config: dict[str, Any] | None = None - def __init__(self, tenant_id: str, config: dict | None = None): + def __init__(self, tenant_id: str, config: dict[str, Any] | None = None): self.tenant_id = tenant_id self.config = config diff --git a/api/core/external_data_tool/api/api.py b/api/core/external_data_tool/api/api.py index 564801f189..8ce068cfbb 100644 --- a/api/core/external_data_tool/api/api.py +++ b/api/core/external_data_tool/api/api.py @@ -1,3 +1,6 @@ +from collections.abc import Mapping +from typing import Any, TypedDict + from sqlalchemy import select from core.extension.api_based_extension_requestor import APIBasedExtensionRequestor @@ -7,6 +10,16 @@ from extensions.ext_database import db from models.api_based_extension import APIBasedExtension, APIBasedExtensionPoint +class ApiToolConfig(TypedDict, total=False): + """Expected config shape for ApiExternalDataTool. + + Not used directly in method signatures (base class accepts dict[str, Any]); + kept here to document the keys this tool reads from config. + """ + + api_based_extension_id: str + + class ApiExternalDataTool(ExternalDataTool): """ The api external data tool. @@ -16,7 +29,7 @@ class ApiExternalDataTool(ExternalDataTool): """the unique name of external data tool""" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -37,7 +50,7 @@ class ApiExternalDataTool(ExternalDataTool): if not api_based_extension: raise ValueError("api_based_extension_id is invalid") - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: Mapping[str, Any], query: str | None = None) -> str: """ Query the external data tool. diff --git a/api/core/external_data_tool/base.py b/api/core/external_data_tool/base.py index cbec2e4e42..12bea4e9e5 100644 --- a/api/core/external_data_tool/base.py +++ b/api/core/external_data_tool/base.py @@ -1,4 +1,6 @@ from abc import ABC, abstractmethod +from collections.abc import Mapping +from typing import Any from core.extension.extensible import Extensible, ExtensionModule @@ -15,14 +17,14 @@ class ExternalDataTool(Extensible, ABC): variable: str """the tool variable name of app tool""" - def __init__(self, tenant_id: str, app_id: str, variable: str, config: dict | None = None): + def __init__(self, tenant_id: str, app_id: str, variable: str, config: dict[str, Any] | None = None): super().__init__(tenant_id, config) self.app_id = app_id self.variable = variable @classmethod @abstractmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -33,7 +35,7 @@ class ExternalDataTool(Extensible, ABC): raise NotImplementedError @abstractmethod - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: Mapping[str, Any], query: str | None = None) -> str: """ Query the external data tool. diff --git a/api/core/external_data_tool/factory.py b/api/core/external_data_tool/factory.py index 6c542d681b..f404aa7286 100644 --- a/api/core/external_data_tool/factory.py +++ b/api/core/external_data_tool/factory.py @@ -6,14 +6,14 @@ from extensions.ext_code_based_extension import code_based_extension class ExternalDataToolFactory: - def __init__(self, name: str, tenant_id: str, app_id: str, variable: str, config: dict): + def __init__(self, name: str, tenant_id: str, app_id: str, variable: str, config: dict[str, Any]): extension_class = code_based_extension.extension_class(ExtensionModule.EXTERNAL_DATA_TOOL, name) self.__extension_instance = extension_class( tenant_id=tenant_id, app_id=app_id, variable=variable, config=config ) @classmethod - def validate_config(cls, name: str, tenant_id: str, config: dict): + def validate_config(cls, name: str, tenant_id: str, config: dict[str, Any]) -> None: """ Validate the incoming form config data. diff --git a/api/core/helper/code_executor/code_executor.py b/api/core/helper/code_executor/code_executor.py index 35bfcfb6a5..951e065b2c 100644 --- a/api/core/helper/code_executor/code_executor.py +++ b/api/core/helper/code_executor/code_executor.py @@ -4,7 +4,6 @@ from threading import Lock from typing import Any import httpx -from graphon.nodes.code.entities import CodeLanguage from pydantic import BaseModel from yarl import URL @@ -14,6 +13,7 @@ from core.helper.code_executor.jinja2.jinja2_transformer import Jinja2TemplateTr from core.helper.code_executor.python3.python3_transformer import Python3TemplateTransformer from core.helper.code_executor.template_transformer import TemplateTransformer from core.helper.http_client_pooling import get_pooled_http_client +from graphon.nodes.code.entities import CodeLanguage logger = logging.getLogger(__name__) code_execution_endpoint_url = URL(str(dify_config.CODE_EXECUTION_ENDPOINT)) diff --git a/api/core/helper/model_provider_cache.py b/api/core/helper/model_provider_cache.py index 00fcfe0b80..10d79a8239 100644 --- a/api/core/helper/model_provider_cache.py +++ b/api/core/helper/model_provider_cache.py @@ -1,6 +1,7 @@ import json from enum import StrEnum from json import JSONDecodeError +from typing import Any from extensions.ext_redis import redis_client @@ -15,7 +16,7 @@ class ProviderCredentialsCache: def __init__(self, tenant_id: str, identity_id: str, cache_type: ProviderCredentialsCacheType): self.cache_key = f"{cache_type}_credentials:tenant_id:{tenant_id}:id:{identity_id}" - def get(self) -> dict | None: + def get(self) -> dict[str, Any] | None: """ Get cached model provider credentials. @@ -33,7 +34,7 @@ class ProviderCredentialsCache: else: return None - def set(self, credentials: dict): + def set(self, credentials: dict[str, Any]): """ Cache model provider credentials. diff --git a/api/core/helper/moderation.py b/api/core/helper/moderation.py index a1e782a094..dc37a36943 100644 --- a/api/core/helper/moderation.py +++ b/api/core/helper/moderation.py @@ -2,14 +2,13 @@ import logging import secrets from typing import cast -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeBadRequestError -from graphon.model_runtime.model_providers.__base.moderation_model import ModerationModel - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.entities import DEFAULT_PLUGIN_ID from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory from extensions.ext_hosting_provider import hosting_configuration +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeBadRequestError +from graphon.model_runtime.model_providers.__base.moderation_model import ModerationModel from models.provider import ProviderType logger = logging.getLogger(__name__) diff --git a/api/core/helper/provider_cache.py b/api/core/helper/provider_cache.py index ffb5148386..9f167ca49c 100644 --- a/api/core/helper/provider_cache.py +++ b/api/core/helper/provider_cache.py @@ -17,7 +17,7 @@ class ProviderCredentialsCache(ABC): """Generate cache key based on subclass implementation""" pass - def get(self) -> dict | None: + def get(self) -> dict[str, Any] | None: """Get cached provider credentials""" cached_credentials = redis_client.get(self.cache_key) if cached_credentials: @@ -71,7 +71,7 @@ class ToolProviderCredentialsCache(ProviderCredentialsCache): class NoOpProviderCredentialCache: """No-op provider credential cache""" - def get(self) -> dict | None: + def get(self) -> dict[str, Any] | None: """Get cached provider credentials""" return None diff --git a/api/core/helper/tool_parameter_cache.py b/api/core/helper/tool_parameter_cache.py index 54674d4ff6..bf5bf9af03 100644 --- a/api/core/helper/tool_parameter_cache.py +++ b/api/core/helper/tool_parameter_cache.py @@ -1,6 +1,7 @@ import json from enum import StrEnum from json import JSONDecodeError +from typing import Any from extensions.ext_redis import redis_client @@ -18,7 +19,7 @@ class ToolParameterCache: f":identity_id:{identity_id}" ) - def get(self) -> dict | None: + def get(self) -> dict[str, Any] | None: """ Get cached model provider credentials. @@ -36,7 +37,7 @@ class ToolParameterCache: else: return None - def set(self, parameters: dict): + def set(self, parameters: dict[str, Any]): """Cache model provider credentials.""" redis_client.setex(self.cache_key, 86400, json.dumps(parameters)) diff --git a/api/core/hosting_configuration.py b/api/core/hosting_configuration.py index 60f5434bc1..8bcb899b23 100644 --- a/api/core/hosting_configuration.py +++ b/api/core/hosting_configuration.py @@ -1,10 +1,12 @@ +from typing import Any + from flask import Flask -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel from configs import dify_config from core.entities import DEFAULT_PLUGIN_ID from core.entities.provider_entities import ProviderQuotaType, QuotaUnit, RestrictModel +from graphon.model_runtime.entities.model_entities import ModelType class HostingQuota(BaseModel): @@ -28,7 +30,7 @@ class FreeHostingQuota(HostingQuota): class HostingProvider(BaseModel): enabled: bool = False - credentials: dict | None = None + credentials: dict[str, Any] | None = None quota_unit: QuotaUnit | None = None quotas: list[HostingQuota] = [] diff --git a/api/core/indexing_runner.py b/api/core/indexing_runner.py index b8d5ca2f50..b6e33396d1 100644 --- a/api/core/indexing_runner.py +++ b/api/core/indexing_runner.py @@ -9,7 +9,6 @@ from collections.abc import Mapping from typing import Any from flask import Flask, current_app -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import delete, func, select, update from sqlalchemy.orm.exc import ObjectDeletedError @@ -35,6 +34,7 @@ from core.tools.utils.web_reader_tool import get_image_upload_file_ids from extensions.ext_database import db from extensions.ext_redis import redis_client from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType from libs import helper from libs.datetime_utils import naive_utc_now from models import Account @@ -735,7 +735,9 @@ class IndexingRunner: @staticmethod def _update_document_index_status( - document_id: str, after_indexing_status: IndexingStatus, extra_update_params: dict | None = None + document_id: str, + after_indexing_status: IndexingStatus, + extra_update_params: Mapping[Any, Any] | None = None, ): """ Update the document indexing status. @@ -762,7 +764,7 @@ class IndexingRunner: db.session.commit() @staticmethod - def _update_segments_by_document(dataset_document_id: str, update_params: dict): + def _update_segments_by_document(dataset_document_id: str, update_params: Mapping[Any, Any]): """ Update the document segment by document id. """ diff --git a/api/core/llm_generator/llm_generator.py b/api/core/llm_generator/llm_generator.py index aa258c9f89..348526b0ef 100644 --- a/api/core/llm_generator/llm_generator.py +++ b/api/core/llm_generator/llm_generator.py @@ -2,14 +2,9 @@ import json import logging import re from collections.abc import Sequence -from typing import Protocol, TypedDict, cast +from typing import Any, Protocol, TypedDict, cast import json_repair -from graphon.enums import WorkflowNodeExecutionMetadataKey -from graphon.model_runtime.entities.llm_entities import LLMResult -from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from sqlalchemy import select from core.app.app_config.entities import ModelConfig @@ -35,6 +30,11 @@ from core.ops.utils import measure_time from core.prompt.utils.prompt_template_parser import PromptTemplateParser from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.enums import WorkflowNodeExecutionMetadataKey +from graphon.model_runtime.entities.llm_entities import LLMResult +from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from models import App, Message, WorkflowNodeExecutionModel from models.workflow import Workflow @@ -533,7 +533,7 @@ class LLMGenerator: def __instruction_modify_common( tenant_id: str, model_config: ModelConfig, - last_run: dict | None, + last_run: dict[str, Any] | None, current: str | None, error_message: str | None, instruction: str, diff --git a/api/core/llm_generator/output_parser/structured_output.py b/api/core/llm_generator/output_parser/structured_output.py index a1710f11ac..d2e375626f 100644 --- a/api/core/llm_generator/output_parser/structured_output.py +++ b/api/core/llm_generator/output_parser/structured_output.py @@ -5,6 +5,11 @@ from enum import StrEnum from typing import Any, Literal, cast, overload import json_repair +from pydantic import TypeAdapter, ValidationError + +from core.llm_generator.output_parser.errors import OutputParserError +from core.llm_generator.prompts import STRUCTURED_OUTPUT_PROMPT +from core.model_manager import ModelInstance from graphon.model_runtime.callbacks.base_callback import Callback from graphon.model_runtime.entities.llm_entities import ( LLMResult, @@ -21,11 +26,6 @@ from graphon.model_runtime.entities.message_entities import ( TextPromptMessageContent, ) from graphon.model_runtime.entities.model_entities import AIModelEntity, ParameterRule -from pydantic import TypeAdapter, ValidationError - -from core.llm_generator.output_parser.errors import OutputParserError -from core.llm_generator.prompts import STRUCTURED_OUTPUT_PROMPT -from core.model_manager import ModelInstance class ResponseFormat(StrEnum): @@ -200,9 +200,9 @@ def _handle_native_json_schema( provider: str, model_schema: AIModelEntity, structured_output_schema: Mapping, - model_parameters: dict, + model_parameters: dict[str, Any], rules: list[ParameterRule], -): +) -> dict[str, Any]: """ Handle structured output for models with native JSON schema support. @@ -224,7 +224,7 @@ def _handle_native_json_schema( return model_parameters -def _set_response_format(model_parameters: dict, rules: list): +def _set_response_format(model_parameters: dict[str, Any], rules: list[ParameterRule]) -> None: """ Set the appropriate response format parameter based on model rules. @@ -326,7 +326,7 @@ def _prepare_schema_for_model(provider: str, model_schema: AIModelEntity, schema return {"schema": processed_schema, "name": "llm_response"} -def remove_additional_properties(schema: dict): +def remove_additional_properties(schema: dict[str, Any]) -> None: """ Remove additionalProperties fields from JSON schema. Used for models like Gemini that don't support this property. @@ -349,7 +349,7 @@ def remove_additional_properties(schema: dict): remove_additional_properties(item) -def convert_boolean_to_string(schema: dict): +def convert_boolean_to_string(schema: dict[str, Any]) -> None: """ Convert boolean type specifications to string in JSON schema. diff --git a/api/core/logging/structured_formatter.py b/api/core/logging/structured_formatter.py index 9baf6c4682..ae7be91c17 100644 --- a/api/core/logging/structured_formatter.py +++ b/api/core/logging/structured_formatter.py @@ -3,7 +3,7 @@ import logging import traceback from datetime import UTC, datetime -from typing import Any, TypedDict +from typing import Any, NotRequired, TypedDict import orjson @@ -16,6 +16,19 @@ class IdentityDict(TypedDict, total=False): user_type: str +class LogDict(TypedDict): + ts: str + severity: str + service: str + caller: str + message: str + trace_id: NotRequired[str] + span_id: NotRequired[str] + identity: NotRequired[IdentityDict] + attributes: NotRequired[dict[str, Any]] + stack_trace: NotRequired[str] + + class StructuredJSONFormatter(logging.Formatter): """ JSON log formatter following the specified schema: @@ -55,9 +68,9 @@ class StructuredJSONFormatter(logging.Formatter): return json.dumps(log_dict, default=str, ensure_ascii=False) - def _build_log_dict(self, record: logging.LogRecord) -> dict[str, Any]: + def _build_log_dict(self, record: logging.LogRecord) -> LogDict: # Core fields - log_dict: dict[str, Any] = { + log_dict: LogDict = { "ts": datetime.now(UTC).isoformat(timespec="milliseconds").replace("+00:00", "Z"), "severity": self.SEVERITY_MAP.get(record.levelno, "INFO"), "service": self._service_name, diff --git a/api/core/mcp/auth/auth_flow.py b/api/core/mcp/auth/auth_flow.py index d015769b54..1d8356acf6 100644 --- a/api/core/mcp/auth/auth_flow.py +++ b/api/core/mcp/auth/auth_flow.py @@ -146,7 +146,7 @@ def discover_protected_resource_metadata( return ProtectedResourceMetadata.model_validate(response.json()) elif response.status_code == 404: continue # Try next URL - except (RequestError, ValidationError): + except (RequestError, ValidationError, json.JSONDecodeError): continue # Try next URL return None @@ -166,7 +166,7 @@ def discover_oauth_authorization_server_metadata( return OAuthMetadata.model_validate(response.json()) elif response.status_code == 404: continue # Try next URL - except (RequestError, ValidationError): + except (RequestError, ValidationError, json.JSONDecodeError): continue # Try next URL return None @@ -276,7 +276,7 @@ def check_support_resource_discovery(server_url: str) -> tuple[bool, str]: else: return False, "" return False, "" - except RequestError: + except (RequestError, json.JSONDecodeError, IndexError): # Not support resource discovery, fall back to well-known OAuth metadata return False, "" diff --git a/api/core/mcp/auth_client.py b/api/core/mcp/auth_client.py index d8724b8de5..173913196e 100644 --- a/api/core/mcp/auth_client.py +++ b/api/core/mcp/auth_client.py @@ -122,7 +122,7 @@ class MCPClientWithAuthRetry(MCPClient): logger.exception("Authentication retry failed") raise MCPAuthError(f"Authentication retry failed: {e}") from e - def _execute_with_retry(self, func: Callable[..., Any], *args, **kwargs) -> Any: + def _execute_with_retry[**P, R](self, func: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R: """ Execute a function with authentication retry logic. diff --git a/api/core/mcp/entities.py b/api/core/mcp/entities.py index d6d3a677c6..21edc86a57 100644 --- a/api/core/mcp/entities.py +++ b/api/core/mcp/entities.py @@ -1,6 +1,6 @@ from dataclasses import dataclass from enum import StrEnum -from typing import Any, TypeVar +from typing import Any from pydantic import BaseModel @@ -9,12 +9,9 @@ from core.mcp.types import LATEST_PROTOCOL_VERSION, OAuthClientInformation, OAut SUPPORTED_PROTOCOL_VERSIONS: list[str] = ["2024-11-05", "2025-03-26", LATEST_PROTOCOL_VERSION] -SessionT = TypeVar("SessionT", bound=BaseSession[Any, Any, Any, Any, Any]) -LifespanContextT = TypeVar("LifespanContextT") - @dataclass -class RequestContext[SessionT: BaseSession[Any, Any, Any, Any, Any], LifespanContextT]: +class RequestContext[SessionT: BaseSession, LifespanContextT]: request_id: RequestId meta: RequestParams.Meta | None session: SessionT diff --git a/api/core/mcp/server/streamable_http.py b/api/core/mcp/server/streamable_http.py index 8de002ae55..884610ca82 100644 --- a/api/core/mcp/server/streamable_http.py +++ b/api/core/mcp/server/streamable_http.py @@ -3,12 +3,11 @@ import logging from collections.abc import Mapping from typing import Any, NotRequired, TypedDict, cast -from graphon.variables.input_entities import VariableEntity, VariableEntityType - from configs import dify_config from core.app.entities.app_invoke_entities import InvokeFrom from core.app.features.rate_limiting.rate_limit import RateLimitGenerator from core.mcp import types as mcp_types +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.model import App, AppMCPServer, AppMode, EndUser from services.app_generate_service import AppGenerateService @@ -187,15 +186,16 @@ def build_parameter_schema( def prepare_tool_arguments(app: App, arguments: dict[str, Any]) -> ToolArgumentsDict: """Prepare arguments based on app mode""" - if app.mode == AppMode.WORKFLOW: - return {"inputs": arguments} - elif app.mode == AppMode.COMPLETION: - return {"query": "", "inputs": arguments} - else: - # Chat modes - create a copy to avoid modifying original dict - args_copy = arguments.copy() - query = args_copy.pop("query", "") - return {"query": query, "inputs": args_copy} + match app.mode: + case AppMode.WORKFLOW: + return {"inputs": arguments} + case AppMode.COMPLETION: + return {"query": "", "inputs": arguments} + case _: + # Chat modes - create a copy to avoid modifying original dict + args_copy = arguments.copy() + query = args_copy.pop("query", "") + return {"query": query, "inputs": args_copy} def extract_answer_from_response(app: App, response: Any) -> str: @@ -229,17 +229,13 @@ def process_streaming_response(response: RateLimitGenerator) -> str: def process_mapping_response(app: App, response: Mapping) -> str: """Process mapping response based on app mode""" - if app.mode in { - AppMode.ADVANCED_CHAT, - AppMode.COMPLETION, - AppMode.CHAT, - AppMode.AGENT_CHAT, - }: - return response.get("answer", "") - elif app.mode == AppMode.WORKFLOW: - return json.dumps(response["data"]["outputs"], ensure_ascii=False) - else: - raise ValueError("Invalid app mode: " + str(app.mode)) + match app.mode: + case AppMode.ADVANCED_CHAT | AppMode.COMPLETION | AppMode.CHAT | AppMode.AGENT_CHAT: + return response.get("answer", "") + case AppMode.WORKFLOW: + return json.dumps(response["data"]["outputs"], ensure_ascii=False) + case _: + raise ValueError("Invalid app mode: " + str(app.mode)) def convert_input_form_to_parameters( diff --git a/api/core/mcp/session/base_session.py b/api/core/mcp/session/base_session.py index 0b3aa79838..70d45b15c4 100644 --- a/api/core/mcp/session/base_session.py +++ b/api/core/mcp/session/base_session.py @@ -55,7 +55,7 @@ class RequestResponder[ReceiveRequestT: ClientRequest | ServerRequest, SendResul request: ReceiveRequestT _session: "BaseSession[Any, Any, SendResultT, ReceiveRequestT, Any]" - _on_complete: Callable[["RequestResponder[ReceiveRequestT, SendResultT]"], Any] + _on_complete: Callable[["RequestResponder[ReceiveRequestT, SendResultT]"], object] def __init__( self, @@ -63,7 +63,7 @@ class RequestResponder[ReceiveRequestT: ClientRequest | ServerRequest, SendResul request_meta: RequestParams.Meta | None, request: ReceiveRequestT, session: "BaseSession[Any, Any, SendResultT, ReceiveRequestT, Any]", - on_complete: Callable[["RequestResponder[ReceiveRequestT, SendResultT]"], Any], + on_complete: Callable[["RequestResponder[ReceiveRequestT, SendResultT]"], object], ): self.request_id = request_id self.request_meta = request_meta diff --git a/api/core/mcp/types.py b/api/core/mcp/types.py index 2653d20a7d..10e3082aa3 100644 --- a/api/core/mcp/types.py +++ b/api/core/mcp/types.py @@ -31,7 +31,6 @@ ProgressToken = str | int Cursor = str Role = Literal["user", "assistant"] RequestId = Annotated[int | str, Field(union_mode="left_to_right")] -type AnyFunction = Callable[..., Any] class RequestParams(BaseModel): diff --git a/api/core/mcp/utils.py b/api/core/mcp/utils.py index 7e35044176..7b5a7635f1 100644 --- a/api/core/mcp/utils.py +++ b/api/core/mcp/utils.py @@ -4,11 +4,11 @@ from contextlib import AbstractContextManager import httpx import httpx_sse -from graphon.model_runtime.utils.encoders import jsonable_encoder from httpx_sse import connect_sse from configs import dify_config from core.mcp.types import ErrorData, JSONRPCError +from graphon.model_runtime.utils.encoders import jsonable_encoder HTTP_REQUEST_NODE_SSL_VERIFY = dify_config.HTTP_REQUEST_NODE_SSL_VERIFY diff --git a/api/core/memory/token_buffer_memory.py b/api/core/memory/token_buffer_memory.py index 09c84538a9..d840ee213c 100644 --- a/api/core/memory/token_buffer_memory.py +++ b/api/core/memory/token_buffer_memory.py @@ -1,5 +1,14 @@ from collections.abc import Sequence +from sqlalchemy import select +from sqlalchemy.orm import sessionmaker + +from core.app.app_config.features.file_upload.manager import FileUploadConfigManager +from core.app.file_access import DatabaseFileAccessController +from core.model_manager import ModelInstance +from core.prompt.utils.extract_thread_messages import extract_thread_messages +from extensions.ext_database import db +from factories import file_factory from graphon.file import file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -10,15 +19,6 @@ from graphon.model_runtime.entities import ( UserPromptMessage, ) from graphon.model_runtime.entities.message_entities import PromptMessageContentUnionTypes -from sqlalchemy import select -from sqlalchemy.orm import sessionmaker - -from core.app.app_config.features.file_upload.manager import FileUploadConfigManager -from core.app.file_access import DatabaseFileAccessController -from core.model_manager import ModelInstance -from core.prompt.utils.extract_thread_messages import extract_thread_messages -from extensions.ext_database import db -from factories import file_factory from models.model import AppMode, Conversation, Message, MessageFile from models.workflow import Workflow from repositories.api_workflow_run_repository import APIWorkflowRunRepository @@ -61,27 +61,28 @@ class TokenBufferMemory: :param is_user_message: whether this is a user message :return: PromptMessage """ - if self.conversation.mode in {AppMode.AGENT_CHAT, AppMode.COMPLETION, AppMode.CHAT}: - file_extra_config = FileUploadConfigManager.convert(self.conversation.model_config) - elif self.conversation.mode in {AppMode.ADVANCED_CHAT, AppMode.WORKFLOW}: - app = self.conversation.app - if not app: - raise ValueError("App not found for conversation") + match self.conversation.mode: + case AppMode.AGENT_CHAT | AppMode.COMPLETION | AppMode.CHAT: + file_extra_config = FileUploadConfigManager.convert(self.conversation.model_config) + case AppMode.ADVANCED_CHAT | AppMode.WORKFLOW: + app = self.conversation.app + if not app: + raise ValueError("App not found for conversation") - if not message.workflow_run_id: - raise ValueError("Workflow run ID not found") + if not message.workflow_run_id: + raise ValueError("Workflow run ID not found") - workflow_run = self.workflow_run_repo.get_workflow_run_by_id( - tenant_id=app.tenant_id, app_id=app.id, run_id=message.workflow_run_id - ) - if not workflow_run: - raise ValueError(f"Workflow run not found: {message.workflow_run_id}") - workflow = db.session.scalar(select(Workflow).where(Workflow.id == workflow_run.workflow_id)) - if not workflow: - raise ValueError(f"Workflow not found: {workflow_run.workflow_id}") - file_extra_config = FileUploadConfigManager.convert(workflow.features_dict, is_vision=False) - else: - raise AssertionError(f"Invalid app mode: {self.conversation.mode}") + workflow_run = self.workflow_run_repo.get_workflow_run_by_id( + tenant_id=app.tenant_id, app_id=app.id, run_id=message.workflow_run_id + ) + if not workflow_run: + raise ValueError(f"Workflow run not found: {message.workflow_run_id}") + workflow = db.session.scalar(select(Workflow).where(Workflow.id == workflow_run.workflow_id)) + if not workflow: + raise ValueError(f"Workflow not found: {workflow_run.workflow_id}") + file_extra_config = FileUploadConfigManager.convert(workflow.features_dict, is_vision=False) + case _: + raise AssertionError(f"Invalid app mode: {self.conversation.mode}") detail = ImagePromptMessageContent.DETAIL.HIGH if file_extra_config and app_record: diff --git a/api/core/model_manager.py b/api/core/model_manager.py index 7a214777bc..d8d8dfedd8 100644 --- a/api/core/model_manager.py +++ b/api/core/model_manager.py @@ -2,20 +2,6 @@ import logging from collections.abc import Callable, Generator, Iterable, Mapping, Sequence from typing import IO, Any, Literal, Optional, Union, cast, overload -from graphon.model_runtime.callbacks.base_callback import Callback -from graphon.model_runtime.entities.llm_entities import LLMResult -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelFeature, ModelType -from graphon.model_runtime.entities.rerank_entities import RerankResult -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeConnectionError, InvokeRateLimitError -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel -from graphon.model_runtime.model_providers.__base.moderation_model import ModerationModel -from graphon.model_runtime.model_providers.__base.rerank_model import RerankModel -from graphon.model_runtime.model_providers.__base.speech2text_model import Speech2TextModel -from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel -from graphon.model_runtime.model_providers.__base.tts_model import TTSModel - from configs import dify_config from core.entities import PluginCredentialType from core.entities.embedding_type import EmbeddingInputType @@ -25,6 +11,19 @@ from core.errors.error import ProviderTokenNotInitError from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager from core.provider_manager import ProviderManager from extensions.ext_redis import redis_client +from graphon.model_runtime.callbacks.base_callback import Callback +from graphon.model_runtime.entities.llm_entities import LLMResult +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelFeature, ModelType +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeConnectionError, InvokeRateLimitError +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel +from graphon.model_runtime.model_providers.__base.moderation_model import ModerationModel +from graphon.model_runtime.model_providers.__base.rerank_model import RerankModel +from graphon.model_runtime.model_providers.__base.speech2text_model import Speech2TextModel +from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel +from graphon.model_runtime.model_providers.__base.tts_model import TTSModel from models.provider import ProviderType logger = logging.getLogger(__name__) @@ -77,7 +76,7 @@ class ModelInstance: @staticmethod def _get_load_balancing_manager( - configuration: ProviderConfiguration, model_type: ModelType, model: str, credentials: dict + configuration: ProviderConfiguration, model_type: ModelType, model: str, credentials: dict[str, Any] ) -> Optional["LBModelManager"]: """ Get load balancing model credentials @@ -115,7 +114,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: Sequence[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: Literal[True] = True, @@ -126,7 +125,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: list[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: Literal[False] = False, @@ -137,7 +136,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: list[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: bool = True, @@ -147,7 +146,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: Sequence[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: Sequence[str] | None = None, stream: bool = True, @@ -172,10 +171,10 @@ class ModelInstance: function=self.model_type_instance.invoke, model=self.model_name, credentials=self.credentials, - prompt_messages=prompt_messages, + prompt_messages=list(prompt_messages), model_parameters=model_parameters, - tools=tools, - stop=stop, + tools=list(tools) if tools else None, + stop=list(stop) if stop else None, stream=stream, callbacks=callbacks, ), @@ -193,15 +192,12 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, LargeLanguageModel): raise Exception("Model type instance is not LargeLanguageModel") - return cast( - int, - self._round_robin_invoke( - function=self.model_type_instance.get_num_tokens, - model=self.model_name, - credentials=self.credentials, - prompt_messages=prompt_messages, - tools=tools, - ), + return self._round_robin_invoke( + function=self.model_type_instance.get_num_tokens, + model=self.model_name, + credentials=self.credentials, + prompt_messages=list(prompt_messages), + tools=list(tools) if tools else None, ) def invoke_text_embedding( @@ -216,15 +212,12 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, TextEmbeddingModel): raise Exception("Model type instance is not TextEmbeddingModel") - return cast( - EmbeddingResult, - self._round_robin_invoke( - function=self.model_type_instance.invoke, - model=self.model_name, - credentials=self.credentials, - texts=texts, - input_type=input_type, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke, + model=self.model_name, + credentials=self.credentials, + texts=texts, + input_type=input_type, ) def invoke_multimodal_embedding( @@ -241,15 +234,12 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, TextEmbeddingModel): raise Exception("Model type instance is not TextEmbeddingModel") - return cast( - EmbeddingResult, - self._round_robin_invoke( - function=self.model_type_instance.invoke, - model=self.model_name, - credentials=self.credentials, - multimodel_documents=multimodel_documents, - input_type=input_type, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke, + model=self.model_name, + credentials=self.credentials, + multimodel_documents=multimodel_documents, + input_type=input_type, ) def get_text_embedding_num_tokens(self, texts: list[str]) -> list[int]: @@ -261,14 +251,11 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, TextEmbeddingModel): raise Exception("Model type instance is not TextEmbeddingModel") - return cast( - list[int], - self._round_robin_invoke( - function=self.model_type_instance.get_num_tokens, - model=self.model_name, - credentials=self.credentials, - texts=texts, - ), + return self._round_robin_invoke( + function=self.model_type_instance.get_num_tokens, + model=self.model_name, + credentials=self.credentials, + texts=texts, ) def invoke_rerank( @@ -289,23 +276,20 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, RerankModel): raise Exception("Model type instance is not RerankModel") - return cast( - RerankResult, - self._round_robin_invoke( - function=self.model_type_instance.invoke, - model=self.model_name, - credentials=self.credentials, - query=query, - docs=docs, - score_threshold=score_threshold, - top_n=top_n, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke, + model=self.model_name, + credentials=self.credentials, + query=query, + docs=docs, + score_threshold=score_threshold, + top_n=top_n, ) def invoke_multimodal_rerank( self, - query: dict, - docs: list[dict], + query: MultimodalRerankInput, + docs: list[MultimodalRerankInput], score_threshold: float | None = None, top_n: int | None = None, ) -> RerankResult: @@ -320,17 +304,14 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, RerankModel): raise Exception("Model type instance is not RerankModel") - return cast( - RerankResult, - self._round_robin_invoke( - function=self.model_type_instance.invoke_multimodal_rerank, - model=self.model_name, - credentials=self.credentials, - query=query, - docs=docs, - score_threshold=score_threshold, - top_n=top_n, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke_multimodal_rerank, + model=self.model_name, + credentials=self.credentials, + query=query, + docs=docs, + score_threshold=score_threshold, + top_n=top_n, ) def invoke_moderation(self, text: str) -> bool: @@ -342,14 +323,11 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, ModerationModel): raise Exception("Model type instance is not ModerationModel") - return cast( - bool, - self._round_robin_invoke( - function=self.model_type_instance.invoke, - model=self.model_name, - credentials=self.credentials, - text=text, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke, + model=self.model_name, + credentials=self.credentials, + text=text, ) def invoke_speech2text(self, file: IO[bytes]) -> str: @@ -361,14 +339,11 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, Speech2TextModel): raise Exception("Model type instance is not Speech2TextModel") - return cast( - str, - self._round_robin_invoke( - function=self.model_type_instance.invoke, - model=self.model_name, - credentials=self.credentials, - file=file, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke, + model=self.model_name, + credentials=self.credentials, + file=file, ) def invoke_tts(self, content_text: str, voice: str = "") -> Iterable[bytes]: @@ -381,18 +356,15 @@ class ModelInstance: """ if not isinstance(self.model_type_instance, TTSModel): raise Exception("Model type instance is not TTSModel") - return cast( - Iterable[bytes], - self._round_robin_invoke( - function=self.model_type_instance.invoke, - model=self.model_name, - credentials=self.credentials, - content_text=content_text, - voice=voice, - ), + return self._round_robin_invoke( + function=self.model_type_instance.invoke, + model=self.model_name, + credentials=self.credentials, + content_text=content_text, + voice=voice, ) - def _round_robin_invoke(self, function: Callable[..., Any], *args, **kwargs): + def _round_robin_invoke[**P, R](self, function: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R: """ Round-robin invoke :param function: function to invoke @@ -430,9 +402,8 @@ class ModelInstance: continue try: - if "credentials" in kwargs: - del kwargs["credentials"] - return function(*args, **kwargs, credentials=lb_config.credentials) + kwargs["credentials"] = lb_config.credentials + return function(*args, **kwargs) except InvokeRateLimitError as e: # expire in 60 seconds self.load_balancing_manager.cooldown(lb_config, expire=60) @@ -556,7 +527,7 @@ class LBModelManager: model_type: ModelType, model: str, load_balancing_configs: list[ModelLoadBalancingConfiguration], - managed_credentials: dict | None = None, + managed_credentials: dict[str, Any] | None = None, ): """ Load balancing model manager diff --git a/api/core/moderation/api/api.py b/api/core/moderation/api/api.py index 2d72b17a04..28165592fc 100644 --- a/api/core/moderation/api/api.py +++ b/api/core/moderation/api/api.py @@ -1,3 +1,5 @@ +from typing import Any + from pydantic import BaseModel, Field from sqlalchemy import select @@ -10,7 +12,7 @@ from models.api_based_extension import APIBasedExtension class ModerationInputParams(BaseModel): app_id: str = "" - inputs: dict = Field(default_factory=dict) + inputs: dict[str, Any] = Field(default_factory=dict) query: str = "" @@ -23,7 +25,7 @@ class ApiModeration(Moderation): name: str = "api" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -41,7 +43,7 @@ class ApiModeration(Moderation): if not extension: raise ValueError("API-based Extension not found. Please check it again.") - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: flagged = False preset_response = "" if self.config is None: @@ -73,7 +75,7 @@ class ApiModeration(Moderation): flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response ) - def _get_config_by_requestor(self, extension_point: APIBasedExtensionPoint, params: dict): + def _get_config_by_requestor(self, extension_point: APIBasedExtensionPoint, params: dict[str, Any]): if self.config is None: raise ValueError("The config is not set.") extension = self._get_api_based_extension(self.tenant_id, self.config.get("api_based_extension_id", "")) diff --git a/api/core/moderation/base.py b/api/core/moderation/base.py index 31dd0d5568..e090ee89ad 100644 --- a/api/core/moderation/base.py +++ b/api/core/moderation/base.py @@ -1,5 +1,6 @@ from abc import ABC, abstractmethod from enum import StrEnum, auto +from typing import Any from pydantic import BaseModel, Field @@ -15,7 +16,7 @@ class ModerationInputsResult(BaseModel): flagged: bool = False action: ModerationAction preset_response: str = "" - inputs: dict = Field(default_factory=dict) + inputs: dict[str, Any] = Field(default_factory=dict) query: str = "" @@ -33,13 +34,13 @@ class Moderation(Extensible, ABC): module: ExtensionModule = ExtensionModule.MODERATION - def __init__(self, app_id: str, tenant_id: str, config: dict | None = None): + def __init__(self, app_id: str, tenant_id: str, config: dict[str, Any] | None = None): super().__init__(tenant_id, config) self.app_id = app_id @classmethod @abstractmethod - def validate_config(cls, tenant_id: str, config: dict) -> None: + def validate_config(cls, tenant_id: str, config: dict[str, Any]) -> None: """ Validate the incoming form config data. @@ -50,7 +51,7 @@ class Moderation(Extensible, ABC): raise NotImplementedError @abstractmethod - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: """ Moderation for inputs. After the user inputs, this method will be called to perform sensitive content review @@ -75,7 +76,7 @@ class Moderation(Extensible, ABC): raise NotImplementedError @classmethod - def _validate_inputs_and_outputs_config(cls, config: dict, is_preset_response_required: bool): + def _validate_inputs_and_outputs_config(cls, config: dict[str, Any], is_preset_response_required: bool): # inputs_config inputs_config = config.get("inputs_config") if not isinstance(inputs_config, dict): diff --git a/api/core/moderation/factory.py b/api/core/moderation/factory.py index c2c8be6d6d..c22306ac94 100644 --- a/api/core/moderation/factory.py +++ b/api/core/moderation/factory.py @@ -1,3 +1,5 @@ +from typing import Any + from core.extension.extensible import ExtensionModule from core.moderation.base import Moderation, ModerationInputsResult, ModerationOutputsResult from extensions.ext_code_based_extension import code_based_extension @@ -6,12 +8,12 @@ from extensions.ext_code_based_extension import code_based_extension class ModerationFactory: __extension_instance: Moderation - def __init__(self, name: str, app_id: str, tenant_id: str, config: dict): + def __init__(self, name: str, app_id: str, tenant_id: str, config: dict[str, Any]): extension_class = code_based_extension.extension_class(ExtensionModule.MODERATION, name) self.__extension_instance = extension_class(app_id, tenant_id, config) @classmethod - def validate_config(cls, name: str, tenant_id: str, config: dict): + def validate_config(cls, name: str, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -24,7 +26,7 @@ class ModerationFactory: # FIXME: mypy error, try to fix it instead of using type: ignore extension_class.validate_config(tenant_id, config) # type: ignore - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: """ Moderation for inputs. After the user inputs, this method will be called to perform sensitive content review diff --git a/api/core/moderation/keywords/keywords.py b/api/core/moderation/keywords/keywords.py index 8d8d153743..7d80d3a53c 100644 --- a/api/core/moderation/keywords/keywords.py +++ b/api/core/moderation/keywords/keywords.py @@ -8,7 +8,7 @@ class KeywordsModeration(Moderation): name: str = "keywords" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -28,7 +28,7 @@ class KeywordsModeration(Moderation): if len(keywords_row_len) > 100: raise ValueError("the number of rows for the keywords must be less than 100") - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: flagged = False preset_response = "" if self.config is None: @@ -66,7 +66,7 @@ class KeywordsModeration(Moderation): flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response ) - def _is_violated(self, inputs: dict, keywords_list: list) -> bool: + def _is_violated(self, inputs: dict[str, Any], keywords_list: list[str]) -> bool: return any(self._check_keywords_in_value(keywords_list, value) for value in inputs.values()) def _check_keywords_in_value(self, keywords_list: Sequence[str], value: Any) -> bool: diff --git a/api/core/moderation/openai_moderation/openai_moderation.py b/api/core/moderation/openai_moderation/openai_moderation.py index dd038c77f1..6e6e94502c 100644 --- a/api/core/moderation/openai_moderation/openai_moderation.py +++ b/api/core/moderation/openai_moderation/openai_moderation.py @@ -1,14 +1,15 @@ -from graphon.model_runtime.entities.model_entities import ModelType +from typing import Any from core.model_manager import ModelManager from core.moderation.base import Moderation, ModerationAction, ModerationInputsResult, ModerationOutputsResult +from graphon.model_runtime.entities.model_entities import ModelType class OpenAIModeration(Moderation): name: str = "openai_moderation" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -18,7 +19,7 @@ class OpenAIModeration(Moderation): """ cls._validate_inputs_and_outputs_config(config, True) - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: flagged = False preset_response = "" if self.config is None: @@ -49,7 +50,7 @@ class OpenAIModeration(Moderation): flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response ) - def _is_violated(self, inputs: dict): + def _is_violated(self, inputs: dict[str, Any]): text = "\n".join(str(inputs.values())) model_manager = ModelManager.for_tenant(tenant_id=self.tenant_id) model_instance = model_manager.get_model_instance( diff --git a/api/core/ops/aliyun_trace/aliyun_trace.py b/api/core/ops/aliyun_trace/aliyun_trace.py index 70aaf2a07b..76e81242f4 100644 --- a/api/core/ops/aliyun_trace/aliyun_trace.py +++ b/api/core/ops/aliyun_trace/aliyun_trace.py @@ -1,8 +1,6 @@ import logging from collections.abc import Sequence -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from opentelemetry.trace import SpanKind from sqlalchemy.orm import sessionmaker @@ -60,6 +58,8 @@ from core.ops.entities.trace_entity import ( ) from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/ops/aliyun_trace/utils.py b/api/core/ops/aliyun_trace/utils.py index aa35ac74c2..2e02a186cc 100644 --- a/api/core/ops/aliyun_trace/utils.py +++ b/api/core/ops/aliyun_trace/utils.py @@ -2,8 +2,6 @@ import json from collections.abc import Mapping from typing import Any, TypedDict -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionStatus from opentelemetry.trace import Link, Status, StatusCode from core.ops.aliyun_trace.entities.semconv import ( @@ -17,6 +15,8 @@ from core.ops.aliyun_trace.entities.semconv import ( ) from core.rag.models.document import Document from extensions.ext_database import db +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionStatus from models import EndUser # Constants diff --git a/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py b/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py index 66933cea28..78516e1a22 100644 --- a/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py +++ b/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py @@ -6,7 +6,6 @@ from datetime import datetime, timedelta from typing import Any, Union, cast from urllib.parse import urlparse -from graphon.enums import WorkflowNodeExecutionStatus from openinference.semconv.trace import ( MessageAttributes, OpenInferenceMimeTypeValues, @@ -41,6 +40,7 @@ from core.ops.entities.trace_entity import ( from core.ops.utils import JSON_DICT_ADAPTER from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import WorkflowNodeExecutionStatus from models.model import EndUser, MessageFile from models.workflow import WorkflowNodeExecutionTriggeredFrom @@ -778,7 +778,7 @@ class ArizePhoenixDataTrace(BaseTraceInstance): logger.info("[Arize/Phoenix] Failed to construct project URL: %s", str(e), exc_info=True) raise ValueError(f"[Arize/Phoenix] Failed to construct project URL: {str(e)}") - def _construct_llm_attributes(self, prompts: dict | list | str | None) -> dict[str, str]: + def _construct_llm_attributes(self, prompts: dict[str, Any] | list[Any] | str | None) -> dict[str, str]: """Construct LLM attributes with passed prompts for Arize/Phoenix.""" attributes: dict[str, str] = {} @@ -797,7 +797,9 @@ class ArizePhoenixDataTrace(BaseTraceInstance): path = f"{SpanAttributes.LLM_INPUT_MESSAGES}.{message_index}.{key}" set_attribute(path, value) - def set_tool_call_attributes(message_index: int, tool_index: int, tool_call: dict | object | None) -> None: + def set_tool_call_attributes( + message_index: int, tool_index: int, tool_call: dict[str, Any] | object | None + ) -> None: """Extract and assign tool call details safely.""" if not tool_call: return diff --git a/api/core/ops/langfuse_trace/langfuse_trace.py b/api/core/ops/langfuse_trace/langfuse_trace.py index 9be2ce1bdf..7eacc2be46 100644 --- a/api/core/ops/langfuse_trace/langfuse_trace.py +++ b/api/core/ops/langfuse_trace/langfuse_trace.py @@ -3,7 +3,6 @@ import os import uuid from datetime import UTC, datetime, timedelta -from graphon.enums import BuiltinNodeTypes from langfuse import Langfuse from langfuse.api import ( CreateGenerationBody, @@ -40,6 +39,7 @@ from core.ops.langfuse_trace.entities.langfuse_trace_entity import ( from core.ops.utils import filter_none_values from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes from models import EndUser, WorkflowNodeExecutionTriggeredFrom from models.enums import MessageStatus @@ -59,6 +59,24 @@ class LangFuseDataTrace(BaseTraceInstance): ) self.file_base_url = os.getenv("FILES_URL", "http://127.0.0.1:5001") + @staticmethod + def _get_completion_start_time( + start_time: datetime | None, time_to_first_token: float | int | None + ) -> datetime | None: + """Convert a relative TTFT value in seconds into Langfuse's absolute completion start time.""" + if start_time is None or time_to_first_token is None: + return None + + try: + ttft_seconds = float(time_to_first_token) + except (TypeError, ValueError): + return None + + if ttft_seconds < 0: + return None + + return start_time + timedelta(seconds=ttft_seconds) + def trace(self, trace_info: BaseTraceInfo): if isinstance(trace_info, WorkflowTraceInfo): self.workflow_trace(trace_info) @@ -189,10 +207,18 @@ class LangFuseDataTrace(BaseTraceInstance): total_token = metadata.get("total_tokens", 0) prompt_tokens = 0 completion_tokens = 0 + completion_start_time = None try: - usage_data = process_data.get("usage", {}) if "usage" in process_data else outputs.get("usage", {}) + usage_data = process_data.get("usage") + if not isinstance(usage_data, dict): + usage_data = outputs.get("usage") + if not isinstance(usage_data, dict): + usage_data = {} prompt_tokens = usage_data.get("prompt_tokens", 0) completion_tokens = usage_data.get("completion_tokens", 0) + completion_start_time = self._get_completion_start_time( + created_at, usage_data.get("time_to_first_token") + ) except Exception: logger.error("Failed to extract usage", exc_info=True) @@ -210,6 +236,7 @@ class LangFuseDataTrace(BaseTraceInstance): trace_id=trace_id, model=process_data.get("model_name"), start_time=created_at, + completion_start_time=completion_start_time, end_time=finished_at, input=inputs, output=outputs, @@ -290,11 +317,16 @@ class LangFuseDataTrace(BaseTraceInstance): unit=UnitEnum.TOKENS, totalCost=message_data.total_price, ) + completion_start_time = self._get_completion_start_time( + trace_info.start_time, + trace_info.gen_ai_server_time_to_first_token, + ) langfuse_generation_data = LangfuseGeneration( name="llm", trace_id=trace_id, start_time=trace_info.start_time, + completion_start_time=completion_start_time, end_time=trace_info.end_time, model=message_data.model_id, input=trace_info.inputs, diff --git a/api/core/ops/langsmith_trace/langsmith_trace.py b/api/core/ops/langsmith_trace/langsmith_trace.py index 490c64af84..d960038f15 100644 --- a/api/core/ops/langsmith_trace/langsmith_trace.py +++ b/api/core/ops/langsmith_trace/langsmith_trace.py @@ -4,7 +4,6 @@ import uuid from datetime import datetime, timedelta from typing import cast -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from langsmith import Client from langsmith.schemas import RunBase from sqlalchemy.orm import sessionmaker @@ -30,6 +29,7 @@ from core.ops.langsmith_trace.entities.langsmith_trace_entity import ( from core.ops.utils import filter_none_values, generate_dotted_order from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser, MessageFile, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/ops/mlflow_trace/mlflow_trace.py b/api/core/ops/mlflow_trace/mlflow_trace.py index 3d8c1dd038..87fcaeabcc 100644 --- a/api/core/ops/mlflow_trace/mlflow_trace.py +++ b/api/core/ops/mlflow_trace/mlflow_trace.py @@ -4,7 +4,6 @@ from datetime import datetime, timedelta from typing import Any, cast import mlflow -from graphon.enums import BuiltinNodeTypes from mlflow.entities import Document, Span, SpanEvent, SpanStatusCode, SpanType from mlflow.tracing.constant import SpanAttributeKey, TokenUsageKey, TraceMetadataKey from mlflow.tracing.fluent import start_span_no_context, update_current_trace @@ -26,6 +25,7 @@ from core.ops.entities.trace_entity import ( ) from core.ops.utils import JSON_DICT_ADAPTER from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes from models import EndUser from models.workflow import WorkflowNodeExecutionModel @@ -242,7 +242,7 @@ class MLflowDataTrace(BaseTraceInstance): return inputs, attributes - def _parse_knowledge_retrieval_outputs(self, outputs: dict): + def _parse_knowledge_retrieval_outputs(self, outputs: dict[str, Any]): """Parse KR outputs and attributes from KR workflow node""" retrieved = outputs.get("result", []) @@ -319,7 +319,7 @@ class MLflowDataTrace(BaseTraceInstance): end_time_ns=datetime_to_nanoseconds(trace_info.end_time), ) - def _get_message_user_id(self, metadata: dict) -> str | None: + def _get_message_user_id(self, metadata: dict[str, Any]) -> str | None: if (end_user_id := metadata.get("from_end_user_id")) and ( end_user_data := db.session.get(EndUser, end_user_id) ): @@ -468,7 +468,7 @@ class MLflowDataTrace(BaseTraceInstance): } return node_type_mapping.get(node_type, "CHAIN") # type: ignore[arg-type,call-overload] - def _set_trace_metadata(self, span: Span, metadata: dict): + def _set_trace_metadata(self, span: Span, metadata: dict[str, Any]): token = None try: # NB: Set span in context such that we can use update_current_trace() API @@ -490,7 +490,7 @@ class MLflowDataTrace(BaseTraceInstance): return messages return prompts # Fallback to original format - def _parse_single_message(self, item: dict): + def _parse_single_message(self, item: dict[str, Any]): """Postprocess single message format to be standard chat message""" role = item.get("role", "user") msg = {"role": role, "content": item.get("text", "")} diff --git a/api/core/ops/opik_trace/opik_trace.py b/api/core/ops/opik_trace/opik_trace.py index 2215bdeb33..672efe45bd 100644 --- a/api/core/ops/opik_trace/opik_trace.py +++ b/api/core/ops/opik_trace/opik_trace.py @@ -3,9 +3,8 @@ import logging import os import uuid from datetime import datetime, timedelta -from typing import cast +from typing import Any, cast -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from opik import Opik, Trace from opik.id_helpers import uuid4_to_uuid7 from sqlalchemy.orm import sessionmaker @@ -25,6 +24,7 @@ from core.ops.entities.trace_entity import ( ) from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser, MessageFile, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) @@ -436,7 +436,7 @@ class OpikDataTrace(BaseTraceInstance): self.add_span(span_data) - def add_trace(self, opik_trace_data: dict) -> Trace: + def add_trace(self, opik_trace_data: dict[str, Any]) -> Trace: try: trace = self.opik_client.trace(**opik_trace_data) logger.debug("Opik Trace created successfully") @@ -444,7 +444,7 @@ class OpikDataTrace(BaseTraceInstance): except Exception as e: raise ValueError(f"Opik Failed to create trace: {str(e)}") - def add_span(self, opik_span_data: dict): + def add_span(self, opik_span_data: dict[str, Any]): try: self.opik_client.span(**opik_span_data) logger.debug("Opik Span created successfully") diff --git a/api/core/ops/ops_trace_manager.py b/api/core/ops/ops_trace_manager.py index fd235faf80..cd63951537 100644 --- a/api/core/ops/ops_trace_manager.py +++ b/api/core/ops/ops_trace_manager.py @@ -324,7 +324,7 @@ class OpsTraceManager: @classmethod def encrypt_tracing_config( - cls, tenant_id: str, tracing_provider: str, tracing_config: dict, current_trace_config=None + cls, tenant_id: str, tracing_provider: str, tracing_config: dict[str, Any], current_trace_config=None ): """ Encrypt tracing config. @@ -363,7 +363,7 @@ class OpsTraceManager: return encrypted_config.model_dump() @classmethod - def decrypt_tracing_config(cls, tenant_id: str, tracing_provider: str, tracing_config: dict): + def decrypt_tracing_config(cls, tenant_id: str, tracing_provider: str, tracing_config: dict[str, Any]): """ Decrypt tracing config :param tenant_id: tenant id @@ -408,7 +408,7 @@ class OpsTraceManager: return dict(decrypted_config) @classmethod - def obfuscated_decrypt_token(cls, tracing_provider: str, decrypt_tracing_config: dict): + def obfuscated_decrypt_token(cls, tracing_provider: str, decrypt_tracing_config: dict[str, Any]): """ Decrypt tracing config :param tracing_provider: tracing provider @@ -581,7 +581,7 @@ class OpsTraceManager: return app_trace_config @staticmethod - def check_trace_config_is_effective(tracing_config: dict, tracing_provider: str): + def check_trace_config_is_effective(tracing_config: dict[str, Any], tracing_provider: str): """ Check trace config is effective :param tracing_config: tracing config @@ -596,7 +596,7 @@ class OpsTraceManager: return trace_instance(config).api_check() @staticmethod - def get_trace_config_project_key(tracing_config: dict, tracing_provider: str): + def get_trace_config_project_key(tracing_config: dict[str, Any], tracing_provider: str): """ get trace config is project key :param tracing_config: tracing config @@ -611,7 +611,7 @@ class OpsTraceManager: return trace_instance(config).get_project_key() @staticmethod - def get_trace_config_project_url(tracing_config: dict, tracing_provider: str): + def get_trace_config_project_url(tracing_config: dict[str, Any], tracing_provider: str): """ get trace config is project key :param tracing_config: tracing config @@ -1322,8 +1322,8 @@ class TraceTask: error=error, ) - def node_execution_trace(self, **kwargs) -> WorkflowNodeTraceInfo | dict: - node_data: dict = kwargs.get("node_execution_data", {}) + def node_execution_trace(self, **kwargs) -> WorkflowNodeTraceInfo | dict[str, Any]: + node_data: dict[str, Any] = kwargs.get("node_execution_data", {}) if not node_data: return {} @@ -1431,7 +1431,7 @@ class TraceTask: return node_trace return DraftNodeExecutionTrace(**node_trace.model_dump()) - def _extract_streaming_metrics(self, message_data) -> dict: + def _extract_streaming_metrics(self, message_data) -> dict[str, Any]: if not message_data.message_metadata: return {} diff --git a/api/core/ops/tencent_trace/span_builder.py b/api/core/ops/tencent_trace/span_builder.py index f79095d966..36878dc58f 100644 --- a/api/core/ops/tencent_trace/span_builder.py +++ b/api/core/ops/tencent_trace/span_builder.py @@ -6,8 +6,6 @@ import json import logging from datetime import datetime -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from opentelemetry.trace import Status, StatusCode from core.ops.entities.trace_entity import ( @@ -43,6 +41,8 @@ from core.ops.tencent_trace.entities.semconv import ( from core.ops.tencent_trace.entities.tencent_trace_entity import SpanData from core.ops.tencent_trace.utils import TencentTraceUtils from core.rag.models.document import Document +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus logger = logging.getLogger(__name__) diff --git a/api/core/ops/tencent_trace/tencent_trace.py b/api/core/ops/tencent_trace/tencent_trace.py index 84f54d8a5a..d681b9da80 100644 --- a/api/core/ops/tencent_trace/tencent_trace.py +++ b/api/core/ops/tencent_trace/tencent_trace.py @@ -4,10 +4,6 @@ Tencent APM tracing implementation with separated concerns import logging -from graphon.entities.workflow_node_execution import ( - WorkflowNodeExecution, -) -from graphon.nodes import BuiltinNodeTypes from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -29,6 +25,10 @@ from core.ops.tencent_trace.span_builder import TencentSpanBuilder from core.ops.tencent_trace.utils import TencentTraceUtils from core.repositories import SQLAlchemyWorkflowNodeExecutionRepository from extensions.ext_database import db +from graphon.entities.workflow_node_execution import ( + WorkflowNodeExecution, +) +from graphon.nodes import BuiltinNodeTypes from models import Account, App, TenantAccountJoin, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/ops/weave_trace/weave_trace.py b/api/core/ops/weave_trace/weave_trace.py index 8d9ba4694d..f79544f1c7 100644 --- a/api/core/ops/weave_trace/weave_trace.py +++ b/api/core/ops/weave_trace/weave_trace.py @@ -6,7 +6,6 @@ from typing import Any, cast import wandb import weave -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from sqlalchemy.orm import sessionmaker from weave.trace_server.trace_server_interface import ( CallEndReq, @@ -33,6 +32,7 @@ from core.ops.entities.trace_entity import ( from core.ops.weave_trace.entities.weave_trace_entity import WeaveTraceModel from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser, MessageFile, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/plugin/backwards_invocation/app.py b/api/core/plugin/backwards_invocation/app.py index be11d2223c..c76cb865c3 100644 --- a/api/core/plugin/backwards_invocation/app.py +++ b/api/core/plugin/backwards_invocation/app.py @@ -1,6 +1,6 @@ import uuid from collections.abc import Generator, Mapping -from typing import Any, Union, cast +from typing import Any, cast from sqlalchemy import select from sqlalchemy.orm import Session @@ -72,17 +72,18 @@ class PluginAppBackwardsInvocation(BaseBackwardsInvocation): conversation_id = conversation_id or "" - if app.mode in {AppMode.ADVANCED_CHAT, AppMode.AGENT_CHAT, AppMode.CHAT}: - if not query: - raise ValueError("missing query") + match app.mode: + case AppMode.ADVANCED_CHAT | AppMode.AGENT_CHAT | AppMode.CHAT: + if not query: + raise ValueError("missing query") - return cls.invoke_chat_app(app, user, conversation_id, query, stream, inputs, files) - elif app.mode == AppMode.WORKFLOW: - return cls.invoke_workflow_app(app, user, stream, inputs, files) - elif app.mode == AppMode.COMPLETION: - return cls.invoke_completion_app(app, user, stream, inputs, files) - - raise ValueError("unexpected app type") + return cls.invoke_chat_app(app, user, conversation_id, query, stream, inputs, files) + case AppMode.WORKFLOW: + return cls.invoke_workflow_app(app, user, stream, inputs, files) + case AppMode.COMPLETION: + return cls.invoke_completion_app(app, user, stream, inputs, files) + case _: + raise ValueError("unexpected app type") @classmethod def invoke_chat_app( @@ -98,60 +99,61 @@ class PluginAppBackwardsInvocation(BaseBackwardsInvocation): """ invoke chat app """ - if app.mode == AppMode.ADVANCED_CHAT: - workflow = app.workflow - if not workflow: + match app.mode: + case AppMode.ADVANCED_CHAT: + workflow = app.workflow + if not workflow: + raise ValueError("unexpected app type") + + pause_config = PauseStateLayerConfig( + session_factory=db.engine, + state_owner_user_id=workflow.created_by, + ) + + return AdvancedChatAppGenerator().generate( + app_model=app, + workflow=workflow, + user=user, + args={ + "inputs": inputs, + "query": query, + "files": files, + "conversation_id": conversation_id, + }, + invoke_from=InvokeFrom.SERVICE_API, + workflow_run_id=str(uuid.uuid4()), + streaming=stream, + pause_state_config=pause_config, + ) + case AppMode.AGENT_CHAT: + return AgentChatAppGenerator().generate( + app_model=app, + user=user, + args={ + "inputs": inputs, + "query": query, + "files": files, + "conversation_id": conversation_id, + }, + invoke_from=InvokeFrom.SERVICE_API, + streaming=stream, + ) + case AppMode.CHAT: + return ChatAppGenerator().generate( + app_model=app, + user=user, + args={ + "inputs": inputs, + "query": query, + "files": files, + "conversation_id": conversation_id, + }, + invoke_from=InvokeFrom.SERVICE_API, + streaming=stream, + ) + case _: raise ValueError("unexpected app type") - pause_config = PauseStateLayerConfig( - session_factory=db.engine, - state_owner_user_id=workflow.created_by, - ) - - return AdvancedChatAppGenerator().generate( - app_model=app, - workflow=workflow, - user=user, - args={ - "inputs": inputs, - "query": query, - "files": files, - "conversation_id": conversation_id, - }, - invoke_from=InvokeFrom.SERVICE_API, - workflow_run_id=str(uuid.uuid4()), - streaming=stream, - pause_state_config=pause_config, - ) - elif app.mode == AppMode.AGENT_CHAT: - return AgentChatAppGenerator().generate( - app_model=app, - user=user, - args={ - "inputs": inputs, - "query": query, - "files": files, - "conversation_id": conversation_id, - }, - invoke_from=InvokeFrom.SERVICE_API, - streaming=stream, - ) - elif app.mode == AppMode.CHAT: - return ChatAppGenerator().generate( - app_model=app, - user=user, - args={ - "inputs": inputs, - "query": query, - "files": files, - "conversation_id": conversation_id, - }, - invoke_from=InvokeFrom.SERVICE_API, - streaming=stream, - ) - else: - raise ValueError("unexpected app type") - @classmethod def invoke_workflow_app( cls, @@ -205,7 +207,7 @@ class PluginAppBackwardsInvocation(BaseBackwardsInvocation): ) @classmethod - def _get_user(cls, user_id: str) -> Union[EndUser, Account]: + def _get_user(cls, user_id: str) -> EndUser | Account: """ get the user by user id """ diff --git a/api/core/plugin/backwards_invocation/model.py b/api/core/plugin/backwards_invocation/model.py index c715b9171c..c92438960a 100644 --- a/api/core/plugin/backwards_invocation/model.py +++ b/api/core/plugin/backwards_invocation/model.py @@ -1,20 +1,7 @@ import tempfile from binascii import hexlify, unhexlify from collections.abc import Generator - -from graphon.model_runtime.entities.llm_entities import ( - LLMResult, - LLMResultChunk, - LLMResultChunkDelta, - LLMResultChunkWithStructuredOutput, - LLMResultWithStructuredOutput, -) -from graphon.model_runtime.entities.message_entities import ( - PromptMessage, - SystemPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.entities.model_entities import ModelType +from typing import Any from core.app.llm import deduct_llm_quota from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output @@ -32,6 +19,19 @@ from core.plugin.entities.request import ( ) from core.tools.entities.tool_entities import ToolProviderType from core.tools.utils.model_invocation_utils import ModelInvocationUtils +from graphon.model_runtime.entities.llm_entities import ( + LLMResult, + LLMResultChunk, + LLMResultChunkDelta, + LLMResultChunkWithStructuredOutput, + LLMResultWithStructuredOutput, +) +from graphon.model_runtime.entities.message_entities import ( + PromptMessage, + SystemPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.entities.model_entities import ModelType from models.account import Tenant @@ -226,7 +226,7 @@ class PluginModelBackwardsInvocation(BaseBackwardsInvocation): # invoke model response = model_instance.invoke_tts(content_text=payload.content_text, voice=payload.voice) - def handle() -> Generator[dict, None, None]: + def handle() -> Generator[dict[str, Any], None, None]: for chunk in response: yield {"result": hexlify(chunk).decode("utf-8")} diff --git a/api/core/plugin/backwards_invocation/node.py b/api/core/plugin/backwards_invocation/node.py index 9478997494..9550e49992 100644 --- a/api/core/plugin/backwards_invocation/node.py +++ b/api/core/plugin/backwards_invocation/node.py @@ -1,3 +1,4 @@ +from core.plugin.backwards_invocation.base import BaseBackwardsInvocation from graphon.enums import BuiltinNodeTypes from graphon.nodes.llm.entities import ModelConfig as LLMModelConfig from graphon.nodes.parameter_extractor.entities import ( @@ -8,8 +9,6 @@ from graphon.nodes.question_classifier.entities import ( ClassConfig, QuestionClassifierNodeData, ) - -from core.plugin.backwards_invocation.base import BaseBackwardsInvocation from services.workflow_service import WorkflowService diff --git a/api/core/plugin/entities/endpoint.py b/api/core/plugin/entities/endpoint.py index e5bca140f8..6419963668 100644 --- a/api/core/plugin/entities/endpoint.py +++ b/api/core/plugin/entities/endpoint.py @@ -1,4 +1,5 @@ from datetime import datetime +from typing import Any from pydantic import BaseModel, Field, model_validator @@ -31,7 +32,7 @@ class EndpointEntity(BasePluginEntity): entity of an endpoint """ - settings: dict + settings: dict[str, Any] tenant_id: str plugin_id: str expired_at: datetime diff --git a/api/core/plugin/entities/marketplace.py b/api/core/plugin/entities/marketplace.py index 2177e8af90..03398873e3 100644 --- a/api/core/plugin/entities/marketplace.py +++ b/api/core/plugin/entities/marketplace.py @@ -1,10 +1,12 @@ -from graphon.model_runtime.entities.provider_entities import ProviderEntity +from typing import Any + from pydantic import BaseModel, Field, computed_field, model_validator from core.plugin.entities.endpoint import EndpointProviderDeclaration from core.plugin.entities.plugin import PluginResourceRequirements from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderEntity +from graphon.model_runtime.entities.provider_entities import ProviderEntity class MarketplacePluginDeclaration(BaseModel): @@ -40,7 +42,7 @@ class MarketplacePluginDeclaration(BaseModel): @model_validator(mode="before") @classmethod - def transform_declaration(cls, data: dict): + def transform_declaration(cls, data: dict[str, Any]) -> dict[str, Any]: if "endpoint" in data and not data["endpoint"]: del data["endpoint"] if "model" in data and not data["model"]: diff --git a/api/core/plugin/entities/plugin.py b/api/core/plugin/entities/plugin.py index b095b4998d..89e0e8881c 100644 --- a/api/core/plugin/entities/plugin.py +++ b/api/core/plugin/entities/plugin.py @@ -3,7 +3,6 @@ from collections.abc import Mapping from enum import StrEnum, auto from typing import Any -from graphon.model_runtime.entities.provider_entities import ProviderEntity from packaging.version import InvalidVersion, Version from pydantic import BaseModel, Field, field_validator, model_validator @@ -14,6 +13,7 @@ from core.plugin.entities.endpoint import EndpointProviderDeclaration from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderEntity from core.trigger.entities.entities import TriggerProviderEntity +from graphon.model_runtime.entities.provider_entities import ProviderEntity class PluginInstallationSource(StrEnum): @@ -123,7 +123,7 @@ class PluginDeclaration(BaseModel): @model_validator(mode="before") @classmethod - def validate_category(cls, values: dict): + def validate_category(cls, values: dict[str, Any]) -> dict[str, Any]: # auto detect category if values.get("tool"): values["category"] = PluginCategory.Tool diff --git a/api/core/plugin/entities/plugin_daemon.py b/api/core/plugin/entities/plugin_daemon.py index b57180690e..257638ad77 100644 --- a/api/core/plugin/entities/plugin_daemon.py +++ b/api/core/plugin/entities/plugin_daemon.py @@ -6,8 +6,6 @@ from datetime import datetime from enum import StrEnum from typing import Any -from graphon.model_runtime.entities.model_entities import AIModelEntity -from graphon.model_runtime.entities.provider_entities import ProviderEntity from pydantic import BaseModel, ConfigDict, Field from core.agent.plugin_entities import AgentProviderEntityWithPlugin @@ -18,6 +16,8 @@ from core.plugin.entities.plugin import PluginDeclaration, PluginEntity from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderEntityWithPlugin from core.trigger.entities.entities import TriggerProviderEntity +from graphon.model_runtime.entities.model_entities import AIModelEntity +from graphon.model_runtime.entities.provider_entities import ProviderEntity class PluginDaemonBasicResponse[T: BaseModel | dict | list | bool | str](BaseModel): @@ -73,7 +73,7 @@ class PluginBasicBooleanResponse(BaseModel): """ result: bool - credentials: dict | None = None + credentials: dict[str, Any] | None = None class PluginModelSchemaEntity(BaseModel): diff --git a/api/core/plugin/entities/request.py b/api/core/plugin/entities/request.py index 059f3fa9be..1474883204 100644 --- a/api/core/plugin/entities/request.py +++ b/api/core/plugin/entities/request.py @@ -4,6 +4,10 @@ from collections.abc import Mapping from typing import Any, Literal from flask import Response +from pydantic import BaseModel, ConfigDict, Field, field_validator + +from core.entities.provider_entities import BasicProviderConfig +from core.plugin.utils.http_parser import deserialize_response from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, PromptMessage, @@ -21,10 +25,6 @@ from graphon.nodes.parameter_extractor.entities import ( from graphon.nodes.question_classifier.entities import ( ClassConfig, ) -from pydantic import BaseModel, ConfigDict, Field, field_validator - -from core.entities.provider_entities import BasicProviderConfig -from core.plugin.utils.http_parser import deserialize_response class InvokeCredentials(BaseModel): @@ -49,7 +49,7 @@ class RequestInvokeTool(BaseModel): tool_type: Literal["builtin", "workflow", "api", "mcp"] provider: str tool: str - tool_parameters: dict + tool_parameters: dict[str, Any] credential_id: str | None = None @@ -209,7 +209,7 @@ class RequestInvokeEncrypt(BaseModel): opt: Literal["encrypt", "decrypt", "clear"] namespace: Literal["endpoint"] identity: str - data: dict = Field(default_factory=dict) + data: dict[str, Any] = Field(default_factory=dict) config: list[BasicProviderConfig] = Field(default_factory=list) diff --git a/api/core/plugin/impl/base.py b/api/core/plugin/impl/base.py index 7f36560b49..9ee8469892 100644 --- a/api/core/plugin/impl/base.py +++ b/api/core/plugin/impl/base.py @@ -5,14 +5,6 @@ from collections.abc import Callable, Generator from typing import Any, cast import httpx -from graphon.model_runtime.errors.invoke import ( - InvokeAuthorizationError, - InvokeBadRequestError, - InvokeConnectionError, - InvokeRateLimitError, - InvokeServerUnavailableError, -) -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic import BaseModel from yarl import URL @@ -37,6 +29,14 @@ from core.trigger.errors import ( TriggerPluginInvokeError, TriggerProviderCredentialValidationError, ) +from graphon.model_runtime.errors.invoke import ( + InvokeAuthorizationError, + InvokeBadRequestError, + InvokeConnectionError, + InvokeRateLimitError, + InvokeServerUnavailableError, +) +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError plugin_daemon_inner_api_baseurl = URL(str(dify_config.PLUGIN_DAEMON_URL)) _plugin_daemon_timeout_config = cast( diff --git a/api/core/plugin/impl/datasource.py b/api/core/plugin/impl/datasource.py index ce1ef71494..56c08addba 100644 --- a/api/core/plugin/impl/datasource.py +++ b/api/core/plugin/impl/datasource.py @@ -26,7 +26,7 @@ class PluginDatasourceManager(BasePluginClient): Fetch datasource providers for the given tenant. """ - def transformer(json_response: dict[str, Any]) -> dict: + def transformer(json_response: dict[str, Any]) -> dict[str, Any]: if json_response.get("data"): for provider in json_response.get("data", []): declaration = provider.get("declaration", {}) or {} @@ -68,7 +68,7 @@ class PluginDatasourceManager(BasePluginClient): Fetch datasource providers for the given tenant. """ - def transformer(json_response: dict[str, Any]) -> dict: + def transformer(json_response: dict[str, Any]) -> dict[str, Any]: if json_response.get("data"): for provider in json_response.get("data", []): declaration = provider.get("declaration", {}) or {} @@ -110,7 +110,7 @@ class PluginDatasourceManager(BasePluginClient): tool_provider_id = DatasourceProviderID(provider_id) - def transformer(json_response: dict[str, Any]) -> dict: + def transformer(json_response: dict[str, Any]) -> dict[str, Any]: data = json_response.get("data") if data: for datasource in data.get("declaration", {}).get("datasources", []): diff --git a/api/core/plugin/impl/endpoint.py b/api/core/plugin/impl/endpoint.py index 2db5185a2c..b335b42763 100644 --- a/api/core/plugin/impl/endpoint.py +++ b/api/core/plugin/impl/endpoint.py @@ -1,3 +1,5 @@ +from typing import Any + from core.plugin.entities.endpoint import EndpointEntityWithInstance from core.plugin.impl.base import BasePluginClient from core.plugin.impl.exc import PluginDaemonInternalServerError @@ -5,7 +7,12 @@ from core.plugin.impl.exc import PluginDaemonInternalServerError class PluginEndpointClient(BasePluginClient): def create_endpoint( - self, tenant_id: str, user_id: str, plugin_unique_identifier: str, name: str, settings: dict + self, + tenant_id: str, + user_id: str, + plugin_unique_identifier: str, + name: str, + settings: dict[str, Any], ) -> bool: """ Create an endpoint for the given plugin. @@ -49,7 +56,9 @@ class PluginEndpointClient(BasePluginClient): params={"plugin_id": plugin_id, "page": page, "page_size": page_size}, ) - def update_endpoint(self, tenant_id: str, user_id: str, endpoint_id: str, name: str, settings: dict): + def update_endpoint( + self, tenant_id: str, user_id: str, endpoint_id: str, name: str, settings: dict[str, Any] + ) -> bool: """ Update the settings of the given endpoint. """ diff --git a/api/core/plugin/impl/model.py b/api/core/plugin/impl/model.py index 1e38c24717..47608bdfa6 100644 --- a/api/core/plugin/impl/model.py +++ b/api/core/plugin/impl/model.py @@ -2,13 +2,6 @@ import binascii from collections.abc import Generator, Sequence from typing import IO, Any -from graphon.model_runtime.entities.llm_entities import LLMResultChunk -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool -from graphon.model_runtime.entities.model_entities import AIModelEntity -from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult -from graphon.model_runtime.utils.encoders import jsonable_encoder - from core.plugin.entities.plugin_daemon import ( PluginBasicBooleanResponse, PluginDaemonInnerError, @@ -20,6 +13,12 @@ from core.plugin.entities.plugin_daemon import ( PluginVoicesResponse, ) from core.plugin.impl.base import BasePluginClient +from graphon.model_runtime.entities.llm_entities import LLMResultChunk +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool +from graphon.model_runtime.entities.model_entities import AIModelEntity +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult +from graphon.model_runtime.utils.encoders import jsonable_encoder class PluginModelClient(BasePluginClient): @@ -50,7 +49,7 @@ class PluginModelClient(BasePluginClient): provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], ) -> AIModelEntity | None: """ Get model schema @@ -80,7 +79,7 @@ class PluginModelClient(BasePluginClient): return None def validate_provider_credentials( - self, tenant_id: str, user_id: str | None, plugin_id: str, provider: str, credentials: dict + self, tenant_id: str, user_id: str | None, plugin_id: str, provider: str, credentials: dict[str, Any] ) -> bool: """ validate the credentials of the provider @@ -118,7 +117,7 @@ class PluginModelClient(BasePluginClient): provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], ) -> bool: """ validate the credentials of the provider @@ -157,9 +156,9 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], prompt_messages: list[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: list[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: bool = True, @@ -206,7 +205,7 @@ class PluginModelClient(BasePluginClient): provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], prompt_messages: list[PromptMessage], tools: list[PromptMessageTool] | None = None, ) -> int: @@ -248,7 +247,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], texts: list[str], input_type: str, ) -> EmbeddingResult: @@ -290,7 +289,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], documents: list[dict], input_type: str, ) -> EmbeddingResult: @@ -332,7 +331,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], texts: list[str], ) -> list[int]: """ @@ -372,7 +371,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], query: str, docs: list[str], score_threshold: float | None = None, @@ -418,7 +417,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], query: MultimodalRerankInput, docs: list[MultimodalRerankInput], score_threshold: float | None = None, @@ -463,7 +462,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], content_text: str, voice: str, ) -> Generator[bytes, None, None]: @@ -508,7 +507,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], language: str | None = None, ): """ @@ -552,7 +551,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], file: IO[bytes], ) -> str: """ @@ -592,7 +591,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], text: str, ) -> bool: """ diff --git a/api/core/plugin/impl/model_runtime.py b/api/core/plugin/impl/model_runtime.py index 22c846b6de..e3fba4ef3a 100644 --- a/api/core/plugin/impl/model_runtime.py +++ b/api/core/plugin/impl/model_runtime.py @@ -6,13 +6,6 @@ from collections.abc import Generator, Iterable, Sequence from threading import Lock from typing import IO, Any, Union -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType -from graphon.model_runtime.entities.provider_entities import ProviderEntity -from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingInputType, EmbeddingResult -from graphon.model_runtime.runtime import ModelRuntime from pydantic import ValidationError from redis import RedisError @@ -21,6 +14,13 @@ from core.plugin.entities.plugin_daemon import PluginModelProviderEntity from core.plugin.impl.asset import PluginAssetManager from core.plugin.impl.model import PluginModelClient from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType +from graphon.model_runtime.entities.provider_entities import ProviderEntity +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingInputType, EmbeddingResult +from graphon.model_runtime.runtime import ModelRuntime from models.provider_ids import ModelProviderID logger = logging.getLogger(__name__) diff --git a/api/core/plugin/impl/model_runtime_factory.py b/api/core/plugin/impl/model_runtime_factory.py index 4b29a6fc56..35abd2ae8c 100644 --- a/api/core/plugin/impl/model_runtime_factory.py +++ b/api/core/plugin/impl/model_runtime_factory.py @@ -2,9 +2,8 @@ from __future__ import annotations from typing import TYPE_CHECKING -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory - from core.plugin.impl.model import PluginModelClient +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory if TYPE_CHECKING: from core.model_manager import ModelManager diff --git a/api/core/plugin/impl/plugin.py b/api/core/plugin/impl/plugin.py index ec4858ae2e..8a7175bb51 100644 --- a/api/core/plugin/impl/plugin.py +++ b/api/core/plugin/impl/plugin.py @@ -1,4 +1,5 @@ from collections.abc import Sequence +from typing import Any from requests import HTTPError @@ -209,7 +210,10 @@ class PluginInstaller(BasePluginClient): "GET", f"plugin/{tenant_id}/management/decode/from_identifier", PluginDecodeResponse, - params={"plugin_unique_identifier": plugin_unique_identifier}, + params={ + "plugin_unique_identifier": plugin_unique_identifier, + "PluginUniqueIdentifier": plugin_unique_identifier, # compat with daemon <= 0.5.4 + }, ) def fetch_plugin_installation_by_ids( @@ -260,7 +264,7 @@ class PluginInstaller(BasePluginClient): original_plugin_unique_identifier: str, new_plugin_unique_identifier: str, source: PluginInstallationSource, - meta: dict, + meta: dict[str, Any], ) -> PluginInstallTaskStartResponse: """ Upgrade a plugin. diff --git a/api/core/plugin/utils/converter.py b/api/core/plugin/utils/converter.py index 90350f8400..12d8e282b2 100644 --- a/api/core/plugin/utils/converter.py +++ b/api/core/plugin/utils/converter.py @@ -1,8 +1,7 @@ from typing import Any -from graphon.file import File - from core.tools.entities.tool_entities import ToolSelector +from graphon.file import File def convert_parameters_to_plugin_format(parameters: dict[str, Any]) -> dict[str, Any]: diff --git a/api/core/prompt/advanced_prompt_transform.py b/api/core/prompt/advanced_prompt_transform.py index 19b5e9223a..24e05ef865 100644 --- a/api/core/prompt/advanced_prompt_transform.py +++ b/api/core/prompt/advanced_prompt_transform.py @@ -1,6 +1,13 @@ from collections.abc import Mapping, Sequence from typing import cast +from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity +from core.helper.code_executor.jinja2.jinja2_formatter import Jinja2Formatter +from core.memory.token_buffer_memory import TokenBufferMemory +from core.model_manager import ModelInstance +from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig +from core.prompt.prompt_transform import PromptTransform +from core.prompt.utils.prompt_template_parser import PromptTemplateParser from graphon.file import File, file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -13,14 +20,6 @@ from graphon.model_runtime.entities import ( from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes from graphon.runtime import VariablePool -from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity -from core.helper.code_executor.jinja2.jinja2_formatter import Jinja2Formatter -from core.memory.token_buffer_memory import TokenBufferMemory -from core.model_manager import ModelInstance -from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig -from core.prompt.prompt_transform import PromptTransform -from core.prompt.utils.prompt_template_parser import PromptTemplateParser - class AdvancedPromptTransform(PromptTransform): """ diff --git a/api/core/prompt/agent_history_prompt_transform.py b/api/core/prompt/agent_history_prompt_transform.py index 9be70199b7..8f1d51f08a 100644 --- a/api/core/prompt/agent_history_prompt_transform.py +++ b/api/core/prompt/agent_history_prompt_transform.py @@ -1,17 +1,16 @@ from typing import cast -from graphon.model_runtime.entities.message_entities import ( - PromptMessage, - SystemPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel - from core.app.entities.app_invoke_entities import ( ModelConfigWithCredentialsEntity, ) from core.memory.token_buffer_memory import TokenBufferMemory from core.prompt.prompt_transform import PromptTransform +from graphon.model_runtime.entities.message_entities import ( + PromptMessage, + SystemPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel class AgentHistoryPromptTransform(PromptTransform): diff --git a/api/core/prompt/prompt_transform.py b/api/core/prompt/prompt_transform.py index 4539ae9f11..6ff2f44cdc 100644 --- a/api/core/prompt/prompt_transform.py +++ b/api/core/prompt/prompt_transform.py @@ -1,12 +1,11 @@ from typing import Any -from graphon.model_runtime.entities.message_entities import PromptMessage -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelPropertyKey - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.memory.token_buffer_memory import TokenBufferMemory from core.model_manager import ModelInstance from core.prompt.entities.advanced_prompt_entities import MemoryConfig +from graphon.model_runtime.entities.message_entities import PromptMessage +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelPropertyKey class PromptTransform: diff --git a/api/core/prompt/simple_prompt_transform.py b/api/core/prompt/simple_prompt_transform.py index c706353ffe..1665bdeb52 100644 --- a/api/core/prompt/simple_prompt_transform.py +++ b/api/core/prompt/simple_prompt_transform.py @@ -2,8 +2,14 @@ import json import os from collections.abc import Mapping, Sequence from enum import StrEnum, auto -from typing import TYPE_CHECKING, Any, cast +from typing import TYPE_CHECKING, Any, TypedDict, cast +from core.app.app_config.entities import PromptTemplateEntity +from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity +from core.memory.token_buffer_memory import TokenBufferMemory +from core.prompt.entities.advanced_prompt_entities import MemoryConfig +from core.prompt.prompt_transform import PromptTransform +from core.prompt.utils.prompt_template_parser import PromptTemplateParser from graphon.file import file_manager from graphon.model_runtime.entities.message_entities import ( ImagePromptMessageContent, @@ -13,13 +19,6 @@ from graphon.model_runtime.entities.message_entities import ( TextPromptMessageContent, UserPromptMessage, ) - -from core.app.app_config.entities import PromptTemplateEntity -from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity -from core.memory.token_buffer_memory import TokenBufferMemory -from core.prompt.entities.advanced_prompt_entities import MemoryConfig -from core.prompt.prompt_transform import PromptTransform -from core.prompt.utils.prompt_template_parser import PromptTemplateParser from models.model import AppMode if TYPE_CHECKING: @@ -34,6 +33,13 @@ class ModelMode(StrEnum): prompt_file_contents: dict[str, Any] = {} +class PromptTemplateConfigDict(TypedDict): + prompt_template: PromptTemplateParser + custom_variable_keys: list[str] + special_variable_keys: list[str] + prompt_rules: dict[str, Any] + + class SimplePromptTransform(PromptTransform): """ Simple Prompt Transform for Chatbot App Basic Mode. @@ -89,11 +95,11 @@ class SimplePromptTransform(PromptTransform): app_mode: AppMode, model_config: ModelConfigWithCredentialsEntity, pre_prompt: str, - inputs: dict, + inputs: dict[str, Any], query: str | None = None, context: str | None = None, histories: str | None = None, - ) -> tuple[str, dict]: + ) -> tuple[str, dict[str, Any]]: # get prompt template prompt_template_config = self.get_prompt_template( app_mode=app_mode, @@ -105,18 +111,13 @@ class SimplePromptTransform(PromptTransform): with_memory_prompt=histories is not None, ) - custom_variable_keys_obj = prompt_template_config["custom_variable_keys"] - special_variable_keys_obj = prompt_template_config["special_variable_keys"] + custom_variable_keys = prompt_template_config["custom_variable_keys"] + if not isinstance(custom_variable_keys, list): + raise TypeError(f"Expected list for custom_variable_keys, got {type(custom_variable_keys)}") - # Type check for custom_variable_keys - if not isinstance(custom_variable_keys_obj, list): - raise TypeError(f"Expected list for custom_variable_keys, got {type(custom_variable_keys_obj)}") - custom_variable_keys = cast(list[str], custom_variable_keys_obj) - - # Type check for special_variable_keys - if not isinstance(special_variable_keys_obj, list): - raise TypeError(f"Expected list for special_variable_keys, got {type(special_variable_keys_obj)}") - special_variable_keys = cast(list[str], special_variable_keys_obj) + special_variable_keys = prompt_template_config["special_variable_keys"] + if not isinstance(special_variable_keys, list): + raise TypeError(f"Expected list for special_variable_keys, got {type(special_variable_keys)}") variables = {k: inputs[k] for k in custom_variable_keys if k in inputs} @@ -150,7 +151,7 @@ class SimplePromptTransform(PromptTransform): has_context: bool, query_in_prompt: bool, with_memory_prompt: bool = False, - ) -> dict[str, object]: + ) -> PromptTemplateConfigDict: prompt_rules = self._get_prompt_rule(app_mode=app_mode, provider=provider, model=model) custom_variable_keys: list[str] = [] @@ -173,18 +174,19 @@ class SimplePromptTransform(PromptTransform): prompt += prompt_rules.get("query_prompt", "{{#query#}}") special_variable_keys.append("#query#") - return { + result: PromptTemplateConfigDict = { "prompt_template": PromptTemplateParser(template=prompt), "custom_variable_keys": custom_variable_keys, "special_variable_keys": special_variable_keys, "prompt_rules": prompt_rules, } + return result def _get_chat_model_prompt_messages( self, app_mode: AppMode, pre_prompt: str, - inputs: dict, + inputs: dict[str, Any], query: str, context: str | None, files: Sequence["File"], @@ -231,7 +233,7 @@ class SimplePromptTransform(PromptTransform): self, app_mode: AppMode, pre_prompt: str, - inputs: dict, + inputs: dict[str, Any], query: str, context: str | None, files: Sequence["File"], @@ -310,7 +312,7 @@ class SimplePromptTransform(PromptTransform): return prompt_message - def _get_prompt_rule(self, app_mode: AppMode, provider: str, model: str): + def _get_prompt_rule(self, app_mode: AppMode, provider: str, model: str) -> dict[str, Any]: """ Get simple prompt rule. :param app_mode: app mode @@ -322,7 +324,7 @@ class SimplePromptTransform(PromptTransform): # Check if the prompt file is already loaded if prompt_file_name in prompt_file_contents: - return cast(dict, prompt_file_contents[prompt_file_name]) + return cast(dict[str, Any], prompt_file_contents[prompt_file_name]) # Get the absolute path of the subdirectory prompt_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "prompt_templates") @@ -335,7 +337,7 @@ class SimplePromptTransform(PromptTransform): # Store the content of the prompt file prompt_file_contents[prompt_file_name] = content - return cast(dict, content) + return cast(dict[str, Any], content) def _prompt_file_name(self, app_mode: AppMode, provider: str, model: str) -> str: # baichuan diff --git a/api/core/prompt/utils/prompt_message_util.py b/api/core/prompt/utils/prompt_message_util.py index dbda749925..ba76eb0c4e 100644 --- a/api/core/prompt/utils/prompt_message_util.py +++ b/api/core/prompt/utils/prompt_message_util.py @@ -1,6 +1,7 @@ from collections.abc import Sequence from typing import Any, cast +from core.prompt.simple_prompt_transform import ModelMode from graphon.model_runtime.entities import ( AssistantPromptMessage, AudioPromptMessageContent, @@ -11,8 +12,6 @@ from graphon.model_runtime.entities import ( TextPromptMessageContent, ) -from core.prompt.simple_prompt_transform import ModelMode - class PromptMessageUtil: @staticmethod diff --git a/api/core/provider_manager.py b/api/core/provider_manager.py index 552de66f8b..c3bbe8fc09 100644 --- a/api/core/provider_manager.py +++ b/api/core/provider_manager.py @@ -6,14 +6,6 @@ from collections.abc import Sequence from json import JSONDecodeError from typing import TYPE_CHECKING, Any -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FormType, - ProviderEntity, -) -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from pydantic import TypeAdapter from sqlalchemy import select from sqlalchemy.exc import IntegrityError @@ -41,6 +33,14 @@ from core.helper.position_helper import is_filtered from extensions import ext_hosting_provider from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FormType, + ProviderEntity, +) +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from models.provider import ( LoadBalancingModelConfig, Provider, @@ -856,7 +856,7 @@ class ProviderManager: secret_variables: list[str], cache_type: ProviderCredentialsCacheType, is_provider: bool = False, - ) -> dict: + ) -> dict[str, Any]: """Get and decrypt credentials with caching.""" credentials_cache = ProviderCredentialsCache( tenant_id=tenant_id, @@ -961,36 +961,37 @@ class ProviderManager: raise ValueError("quota_used is None") if provider_record.quota_limit is None: raise ValueError("quota_limit is None") - if provider_quota.quota_type == ProviderQuotaType.TRIAL and trail_pool is not None: - quota_configuration = QuotaConfiguration( - quota_type=provider_quota.quota_type, - quota_unit=provider_hosting_configuration.quota_unit or QuotaUnit.TOKENS, - quota_used=trail_pool.quota_used, - quota_limit=trail_pool.quota_limit, - is_valid=trail_pool.quota_limit > trail_pool.quota_used or trail_pool.quota_limit == -1, - restrict_models=provider_quota.restrict_models, - ) + match provider_quota.quota_type: + case ProviderQuotaType.TRIAL if trail_pool is not None: + quota_configuration = QuotaConfiguration( + quota_type=provider_quota.quota_type, + quota_unit=provider_hosting_configuration.quota_unit or QuotaUnit.TOKENS, + quota_used=trail_pool.quota_used, + quota_limit=trail_pool.quota_limit, + is_valid=trail_pool.quota_limit > trail_pool.quota_used or trail_pool.quota_limit == -1, + restrict_models=provider_quota.restrict_models, + ) - elif provider_quota.quota_type == ProviderQuotaType.PAID and paid_pool is not None: - quota_configuration = QuotaConfiguration( - quota_type=provider_quota.quota_type, - quota_unit=provider_hosting_configuration.quota_unit or QuotaUnit.TOKENS, - quota_used=paid_pool.quota_used, - quota_limit=paid_pool.quota_limit, - is_valid=paid_pool.quota_limit > paid_pool.quota_used or paid_pool.quota_limit == -1, - restrict_models=provider_quota.restrict_models, - ) + case ProviderQuotaType.PAID if paid_pool is not None: + quota_configuration = QuotaConfiguration( + quota_type=provider_quota.quota_type, + quota_unit=provider_hosting_configuration.quota_unit or QuotaUnit.TOKENS, + quota_used=paid_pool.quota_used, + quota_limit=paid_pool.quota_limit, + is_valid=paid_pool.quota_limit > paid_pool.quota_used or paid_pool.quota_limit == -1, + restrict_models=provider_quota.restrict_models, + ) - else: - quota_configuration = QuotaConfiguration( - quota_type=provider_quota.quota_type, - quota_unit=provider_hosting_configuration.quota_unit or QuotaUnit.TOKENS, - quota_used=provider_record.quota_used, - quota_limit=provider_record.quota_limit, - is_valid=provider_record.quota_limit > provider_record.quota_used - or provider_record.quota_limit == -1, - restrict_models=provider_quota.restrict_models, - ) + case _: + quota_configuration = QuotaConfiguration( + quota_type=provider_quota.quota_type, + quota_unit=provider_hosting_configuration.quota_unit or QuotaUnit.TOKENS, + quota_used=provider_record.quota_used, + quota_limit=provider_record.quota_limit, + is_valid=provider_record.quota_limit > provider_record.quota_used + or provider_record.quota_limit == -1, + restrict_models=provider_quota.restrict_models, + ) quota_configurations.append(quota_configuration) diff --git a/api/core/rag/data_post_processor/data_post_processor.py b/api/core/rag/data_post_processor/data_post_processor.py index 9ce91f52ff..ca530748ed 100644 --- a/api/core/rag/data_post_processor/data_post_processor.py +++ b/api/core/rag/data_post_processor/data_post_processor.py @@ -1,8 +1,5 @@ from typing import TypedDict -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError - from core.model_manager import ModelInstance, ModelManager from core.rag.data_post_processor.reorder import ReorderRunner from core.rag.index_processor.constant.query_type import QueryType @@ -11,6 +8,8 @@ from core.rag.rerank.entity.weight import KeywordSetting, VectorSetting, Weights from core.rag.rerank.rerank_base import BaseRerankRunner from core.rag.rerank.rerank_factory import RerankRunnerFactory from core.rag.rerank.rerank_type import RerankMode +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError class RerankingModelDict(TypedDict): diff --git a/api/core/rag/datasource/retrieval_service.py b/api/core/rag/datasource/retrieval_service.py index c1654ac130..7e71d67ec0 100644 --- a/api/core/rag/datasource/retrieval_service.py +++ b/api/core/rag/datasource/retrieval_service.py @@ -4,7 +4,6 @@ from concurrent.futures import ThreadPoolExecutor from typing import Any, NotRequired, TypedDict from flask import Flask, current_app -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import select from sqlalchemy.orm import Session, load_only @@ -24,6 +23,7 @@ from core.rag.rerank.rerank_type import RerankMode from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.tools.signature import sign_upload_file from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import ( ChildChunk, Dataset, @@ -174,8 +174,8 @@ class RetrievalService: cls, dataset_id: str, query: str, - external_retrieval_model: dict | None = None, - metadata_filtering_conditions: dict | None = None, + external_retrieval_model: dict[str, Any] | None = None, + metadata_filtering_conditions: dict[str, Any] | None = None, ): stmt = select(Dataset).where(Dataset.id == dataset_id) dataset = db.session.scalar(stmt) @@ -195,6 +195,23 @@ class RetrievalService: ) return all_documents + @classmethod + def _filter_documents_by_vector_score_threshold( + cls, documents: list[Document], score_threshold: float | None + ) -> list[Document]: + """Keep documents whose stored retrieval score meets the threshold. + + Used when hybrid search skips early vector thresholding but no rerank + runner applies a threshold afterward (same rule as ``calculate_vector_score``). + """ + if score_threshold is None: + return documents + return [ + document + for document in documents + if document.metadata and document.metadata.get("score", 0) >= score_threshold + ] + @classmethod def _deduplicate_documents(cls, documents: list[Document]) -> list[Document]: """Deduplicate documents in O(n) while preserving first-seen order. @@ -294,13 +311,20 @@ class RetrievalService: vector = Vector(dataset=dataset) documents = [] + # Hybrid search merges keyword / full-text / vector hits and then reranks + # (weighted fusion or reranking model). Applying the user score threshold at + # vector retrieval time uses embedding similarity, which is not comparable to + # reranked or fused scores and incorrectly drops high-quality chunks (#35233). + embedding_score_threshold = ( + 0.0 if retrieval_method == RetrievalMethod.HYBRID_SEARCH else score_threshold + ) if query_type == QueryType.TEXT_QUERY: documents.extend( vector.search_by_vector( query, search_type="similarity_score_threshold", top_k=top_k, - score_threshold=score_threshold, + score_threshold=embedding_score_threshold, filter={"group_id": [dataset.id]}, document_ids_filter=document_ids_filter, ) @@ -312,7 +336,7 @@ class RetrievalService: vector.search_by_file( file_id=query, top_k=top_k, - score_threshold=score_threshold, + score_threshold=embedding_score_threshold, filter={"group_id": [dataset.id]}, document_ids_filter=document_ids_filter, ) @@ -844,6 +868,10 @@ class RetrievalService: top_n=top_k, query_type=QueryType.TEXT_QUERY if query else QueryType.IMAGE_QUERY, ) + if not data_post_processor.rerank_runner and score_threshold: + all_documents_item = self._filter_documents_by_vector_score_threshold( + all_documents_item, score_threshold + ) all_documents.extend(all_documents_item) diff --git a/api/core/rag/datasource/vdb/vector_backend_registry.py b/api/core/rag/datasource/vdb/vector_backend_registry.py new file mode 100644 index 0000000000..15f4357caf --- /dev/null +++ b/api/core/rag/datasource/vdb/vector_backend_registry.py @@ -0,0 +1,87 @@ +"""Vector store backend discovery. + +Backends live in workspace packages under ``api/packages/dify-vdb-*/src/dify_vdb_*``. Each package +declares third-party dependencies and registers ``importlib`` entry points in group +``dify.vector_backends`` (see each package's ``pyproject.toml``). + +Shared types and the :class:`~core.rag.datasource.vdb.vector_factory.AbstractVectorFactory` protocol +remain in this package (``vector_base``, ``vector_factory``, ``vector_type``, ``field``). + +Optional **built-in** targets in ``_BUILTIN_VECTOR_FACTORY_TARGETS`` (normally empty) load without a +distribution; entry points take precedence when both exist. + +After changing packages, run ``uv sync`` so installed dist-info entry points match ``pyproject.toml``. +""" + +from __future__ import annotations + +import importlib +import logging +from importlib.metadata import entry_points +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from core.rag.datasource.vdb.vector_factory import AbstractVectorFactory + +logger = logging.getLogger(__name__) + +_VECTOR_FACTORY_CACHE: dict[str, type[AbstractVectorFactory]] = {} + +# module_path:class_name — optional fallback when no distribution registers the backend. +_BUILTIN_VECTOR_FACTORY_TARGETS: dict[str, str] = {} + + +def clear_vector_factory_cache() -> None: + """Drop lazily loaded factories (for tests or plugin reload).""" + _VECTOR_FACTORY_CACHE.clear() + + +def _vector_backend_entry_points(): + return entry_points().select(group="dify.vector_backends") + + +def _load_plugin_factory(vector_type: str) -> type[AbstractVectorFactory] | None: + for ep in _vector_backend_entry_points(): + if ep.name != vector_type: + continue + try: + loaded = ep.load() + except Exception: + logger.exception("Failed to load vector backend entry point %s", ep.name) + raise + return loaded # type: ignore[return-value] + return None + + +def _unsupported(vector_type: str) -> ValueError: + installed = sorted(ep.name for ep in _vector_backend_entry_points()) + available_msg = f" Installed backends: {', '.join(installed)}." if installed else " No backends installed." + return ValueError( + f"Vector store {vector_type!r} is not supported.{available_msg} " + "Install a plugin (uv sync --group vdb-all, or vdb- per api/pyproject.toml), " + "or register a dify.vector_backends entry point." + ) + + +def _load_builtin_factory(vector_type: str) -> type[AbstractVectorFactory]: + target = _BUILTIN_VECTOR_FACTORY_TARGETS.get(vector_type) + if not target: + raise _unsupported(vector_type) + module_path, _, attr = target.partition(":") + module = importlib.import_module(module_path) + return getattr(module, attr) # type: ignore[no-any-return] + + +def get_vector_factory_class(vector_type: str) -> type[AbstractVectorFactory]: + """Resolve :class:`AbstractVectorFactory` for a :class:`~VectorType` string value.""" + if vector_type in _VECTOR_FACTORY_CACHE: + return _VECTOR_FACTORY_CACHE[vector_type] + + plugin_cls = _load_plugin_factory(vector_type) + if plugin_cls is not None: + _VECTOR_FACTORY_CACHE[vector_type] = plugin_cls + return plugin_cls + + cls = _load_builtin_factory(vector_type) + _VECTOR_FACTORY_CACHE[vector_type] = cls + return cls diff --git a/api/core/rag/datasource/vdb/vector_factory.py b/api/core/rag/datasource/vdb/vector_factory.py index 0ef88e1010..59d7f3c3c4 100644 --- a/api/core/rag/datasource/vdb/vector_factory.py +++ b/api/core/rag/datasource/vdb/vector_factory.py @@ -4,11 +4,11 @@ import time from abc import ABC, abstractmethod from typing import Any -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import select from configs import dify_config from core.model_manager import ModelManager +from core.rag.datasource.vdb.vector_backend_registry import get_vector_factory_class from core.rag.datasource.vdb.vector_base import BaseVector, VectorIndexStructDict from core.rag.datasource.vdb.vector_type import VectorType from core.rag.embedding.cached_embedding import CacheEmbedding @@ -18,6 +18,7 @@ from core.rag.models.document import Document from extensions.ext_database import db from extensions.ext_redis import redis_client from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import Dataset, Whitelist from models.model import UploadFile @@ -41,7 +42,23 @@ class AbstractVectorFactory(ABC): class Vector: def __init__(self, dataset: Dataset, attributes: list | None = None): if attributes is None: - attributes = ["doc_id", "dataset_id", "document_id", "doc_hash", "doc_type"] + # `is_summary` and `original_chunk_id` are stored on summary vectors + # by `SummaryIndexService` and read back by `RetrievalService` to + # route summary hits through their original parent chunks. They + # must be listed here so vector backends that use this list as an + # explicit return-properties projection (notably Weaviate) actually + # return those fields; without them, summary hits silently + # collapse into `is_summary = False` branches and the summary + # retrieval path is a no-op. See #34884. + attributes = [ + "doc_id", + "dataset_id", + "document_id", + "doc_hash", + "doc_type", + "is_summary", + "original_chunk_id", + ] self._dataset = dataset self._embeddings = self._get_embeddings() self._attributes = attributes @@ -69,137 +86,7 @@ class Vector: @staticmethod def get_vector_factory(vector_type: str) -> type[AbstractVectorFactory]: - match vector_type: - case VectorType.CHROMA: - from core.rag.datasource.vdb.chroma.chroma_vector import ChromaVectorFactory - - return ChromaVectorFactory - case VectorType.MILVUS: - from core.rag.datasource.vdb.milvus.milvus_vector import MilvusVectorFactory - - return MilvusVectorFactory - case VectorType.ALIBABACLOUD_MYSQL: - from core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector import ( - AlibabaCloudMySQLVectorFactory, - ) - - return AlibabaCloudMySQLVectorFactory - case VectorType.MYSCALE: - from core.rag.datasource.vdb.myscale.myscale_vector import MyScaleVectorFactory - - return MyScaleVectorFactory - case VectorType.PGVECTOR: - from core.rag.datasource.vdb.pgvector.pgvector import PGVectorFactory - - return PGVectorFactory - case VectorType.VASTBASE: - from core.rag.datasource.vdb.pyvastbase.vastbase_vector import VastbaseVectorFactory - - return VastbaseVectorFactory - case VectorType.PGVECTO_RS: - from core.rag.datasource.vdb.pgvecto_rs.pgvecto_rs import PGVectoRSFactory - - return PGVectoRSFactory - case VectorType.QDRANT: - from core.rag.datasource.vdb.qdrant.qdrant_vector import QdrantVectorFactory - - return QdrantVectorFactory - case VectorType.RELYT: - from core.rag.datasource.vdb.relyt.relyt_vector import RelytVectorFactory - - return RelytVectorFactory - case VectorType.ELASTICSEARCH: - from core.rag.datasource.vdb.elasticsearch.elasticsearch_vector import ElasticSearchVectorFactory - - return ElasticSearchVectorFactory - case VectorType.ELASTICSEARCH_JA: - from core.rag.datasource.vdb.elasticsearch.elasticsearch_ja_vector import ( - ElasticSearchJaVectorFactory, - ) - - return ElasticSearchJaVectorFactory - case VectorType.TIDB_VECTOR: - from core.rag.datasource.vdb.tidb_vector.tidb_vector import TiDBVectorFactory - - return TiDBVectorFactory - case VectorType.WEAVIATE: - from core.rag.datasource.vdb.weaviate.weaviate_vector import WeaviateVectorFactory - - return WeaviateVectorFactory - case VectorType.TENCENT: - from core.rag.datasource.vdb.tencent.tencent_vector import TencentVectorFactory - - return TencentVectorFactory - case VectorType.ORACLE: - from core.rag.datasource.vdb.oracle.oraclevector import OracleVectorFactory - - return OracleVectorFactory - case VectorType.OPENSEARCH: - from core.rag.datasource.vdb.opensearch.opensearch_vector import OpenSearchVectorFactory - - return OpenSearchVectorFactory - case VectorType.ANALYTICDB: - from core.rag.datasource.vdb.analyticdb.analyticdb_vector import AnalyticdbVectorFactory - - return AnalyticdbVectorFactory - case VectorType.COUCHBASE: - from core.rag.datasource.vdb.couchbase.couchbase_vector import CouchbaseVectorFactory - - return CouchbaseVectorFactory - case VectorType.BAIDU: - from core.rag.datasource.vdb.baidu.baidu_vector import BaiduVectorFactory - - return BaiduVectorFactory - case VectorType.VIKINGDB: - from core.rag.datasource.vdb.vikingdb.vikingdb_vector import VikingDBVectorFactory - - return VikingDBVectorFactory - case VectorType.UPSTASH: - from core.rag.datasource.vdb.upstash.upstash_vector import UpstashVectorFactory - - return UpstashVectorFactory - case VectorType.TIDB_ON_QDRANT: - from core.rag.datasource.vdb.tidb_on_qdrant.tidb_on_qdrant_vector import TidbOnQdrantVectorFactory - - return TidbOnQdrantVectorFactory - case VectorType.LINDORM: - from core.rag.datasource.vdb.lindorm.lindorm_vector import LindormVectorStoreFactory - - return LindormVectorStoreFactory - case VectorType.OCEANBASE | VectorType.SEEKDB: - from core.rag.datasource.vdb.oceanbase.oceanbase_vector import OceanBaseVectorFactory - - return OceanBaseVectorFactory - case VectorType.OPENGAUSS: - from core.rag.datasource.vdb.opengauss.opengauss import OpenGaussFactory - - return OpenGaussFactory - case VectorType.TABLESTORE: - from core.rag.datasource.vdb.tablestore.tablestore_vector import TableStoreVectorFactory - - return TableStoreVectorFactory - case VectorType.HUAWEI_CLOUD: - from core.rag.datasource.vdb.huawei.huawei_cloud_vector import HuaweiCloudVectorFactory - - return HuaweiCloudVectorFactory - case VectorType.MATRIXONE: - from core.rag.datasource.vdb.matrixone.matrixone_vector import MatrixoneVectorFactory - - return MatrixoneVectorFactory - case VectorType.CLICKZETTA: - from core.rag.datasource.vdb.clickzetta.clickzetta_vector import ClickzettaVectorFactory - - return ClickzettaVectorFactory - case VectorType.IRIS: - from core.rag.datasource.vdb.iris.iris_vector import IrisVectorFactory - - return IrisVectorFactory - case VectorType.HOLOGRES: - from core.rag.datasource.vdb.hologres.hologres_vector import HologresVectorFactory - - return HologresVectorFactory - case _: - raise ValueError(f"Vector store {vector_type} is not supported.") + return get_vector_factory_class(vector_type) def create(self, texts: list | None = None, **kwargs): if texts: diff --git a/api/tests/integration_tests/vdb/test_vector_store.py b/api/core/rag/datasource/vdb/vector_integration_test_support.py similarity index 83% rename from api/tests/integration_tests/vdb/test_vector_store.py rename to api/core/rag/datasource/vdb/vector_integration_test_support.py index a033443cf8..3148b7d5c1 100644 --- a/api/tests/integration_tests/vdb/test_vector_store.py +++ b/api/core/rag/datasource/vdb/vector_integration_test_support.py @@ -1,10 +1,19 @@ +"""Shared helpers for vector DB integration tests (used by workspace packages under ``api/packages``). + +:class:`AbstractVectorTest` and helper functions live here so package tests can import +``core.rag.datasource.vdb.vector_integration_test_support`` without relying on the +``tests.*`` package. + +The ``setup_mock_redis`` fixture lives in ``api/packages/conftest.py`` and is +auto-discovered by pytest for all package tests. +""" + import uuid -from unittest.mock import MagicMock import pytest +from core.rag.datasource.vdb.vector_base import BaseVector from core.rag.models.document import Document -from extensions import ext_redis from models.dataset import Dataset @@ -25,24 +34,10 @@ def get_example_document(doc_id: str) -> Document: return doc -@pytest.fixture -def setup_mock_redis(): - # get - ext_redis.redis_client.get = MagicMock(return_value=None) - - # set - ext_redis.redis_client.set = MagicMock(return_value=None) - - # lock - mock_redis_lock = MagicMock() - mock_redis_lock.__enter__ = MagicMock() - mock_redis_lock.__exit__ = MagicMock() - ext_redis.redis_client.lock = mock_redis_lock - - class AbstractVectorTest: + vector: BaseVector + def __init__(self): - self.vector = None self.dataset_id = str(uuid.uuid4()) self.collection_name = Dataset.gen_collection_name_by_id(self.dataset_id) + "_test" self.example_doc_id = str(uuid.uuid4()) diff --git a/api/core/rag/docstore/dataset_docstore.py b/api/core/rag/docstore/dataset_docstore.py index 40f45953af..f4699f6869 100644 --- a/api/core/rag/docstore/dataset_docstore.py +++ b/api/core/rag/docstore/dataset_docstore.py @@ -3,13 +3,13 @@ from __future__ import annotations from collections.abc import Sequence from typing import Any -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import delete, func, select from core.model_manager import ModelManager from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.models.document import AttachmentDocument, Document from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import ChildChunk, Dataset, DocumentSegment, SegmentAttachmentBinding @@ -244,7 +244,7 @@ class DatasetDocumentStore: return document_segment def add_multimodel_documents_binding(self, segment_id: str, multimodel_documents: list[AttachmentDocument] | None): - if multimodel_documents: + if multimodel_documents and self._document_id is not None: for multimodel_document in multimodel_documents: binding = SegmentAttachmentBinding( tenant_id=self._dataset.tenant_id, diff --git a/api/core/rag/embedding/cached_embedding.py b/api/core/rag/embedding/cached_embedding.py index 8d1c0da392..4926f44f16 100644 --- a/api/core/rag/embedding/cached_embedding.py +++ b/api/core/rag/embedding/cached_embedding.py @@ -4,8 +4,6 @@ import pickle from typing import Any, cast import numpy as np -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from sqlalchemy import select from sqlalchemy.exc import IntegrityError @@ -15,6 +13,8 @@ from core.model_manager import ModelInstance from core.rag.embedding.embedding_base import Embeddings from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from libs import helper from models.dataset import Embedding @@ -106,7 +106,7 @@ class CacheEmbedding(Embeddings): return text_embeddings - def embed_multimodal_documents(self, multimodel_documents: list[dict]) -> list[list[float]]: + def embed_multimodal_documents(self, multimodel_documents: list[dict[str, Any]]) -> list[list[float]]: """Embed file documents.""" # use doc embedding cache or store if not exists multimodel_embeddings: list[Any] = [None for _ in range(len(multimodel_documents))] @@ -232,7 +232,7 @@ class CacheEmbedding(Embeddings): return embedding_results # type: ignore - def embed_multimodal_query(self, multimodel_document: dict) -> list[float]: + def embed_multimodal_query(self, multimodel_document: dict[str, Any]) -> list[float]: """Embed multimodal documents.""" # use doc embedding cache or store if not exists file_id = multimodel_document["file_id"] diff --git a/api/core/rag/embedding/embedding_base.py b/api/core/rag/embedding/embedding_base.py index 1be55bda80..7ae5c09ab7 100644 --- a/api/core/rag/embedding/embedding_base.py +++ b/api/core/rag/embedding/embedding_base.py @@ -1,4 +1,5 @@ from abc import ABC, abstractmethod +from typing import Any class Embeddings(ABC): @@ -10,7 +11,7 @@ class Embeddings(ABC): raise NotImplementedError @abstractmethod - def embed_multimodal_documents(self, multimodel_documents: list[dict]) -> list[list[float]]: + def embed_multimodal_documents(self, multimodel_documents: list[dict[str, Any]]) -> list[list[float]]: """Embed file documents.""" raise NotImplementedError @@ -20,7 +21,7 @@ class Embeddings(ABC): raise NotImplementedError @abstractmethod - def embed_multimodal_query(self, multimodel_document: dict) -> list[float]: + def embed_multimodal_query(self, multimodel_document: dict[str, Any]) -> list[float]: """Embed multimodal query.""" raise NotImplementedError diff --git a/api/core/rag/entities/__init__.py b/api/core/rag/entities/__init__.py index 63c6708704..373b68894b 100644 --- a/api/core/rag/entities/__init__.py +++ b/api/core/rag/entities/__init__.py @@ -4,7 +4,12 @@ from core.rag.entities.event import DatasourceCompletedEvent, DatasourceErrorEve from core.rag.entities.index_entities import EconomySetting, EmbeddingSetting, IndexMethod from core.rag.entities.metadata_entities import Condition, MetadataFilteringCondition, SupportedComparisonOperator from core.rag.entities.processing_entities import ParentMode, PreProcessingRule, Rule, Segmentation -from core.rag.entities.retrieval_settings import KeywordSetting, VectorSetting, WeightedScoreConfig +from core.rag.entities.retrieval_settings import ( + KeywordSetting, + RerankingModelConfig, + VectorSetting, + WeightedScoreConfig, +) __all__ = [ "Condition", @@ -19,6 +24,7 @@ __all__ = [ "MetadataFilteringCondition", "ParentMode", "PreProcessingRule", + "RerankingModelConfig", "RetrievalSourceMetadata", "Rule", "Segmentation", diff --git a/api/core/rag/entities/retrieval_settings.py b/api/core/rag/entities/retrieval_settings.py index a0c6512c9c..8d40ab68fd 100644 --- a/api/core/rag/entities/retrieval_settings.py +++ b/api/core/rag/entities/retrieval_settings.py @@ -1,4 +1,27 @@ -from pydantic import BaseModel +from pydantic import BaseModel, ConfigDict, Field + + +class RerankingModelConfig(BaseModel): + """ + Canonical reranking model configuration. + + Accepts both naming conventions: + - reranking_provider_name / reranking_model_name (services layer) + - provider / model (workflow layer via validation_alias) + """ + + model_config = ConfigDict(populate_by_name=True) + + reranking_provider_name: str = Field(validation_alias="provider") + reranking_model_name: str = Field(validation_alias="model") + + @property + def provider(self) -> str: + return self.reranking_provider_name + + @property + def model(self) -> str: + return self.reranking_model_name class VectorSetting(BaseModel): diff --git a/api/core/rag/extractor/csv_extractor.py b/api/core/rag/extractor/csv_extractor.py index 3bfae9d6bd..19bc9cec84 100644 --- a/api/core/rag/extractor/csv_extractor.py +++ b/api/core/rag/extractor/csv_extractor.py @@ -1,6 +1,7 @@ """Abstract interface for document loader implementations.""" import csv +from typing import Any import pandas as pd @@ -23,7 +24,7 @@ class CSVExtractor(BaseExtractor): encoding: str | None = None, autodetect_encoding: bool = False, source_column: str | None = None, - csv_args: dict | None = None, + csv_args: dict[str, Any] | None = None, ): """Initialize with file path.""" self._file_path = file_path diff --git a/api/core/rag/extractor/extract_processor.py b/api/core/rag/extractor/extract_processor.py index 449be6a448..fbd2a6db93 100644 --- a/api/core/rag/extractor/extract_processor.py +++ b/api/core/rag/extractor/extract_processor.py @@ -95,9 +95,9 @@ class ExtractProcessor: ) -> list[Document]: if extract_setting.datasource_type == DatasourceType.FILE: with tempfile.TemporaryDirectory() as temp_dir: + upload_file = extract_setting.upload_file if not file_path: - assert extract_setting.upload_file is not None, "upload_file is required" - upload_file: UploadFile = extract_setting.upload_file + assert upload_file is not None, "upload_file is required" suffix = Path(upload_file.key).suffix # FIXME mypy: Cannot determine type of 'tempfile._get_candidate_names' better not use it here file_path = f"{temp_dir}/{next(tempfile._get_candidate_names())}{suffix}" # type: ignore @@ -113,6 +113,7 @@ class ExtractProcessor: if file_extension in {".xlsx", ".xls"}: extractor = ExcelExtractor(file_path) elif file_extension == ".pdf": + assert upload_file is not None extractor = PdfExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension in {".md", ".markdown", ".mdx"}: extractor = ( @@ -123,6 +124,7 @@ class ExtractProcessor: elif file_extension in {".htm", ".html"}: extractor = HtmlExtractor(file_path) elif file_extension == ".docx": + assert upload_file is not None extractor = WordExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension == ".doc": extractor = UnstructuredWordExtractor(file_path, unstructured_api_url, unstructured_api_key) @@ -149,12 +151,14 @@ class ExtractProcessor: if file_extension in {".xlsx", ".xls"}: extractor = ExcelExtractor(file_path) elif file_extension == ".pdf": + assert upload_file is not None extractor = PdfExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension in {".md", ".markdown", ".mdx"}: extractor = MarkdownExtractor(file_path, autodetect_encoding=True) elif file_extension in {".htm", ".html"}: extractor = HtmlExtractor(file_path) elif file_extension == ".docx": + assert upload_file is not None extractor = WordExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension == ".csv": extractor = CSVExtractor(file_path, autodetect_encoding=True) diff --git a/api/core/rag/extractor/firecrawl/firecrawl_app.py b/api/core/rag/extractor/firecrawl/firecrawl_app.py index 89bdd56a6c..556158cf00 100644 --- a/api/core/rag/extractor/firecrawl/firecrawl_app.py +++ b/api/core/rag/extractor/firecrawl/firecrawl_app.py @@ -174,21 +174,25 @@ class FirecrawlApp: return f"{self.base_url.rstrip('/')}/{path.lstrip('/')}" def _post_request(self, url, data, headers, retries=3, backoff_factor=0.5) -> httpx.Response: + response: httpx.Response | None = None for attempt in range(retries): response = httpx.post(url, headers=headers, json=data) if response.status_code == 502: time.sleep(backoff_factor * (2**attempt)) else: return response + assert response is not None, "retries must be at least 1" return response def _get_request(self, url, headers, retries=3, backoff_factor=0.5) -> httpx.Response: + response: httpx.Response | None = None for attempt in range(retries): response = httpx.get(url, headers=headers) if response.status_code == 502: time.sleep(backoff_factor * (2**attempt)) else: return response + assert response is not None, "retries must be at least 1" return response def _handle_error(self, response, action): diff --git a/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py b/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py index 7dd8beaa46..f9fbfbc409 100644 --- a/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py +++ b/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py @@ -19,12 +19,15 @@ class UnstructuredWordExtractor(BaseExtractor): def extract(self) -> list[Document]: from unstructured.__version__ import __version__ as __unstructured_version__ - from unstructured.file_utils.filetype import FileType, detect_filetype + from unstructured.file_utils.filetype import ( # pyright: ignore[reportPrivateImportUsage] + FileType, + detect_filetype, + ) unstructured_version = tuple(int(x) for x in __unstructured_version__.split(".")) # check the file extension try: - import magic # noqa: F401 + import magic # noqa: F401 # pyright: ignore[reportUnusedImport] is_doc = detect_filetype(self._file_path) == FileType.DOC except ImportError: diff --git a/api/core/rag/extractor/watercrawl/client.py b/api/core/rag/extractor/watercrawl/client.py index 7b4a388df9..d1ce142dbd 100644 --- a/api/core/rag/extractor/watercrawl/client.py +++ b/api/core/rag/extractor/watercrawl/client.py @@ -54,8 +54,8 @@ class BaseAPIClient: self, method: str, endpoint: str, - query_params: dict | None = None, - data: dict | None = None, + query_params: dict[str, Any] | None = None, + data: dict[str, Any] | None = None, **kwargs, ) -> Response: stream = kwargs.pop("stream", False) @@ -66,19 +66,25 @@ class BaseAPIClient: return self.session.request(method, url, params=query_params, json=data, **kwargs) - def _get(self, endpoint: str, query_params: dict | None = None, **kwargs): + def _get(self, endpoint: str, query_params: dict[str, Any] | None = None, **kwargs): return self._request("GET", endpoint, query_params=query_params, **kwargs) - def _post(self, endpoint: str, query_params: dict | None = None, data: dict | None = None, **kwargs): + def _post( + self, endpoint: str, query_params: dict[str, Any] | None = None, data: dict[str, Any] | None = None, **kwargs + ): return self._request("POST", endpoint, query_params=query_params, data=data, **kwargs) - def _put(self, endpoint: str, query_params: dict | None = None, data: dict | None = None, **kwargs): + def _put( + self, endpoint: str, query_params: dict[str, Any] | None = None, data: dict[str, Any] | None = None, **kwargs + ): return self._request("PUT", endpoint, query_params=query_params, data=data, **kwargs) - def _delete(self, endpoint: str, query_params: dict | None = None, **kwargs): + def _delete(self, endpoint: str, query_params: dict[str, Any] | None = None, **kwargs): return self._request("DELETE", endpoint, query_params=query_params, **kwargs) - def _patch(self, endpoint: str, query_params: dict | None = None, data: dict | None = None, **kwargs): + def _patch( + self, endpoint: str, query_params: dict[str, Any] | None = None, data: dict[str, Any] | None = None, **kwargs + ): return self._request("PATCH", endpoint, query_params=query_params, data=data, **kwargs) @@ -99,7 +105,7 @@ class WaterCrawlAPIClient(BaseAPIClient): finally: response.close() - def process_response(self, response: Response) -> dict | bytes | list | None | Generator: + def process_response(self, response: Response) -> dict[str, Any] | bytes | list[Any] | None | Generator: if response.status_code == 401: raise WaterCrawlAuthenticationError(response) @@ -186,7 +192,7 @@ class WaterCrawlAPIClient(BaseAPIClient): yield from generator def get_crawl_request_results( - self, item_id: str, page: int = 1, page_size: int = 25, query_params: dict | None = None + self, item_id: str, page: int = 1, page_size: int = 25, query_params: dict[str, Any] | None = None ): query_params = query_params or {} query_params.update({"page": page or 1, "page_size": page_size or 25}) @@ -210,7 +216,7 @@ class WaterCrawlAPIClient(BaseAPIClient): if event_data["type"] == "result": return event_data["data"] - def download_result(self, result_object: dict): + def download_result(self, result_object: dict[str, Any]): response = httpx.get(result_object["result"], timeout=None) try: response.raise_for_status() diff --git a/api/core/rag/extractor/watercrawl/provider.py b/api/core/rag/extractor/watercrawl/provider.py index 2a9403eda0..ae7bebcb9b 100644 --- a/api/core/rag/extractor/watercrawl/provider.py +++ b/api/core/rag/extractor/watercrawl/provider.py @@ -120,7 +120,7 @@ class WaterCrawlProvider: } def _get_results( - self, crawl_request_id: str, query_params: dict | None = None + self, crawl_request_id: str, query_params: dict[str, Any] | None = None ) -> Generator[WatercrawlDocumentData, None, None]: page = 0 page_size = 100 diff --git a/api/core/rag/index_processor/index_processor.py b/api/core/rag/index_processor/index_processor.py index 813a84cbbd..aded5315bd 100644 --- a/api/core/rag/index_processor/index_processor.py +++ b/api/core/rag/index_processor/index_processor.py @@ -6,7 +6,7 @@ from collections.abc import Mapping from typing import Any from flask import current_app -from sqlalchemy import delete, func, select +from sqlalchemy import delete, func, select, update from core.db.session_factory import session_factory from core.rag.index_processor.constant.index_type import IndexTechniqueType @@ -63,11 +63,11 @@ class IndexProcessor: summary_index_setting: SummaryIndexSettingDict | None = None, ) -> IndexingResultDict: with session_factory.create_session() as session: - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) if not document: raise KnowledgeIndexNodeError(f"Document {document_id} not found.") - dataset = session.query(Dataset).filter_by(id=dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: raise KnowledgeIndexNodeError(f"Dataset {dataset_id} not found.") @@ -104,12 +104,12 @@ class IndexProcessor: document.indexing_status = "completed" document.completed_at = datetime.datetime.now(datetime.UTC).replace(tzinfo=None) document.word_count = ( - session.query(func.sum(DocumentSegment.word_count)) - .where( - DocumentSegment.document_id == document_id, - DocumentSegment.dataset_id == dataset_id, + session.scalar( + select(func.sum(DocumentSegment.word_count)).where( + DocumentSegment.document_id == document_id, + DocumentSegment.dataset_id == dataset_id, + ) ) - .scalar() ) or 0 # Update need_summary based on dataset's summary_index_setting if summary_index_setting and summary_index_setting.get("enable") is True: @@ -118,15 +118,17 @@ class IndexProcessor: document.need_summary = False session.add(document) # update document segment status - session.query(DocumentSegment).where( - DocumentSegment.document_id == document_id, - DocumentSegment.dataset_id == dataset_id, - ).update( - { - DocumentSegment.status: "completed", - DocumentSegment.enabled: True, - DocumentSegment.completed_at: datetime.datetime.now(datetime.UTC).replace(tzinfo=None), - } + session.execute( + update(DocumentSegment) + .where( + DocumentSegment.document_id == document_id, + DocumentSegment.dataset_id == dataset_id, + ) + .values( + status="completed", + enabled=True, + completed_at=datetime.datetime.now(datetime.UTC).replace(tzinfo=None), + ) ) result: IndexingResultDict = { @@ -151,11 +153,11 @@ class IndexProcessor: doc_language = None with session_factory.create_session() as session: if document_id: - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) else: document = None - dataset = session.query(Dataset).filter_by(id=dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: raise KnowledgeIndexNodeError(f"Dataset {dataset_id} not found.") diff --git a/api/core/rag/index_processor/processor/paragraph_index_processor.py b/api/core/rag/index_processor/processor/paragraph_index_processor.py index 4a731bf277..f8242efe31 100644 --- a/api/core/rag/index_processor/processor/paragraph_index_processor.py +++ b/api/core/rag/index_processor/processor/paragraph_index_processor.py @@ -3,21 +3,10 @@ import logging import re import uuid -from collections.abc import Mapping -from typing import Any, cast +from typing import Any, TypedDict, cast logger = logging.getLogger(__name__) -from graphon.file import File, FileTransferMethod, FileType, file_manager -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - ImagePromptMessageContent, - PromptMessage, - PromptMessageContentUnionTypes, - TextPromptMessageContent, - UserPromptMessage, -) -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from sqlalchemy import select from core.app.file_access import DatabaseFileAccessController @@ -44,6 +33,16 @@ from core.tools.utils.text_processing_utils import remove_leading_symbols from core.workflow.file_reference import build_file_reference from extensions.ext_database import db from factories.file_factory import build_from_mapping +from graphon.file import File, FileTransferMethod, FileType, file_manager +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + ImagePromptMessageContent, + PromptMessage, + PromptMessageContentUnionTypes, + TextPromptMessageContent, + UserPromptMessage, +) +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from libs import helper from models import UploadFile from models.account import Account @@ -55,6 +54,12 @@ from services.summary_index_service import SummaryIndexService _file_access_controller = DatabaseFileAccessController() +class ParagraphFormatPreviewDict(TypedDict): + chunk_structure: str + preview: list[dict[str, Any]] + total_segments: int + + class ParagraphIndexProcessor(BaseIndexProcessor): def extract(self, extract_setting: ExtractSetting, **kwargs) -> list[Document]: text_docs = ExtractProcessor.extract( @@ -266,16 +271,17 @@ class ParagraphIndexProcessor(BaseIndexProcessor): keyword = Keyword(dataset) keyword.add_texts(documents) - def format_preview(self, chunks: Any) -> Mapping[str, Any]: + def format_preview(self, chunks: Any) -> ParagraphFormatPreviewDict: if isinstance(chunks, list): preview = [] for content in chunks: preview.append({"content": content}) - return { + result: ParagraphFormatPreviewDict = { "chunk_structure": IndexStructureType.PARAGRAPH_INDEX, "preview": preview, "total_segments": len(chunks), } + return result else: raise ValueError("Chunks is not a list") diff --git a/api/core/rag/index_processor/processor/parent_child_index_processor.py b/api/core/rag/index_processor/processor/parent_child_index_processor.py index 53596b5de8..ba277d5018 100644 --- a/api/core/rag/index_processor/processor/parent_child_index_processor.py +++ b/api/core/rag/index_processor/processor/parent_child_index_processor.py @@ -3,8 +3,7 @@ import json import logging import uuid -from collections.abc import Mapping -from typing import Any +from typing import Any, TypedDict from sqlalchemy import delete, select @@ -36,6 +35,13 @@ from services.summary_index_service import SummaryIndexService logger = logging.getLogger(__name__) +class ParentChildFormatPreviewDict(TypedDict): + chunk_structure: str + parent_mode: str + preview: list[dict[str, Any]] + total_segments: int + + class ParentChildIndexProcessor(BaseIndexProcessor): def extract(self, extract_setting: ExtractSetting, **kwargs) -> list[Document]: text_docs = ExtractProcessor.extract( @@ -153,14 +159,12 @@ class ParentChildIndexProcessor(BaseIndexProcessor): if node_ids: # Find segments by index_node_id with session_factory.create_session() as session: - segments = ( - session.query(DocumentSegment) - .filter( + segments = session.scalars( + select(DocumentSegment).where( DocumentSegment.dataset_id == dataset.id, DocumentSegment.index_node_id.in_(node_ids), ) - .all() - ) + ).all() segment_ids = [segment.id for segment in segments] if segment_ids: SummaryIndexService.delete_summaries_for_segments(dataset, segment_ids) @@ -351,17 +355,18 @@ class ParentChildIndexProcessor(BaseIndexProcessor): if all_multimodal_documents and dataset.is_multimodal: vector.create_multimodal(all_multimodal_documents) - def format_preview(self, chunks: Any) -> Mapping[str, Any]: + def format_preview(self, chunks: Any) -> ParentChildFormatPreviewDict: parent_childs = ParentChildStructureChunk.model_validate(chunks) preview = [] for parent_child in parent_childs.parent_child_chunks: preview.append({"content": parent_child.parent_content, "child_chunks": parent_child.child_contents}) - return { + result: ParentChildFormatPreviewDict = { "chunk_structure": IndexStructureType.PARENT_CHILD_INDEX, "parent_mode": parent_childs.parent_mode, "preview": preview, "total_segments": len(parent_childs.parent_child_chunks), } + return result def generate_summary_preview( self, diff --git a/api/core/rag/index_processor/processor/qa_index_processor.py b/api/core/rag/index_processor/processor/qa_index_processor.py index 273ea0f852..d3f311b08e 100644 --- a/api/core/rag/index_processor/processor/qa_index_processor.py +++ b/api/core/rag/index_processor/processor/qa_index_processor.py @@ -4,11 +4,11 @@ import logging import re import threading import uuid -from collections.abc import Mapping -from typing import Any +from typing import Any, TypedDict import pandas as pd from flask import Flask, current_app +from sqlalchemy import select from werkzeug.datastructures import FileStorage from core.db.session_factory import session_factory @@ -36,6 +36,12 @@ from services.summary_index_service import SummaryIndexService logger = logging.getLogger(__name__) +class QAFormatPreviewDict(TypedDict): + chunk_structure: str + qa_preview: list[dict[str, Any]] + total_segments: int + + class QAIndexProcessor(BaseIndexProcessor): def extract(self, extract_setting: ExtractSetting, **kwargs) -> list[Document]: text_docs = ExtractProcessor.extract( @@ -158,14 +164,12 @@ class QAIndexProcessor(BaseIndexProcessor): if node_ids: # Find segments by index_node_id with session_factory.create_session() as session: - segments = ( - session.query(DocumentSegment) - .filter( + segments = session.scalars( + select(DocumentSegment).where( DocumentSegment.dataset_id == dataset.id, DocumentSegment.index_node_id.in_(node_ids), ) - .all() - ) + ).all() segment_ids = [segment.id for segment in segments] if segment_ids: SummaryIndexService.delete_summaries_for_segments(dataset, segment_ids) @@ -230,16 +234,17 @@ class QAIndexProcessor(BaseIndexProcessor): else: raise ValueError("Indexing technique must be high quality.") - def format_preview(self, chunks: Any) -> Mapping[str, Any]: + def format_preview(self, chunks: Any) -> QAFormatPreviewDict: qa_chunks = QAStructureChunk.model_validate(chunks) preview = [] for qa_chunk in qa_chunks.qa_chunks: preview.append({"question": qa_chunk.question, "answer": qa_chunk.answer}) - return { + result: QAFormatPreviewDict = { "chunk_structure": IndexStructureType.QA_INDEX, "qa_preview": preview, "total_segments": len(qa_chunks.qa_chunks), } + return result def generate_summary_preview( self, diff --git a/api/core/rag/models/document.py b/api/core/rag/models/document.py index 087736d0b0..4ebf095904 100644 --- a/api/core/rag/models/document.py +++ b/api/core/rag/models/document.py @@ -2,9 +2,10 @@ from abc import ABC, abstractmethod from collections.abc import Sequence from typing import Any -from graphon.file import File from pydantic import BaseModel, Field +from graphon.file import File + class ChildDocument(BaseModel): """Class for storing a piece of text and associated metadata.""" diff --git a/api/core/rag/rerank/rerank_model.py b/api/core/rag/rerank/rerank_model.py index 8283be19f9..bce08f998f 100644 --- a/api/core/rag/rerank/rerank_model.py +++ b/api/core/rag/rerank/rerank_model.py @@ -1,8 +1,5 @@ import base64 -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.rerank_entities import RerankResult - from core.model_manager import ModelInstance, ModelManager from core.rag.index_processor.constant.doc_type import DocType from core.rag.index_processor.constant.query_type import QueryType @@ -10,6 +7,8 @@ from core.rag.models.document import Document from core.rag.rerank.rerank_base import BaseRerankRunner from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult from models.model import UploadFile @@ -123,7 +122,7 @@ class RerankModelRunner(BaseRerankRunner): :param query_type: query type :return: rerank result """ - docs = [] + docs: list[MultimodalRerankInput] = [] doc_ids = set() unique_documents = [] for document in documents: @@ -138,26 +137,28 @@ class RerankModelRunner(BaseRerankRunner): if upload_file: blob = storage.load_once(upload_file.key) document_file_base64 = base64.b64encode(blob).decode() - document_file_dict = { - "content": document_file_base64, - "content_type": document.metadata["doc_type"], - } - docs.append(document_file_dict) + docs.append( + MultimodalRerankInput( + content=document_file_base64, + content_type=document.metadata["doc_type"], + ) + ) else: - document_text_dict = { - "content": document.page_content, - "content_type": document.metadata.get("doc_type") or DocType.TEXT, - } - docs.append(document_text_dict) + docs.append( + MultimodalRerankInput( + content=document.page_content, + content_type=document.metadata.get("doc_type") or DocType.TEXT, + ) + ) doc_ids.add(document.metadata["doc_id"]) unique_documents.append(document) elif document.provider == "external": if document not in unique_documents: docs.append( - { - "content": document.page_content, - "content_type": document.metadata.get("doc_type") or DocType.TEXT, - } + MultimodalRerankInput( + content=document.page_content, + content_type=document.metadata.get("doc_type") or DocType.TEXT, + ) ) unique_documents.append(document) @@ -171,12 +172,12 @@ class RerankModelRunner(BaseRerankRunner): if upload_file: blob = storage.load_once(upload_file.key) file_query = base64.b64encode(blob).decode() - file_query_dict = { - "content": file_query, - "content_type": DocType.IMAGE, - } + file_query_input = MultimodalRerankInput( + content=file_query, + content_type=DocType.IMAGE, + ) rerank_result = self.rerank_model_instance.invoke_multimodal_rerank( - query=file_query_dict, docs=docs, score_threshold=score_threshold, top_n=top_n + query=file_query_input, docs=docs, score_threshold=score_threshold, top_n=top_n ) return rerank_result, unique_documents else: diff --git a/api/core/rag/rerank/weight_rerank.py b/api/core/rag/rerank/weight_rerank.py index 49123e13d0..d0732b269a 100644 --- a/api/core/rag/rerank/weight_rerank.py +++ b/api/core/rag/rerank/weight_rerank.py @@ -2,7 +2,6 @@ import math from collections import Counter import numpy as np -from graphon.model_runtime.entities.model_entities import ModelType from core.model_manager import ModelManager from core.rag.datasource.keyword.jieba.jieba_keyword_table_handler import JiebaKeywordTableHandler @@ -12,6 +11,7 @@ from core.rag.index_processor.constant.query_type import QueryType from core.rag.models.document import Document from core.rag.rerank.entity.weight import VectorSetting, Weights from core.rag.rerank.rerank_base import BaseRerankRunner +from graphon.model_runtime.entities.model_entities import ModelType class WeightRerankRunner(BaseRerankRunner): diff --git a/api/core/rag/retrieval/dataset_retrieval.py b/api/core/rag/retrieval/dataset_retrieval.py index 4e9b53b83e..1453fe020b 100644 --- a/api/core/rag/retrieval/dataset_retrieval.py +++ b/api/core/rag/retrieval/dataset_retrieval.py @@ -9,13 +9,8 @@ from collections.abc import Generator, Mapping from typing import Any, Union, cast from flask import Flask, current_app -from graphon.file import File, FileTransferMethod, FileType -from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel -from sqlalchemy import and_, func, literal, or_, select -from sqlalchemy.orm import Session +from sqlalchemy import and_, func, literal, or_, select, update +from sqlalchemy.orm import sessionmaker from core.app.app_config.entities import ( DatasetEntity, @@ -69,6 +64,11 @@ from core.workflow.nodes.knowledge_retrieval.retrieval import ( ) from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.file import File, FileTransferMethod, FileType +from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from libs.helper import parse_uuid_str_or_none from libs.json_in_md_parser import parse_and_check_json_markdown from models import UploadFile @@ -276,8 +276,8 @@ class DatasetRetrieval: document_ids = [i.segment.document_id for i in records] with session_factory.create_session() as session: - datasets = session.query(Dataset).where(Dataset.id.in_(dataset_ids)).all() - documents = session.query(DatasetDocument).where(DatasetDocument.id.in_(document_ids)).all() + datasets = session.scalars(select(Dataset).where(Dataset.id.in_(dataset_ids))).all() + documents = session.scalars(select(DatasetDocument).where(DatasetDocument.id.in_(document_ids))).all() dataset_map = {i.id: i for i in datasets} document_map = {i.id: i for i in documents} @@ -875,7 +875,11 @@ class DatasetRetrieval: return retrieval_resource_list def _on_retrieval_end( - self, flask_app: Flask, documents: list[Document], message_id: str | None = None, timer: dict | None = None + self, + flask_app: Flask, + documents: list[Document], + message_id: str | None = None, + timer: dict[str, Any] | None = None, ): """Handle retrieval end.""" with flask_app.app_context(): @@ -884,7 +888,7 @@ class DatasetRetrieval: self._send_trace_task(message_id, documents, timer) return - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: # Collect all document_ids and batch fetch DatasetDocuments document_ids = { doc.metadata["document_id"] @@ -971,15 +975,16 @@ class DatasetRetrieval: # Batch update hit_count for all segments if segment_ids_to_update: - session.query(DocumentSegment).where(DocumentSegment.id.in_(segment_ids_to_update)).update( - {DocumentSegment.hit_count: DocumentSegment.hit_count + 1}, - synchronize_session=False, + session.execute( + update(DocumentSegment) + .where(DocumentSegment.id.in_(segment_ids_to_update)) + .values(hit_count=DocumentSegment.hit_count + 1) + .execution_options(synchronize_session=False) ) - session.commit() self._send_trace_task(message_id, documents, timer) - def _send_trace_task(self, message_id: str | None, documents: list[Document], timer: dict | None): + def _send_trace_task(self, message_id: str | None, documents: list[Document], timer: dict[str, Any] | None): """Send trace task if trace manager is available.""" trace_manager: TraceQueueManager | None = ( self.application_generate_entity.trace_manager if self.application_generate_entity else None @@ -1141,7 +1146,7 @@ class DatasetRetrieval: invoke_from: InvokeFrom, hit_callback: DatasetIndexToolCallbackHandler, user_id: str, - inputs: dict, + inputs: dict[str, Any], ) -> list[DatasetRetrieverBaseTool] | None: """ A dataset tool is a tool that can be used to retrieve information from a dataset @@ -1336,7 +1341,7 @@ class DatasetRetrieval: metadata_filtering_mode: str, metadata_model_config: ModelConfig, metadata_filtering_conditions: MetadataFilteringCondition | None, - inputs: dict, + inputs: dict[str, Any], ) -> tuple[dict[str, list[str]] | None, MetadataFilteringCondition | None]: document_query = select(DatasetDocument).where( DatasetDocument.dataset_id.in_(dataset_ids), @@ -1416,7 +1421,7 @@ class DatasetRetrieval: metadata_filter_document_ids[document.dataset_id].append(document.id) # type: ignore return metadata_filter_document_ids, metadata_condition - def _replace_metadata_filter_value(self, text: str, inputs: dict) -> str: + def _replace_metadata_filter_value(self, text: str, inputs: dict[str, Any]) -> str: if not inputs: return text @@ -1823,7 +1828,7 @@ class DatasetRetrieval: def _get_available_datasets(self, tenant_id: str, dataset_ids: list[str]) -> list[Dataset]: with session_factory.create_session() as session: subquery = ( - session.query(DocumentModel.dataset_id, func.count(DocumentModel.id).label("available_document_count")) + select(DocumentModel.dataset_id, func.count(DocumentModel.id).label("available_document_count")) .where( DocumentModel.indexing_status == "completed", DocumentModel.enabled == True, @@ -1835,13 +1840,12 @@ class DatasetRetrieval: .subquery() ) - results = ( - session.query(Dataset) + results = session.scalars( + select(Dataset) .outerjoin(subquery, Dataset.id == subquery.c.dataset_id) .where(Dataset.tenant_id == tenant_id, Dataset.id.in_(dataset_ids)) .where((subquery.c.available_document_count > 0) | (Dataset.provider == "external")) - .all() - ) + ).all() available_datasets = [] for dataset in results: diff --git a/api/core/rag/retrieval/output_parser/react_output.py b/api/core/rag/retrieval/output_parser/react_output.py index 9a14d41716..29abae4280 100644 --- a/api/core/rag/retrieval/output_parser/react_output.py +++ b/api/core/rag/retrieval/output_parser/react_output.py @@ -1,7 +1,7 @@ from __future__ import annotations from dataclasses import dataclass -from typing import NamedTuple, Union +from typing import Any, NamedTuple, Union @dataclass @@ -10,7 +10,7 @@ class ReactAction: tool: str """The name of the Tool to execute.""" - tool_input: Union[str, dict] + tool_input: Union[str, dict[str, Any]] """The input to pass in to the Tool.""" log: str """Additional information to log about the action.""" @@ -19,7 +19,7 @@ class ReactAction: class ReactFinish(NamedTuple): """The final return value of an ReactFinish.""" - return_values: dict + return_values: dict[str, Any] """Dictionary of return values.""" log: str """Additional information to log about the return value""" diff --git a/api/core/rag/retrieval/router/multi_dataset_function_call_router.py b/api/core/rag/retrieval/router/multi_dataset_function_call_router.py index dce7b6226c..e617a9660e 100644 --- a/api/core/rag/retrieval/router/multi_dataset_function_call_router.py +++ b/api/core/rag/retrieval/router/multi_dataset_function_call_router.py @@ -1,10 +1,9 @@ from typing import Union -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import PromptMessageTool, SystemPromptMessage, UserPromptMessage - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.model_manager import ModelInstance +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import PromptMessageTool, SystemPromptMessage, UserPromptMessage class FunctionCallMultiDatasetRouter: diff --git a/api/core/rag/retrieval/router/multi_dataset_react_route.py b/api/core/rag/retrieval/router/multi_dataset_react_route.py index dd280cdf6a..21a9d04f7f 100644 --- a/api/core/rag/retrieval/router/multi_dataset_react_route.py +++ b/api/core/rag/retrieval/router/multi_dataset_react_route.py @@ -1,9 +1,5 @@ from collections.abc import Generator, Sequence -from typing import Union - -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool -from graphon.model_runtime.entities.model_entities import ModelType +from typing import Any, Union from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.app.llm import deduct_llm_quota @@ -12,6 +8,9 @@ from core.prompt.advanced_prompt_transform import AdvancedPromptTransform from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate from core.rag.retrieval.output_parser.react_output import ReactAction from core.rag.retrieval.output_parser.structured_chat import StructuredChatOutputParser +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool +from graphon.model_runtime.entities.model_entities import ModelType PREFIX = """Respond to the human as helpfully and accurately as possible. You have access to the following tools:""" @@ -139,7 +138,7 @@ class ReactMultiDatasetRouter: def _invoke_llm( self, - completion_param: dict, + completion_param: dict[str, Any], model_instance: ModelInstance, prompt_messages: list[PromptMessage], stop: list[str], diff --git a/api/core/rag/splitter/fixed_text_splitter.py b/api/core/rag/splitter/fixed_text_splitter.py index 3383c7f3bd..2581c354dd 100644 --- a/api/core/rag/splitter/fixed_text_splitter.py +++ b/api/core/rag/splitter/fixed_text_splitter.py @@ -7,10 +7,9 @@ import re from collections.abc import Collection from typing import Any, Literal -from graphon.model_runtime.model_providers.__base.tokenizers.gpt2_tokenizer import GPT2Tokenizer - from core.model_manager import ModelInstance from core.rag.splitter.text_splitter import RecursiveCharacterTextSplitter +from graphon.model_runtime.model_providers.__base.tokenizers.gpt2_tokenizer import GPT2Tokenizer class EnhanceRecursiveCharacterTextSplitter(RecursiveCharacterTextSplitter): diff --git a/api/core/rag/splitter/text_splitter.py b/api/core/rag/splitter/text_splitter.py index 8977611f93..7f2117e2dd 100644 --- a/api/core/rag/splitter/text_splitter.py +++ b/api/core/rag/splitter/text_splitter.py @@ -63,7 +63,7 @@ class TextSplitter(BaseDocumentTransformer, ABC): def split_text(self, text: str) -> list[str]: """Split text into multiple components.""" - def create_documents(self, texts: list[str], metadatas: list[dict] | None = None) -> list[Document]: + def create_documents(self, texts: list[str], metadatas: list[dict[str, Any]] | None = None) -> list[Document]: """Create documents from a list of texts.""" _metadatas = metadatas or [{}] * len(texts) documents = [] diff --git a/api/core/rag/summary_index/summary_index.py b/api/core/rag/summary_index/summary_index.py index 6f120bd471..bff5f85dec 100644 --- a/api/core/rag/summary_index/summary_index.py +++ b/api/core/rag/summary_index/summary_index.py @@ -1,6 +1,8 @@ import concurrent.futures import logging +from sqlalchemy import select + from core.db.session_factory import session_factory from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict @@ -21,7 +23,7 @@ class SummaryIndex: ) -> None: if is_preview: with session_factory.create_session() as session: - dataset = session.query(Dataset).filter_by(id=dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset or dataset.indexing_technique != IndexTechniqueType.HIGH_QUALITY: return @@ -34,32 +36,31 @@ class SummaryIndex: if not document_id: return - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) # Skip qa_model documents if document is None or document.doc_form == "qa_model": return - query = session.query(DocumentSegment).filter_by( - dataset_id=dataset_id, - document_id=document_id, - status="completed", - enabled=True, - ) - segments = query.all() + segments = session.scalars( + select(DocumentSegment).where( + DocumentSegment.dataset_id == dataset_id, + DocumentSegment.document_id == document_id, + DocumentSegment.status == "completed", + DocumentSegment.enabled == True, + ) + ).all() segment_ids = [segment.id for segment in segments] if not segment_ids: return - existing_summaries = ( - session.query(DocumentSegmentSummary) - .filter( + existing_summaries = session.scalars( + select(DocumentSegmentSummary).where( DocumentSegmentSummary.chunk_id.in_(segment_ids), DocumentSegmentSummary.dataset_id == dataset_id, DocumentSegmentSummary.status == "completed", ) - .all() - ) + ).all() completed_summary_segment_ids = {i.chunk_id for i in existing_summaries} # Preview mode should process segments that are MISSING completed summaries pending_segment_ids = [sid for sid in segment_ids if sid not in completed_summary_segment_ids] @@ -73,7 +74,7 @@ class SummaryIndex: def process_segment(segment_id: str) -> None: """Process a single segment in a thread with a fresh DB session.""" with session_factory.create_session() as session: - segment = session.query(DocumentSegment).filter_by(id=segment_id).first() + segment = session.scalar(select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1)) if segment is None: return try: diff --git a/api/core/repositories/celery_workflow_execution_repository.py b/api/core/repositories/celery_workflow_execution_repository.py index b07c63fdf0..e87d1cd6b2 100644 --- a/api/core/repositories/celery_workflow_execution_repository.py +++ b/api/core/repositories/celery_workflow_execution_repository.py @@ -7,11 +7,11 @@ providing improved performance by offloading database operations to background w import logging -from graphon.entities import WorkflowExecution from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.factory import WorkflowExecutionRepository +from graphon.entities import WorkflowExecution from libs.helper import extract_tenant_id from models import Account, CreatorUserRole, EndUser from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/repositories/celery_workflow_node_execution_repository.py b/api/core/repositories/celery_workflow_node_execution_repository.py index cdb3af01a8..2451563317 100644 --- a/api/core/repositories/celery_workflow_node_execution_repository.py +++ b/api/core/repositories/celery_workflow_node_execution_repository.py @@ -8,7 +8,6 @@ providing improved performance by offloading database operations to background w import logging from collections.abc import Sequence -from graphon.entities import WorkflowNodeExecution from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker @@ -16,6 +15,7 @@ from core.repositories.factory import ( OrderConfig, WorkflowNodeExecutionRepository, ) +from graphon.entities import WorkflowNodeExecution from libs.helper import extract_tenant_id from models import Account, CreatorUserRole, EndUser from models.workflow import WorkflowNodeExecutionTriggeredFrom diff --git a/api/core/repositories/factory.py b/api/core/repositories/factory.py index ce3ad15759..4e83e70799 100644 --- a/api/core/repositories/factory.py +++ b/api/core/repositories/factory.py @@ -9,11 +9,11 @@ from collections.abc import Sequence from dataclasses import dataclass from typing import Literal, Protocol -from graphon.entities import WorkflowExecution, WorkflowNodeExecution from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from configs import dify_config +from graphon.entities import WorkflowExecution, WorkflowNodeExecution from libs.module_loading import import_string from models import Account, EndUser from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/repositories/human_input_repository.py b/api/core/repositories/human_input_repository.py index 72d9394149..02625e242f 100644 --- a/api/core/repositories/human_input_repository.py +++ b/api/core/repositories/human_input_repository.py @@ -4,8 +4,6 @@ from collections.abc import Mapping, Sequence from datetime import datetime from typing import Any, Protocol -from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from sqlalchemy import select from sqlalchemy.orm import Session, selectinload @@ -19,6 +17,8 @@ from core.workflow.human_input_compat import ( InteractiveSurfaceDeliveryMethod, is_human_input_webapp_enabled, ) +from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 from models.account import Account, TenantAccountJoin diff --git a/api/core/repositories/sqlalchemy_workflow_execution_repository.py b/api/core/repositories/sqlalchemy_workflow_execution_repository.py index d74cc8f231..6be3902317 100644 --- a/api/core/repositories/sqlalchemy_workflow_execution_repository.py +++ b/api/core/repositories/sqlalchemy_workflow_execution_repository.py @@ -5,13 +5,13 @@ SQLAlchemy implementation of the WorkflowExecutionRepository. import json import logging -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus, WorkflowType -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.factory import WorkflowExecutionRepository +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus, WorkflowType +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from models import ( Account, diff --git a/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py b/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py index 13e885672a..b036687bc9 100644 --- a/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py +++ b/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py @@ -10,10 +10,6 @@ from concurrent.futures import ThreadPoolExecutor from typing import Any import psycopg2.errors -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import UnaryExpression, asc, desc, select from sqlalchemy.engine import Engine from sqlalchemy.exc import IntegrityError @@ -23,6 +19,10 @@ from tenacity import before_sleep_log, retry, retry_if_exception, stop_after_att from configs import dify_config from core.repositories.factory import OrderConfig, WorkflowNodeExecutionRepository from extensions.ext_storage import storage +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from libs.uuid_utils import uuidv7 from models import ( diff --git a/api/core/schemas/resolver.py b/api/core/schemas/resolver.py index 6e26664ac2..e267c1abd9 100644 --- a/api/core/schemas/resolver.py +++ b/api/core/schemas/resolver.py @@ -254,7 +254,7 @@ def resolve_dify_schema_refs( return resolver.resolve(schema) -def _remove_metadata_fields(schema: dict) -> dict: +def _remove_metadata_fields(schema: dict[str, Any]) -> dict[str, Any]: """ Remove metadata fields from schema that shouldn't be included in resolved output diff --git a/api/core/telemetry/gateway.py b/api/core/telemetry/gateway.py index 7b013d0563..812edeeb14 100644 --- a/api/core/telemetry/gateway.py +++ b/api/core/telemetry/gateway.py @@ -89,7 +89,7 @@ def _get_case_routing() -> dict[TelemetryCase, CaseRoute]: return _case_routing -def __getattr__(name: str) -> dict: +def __getattr__(name: str) -> Any: """Lazy module-level access to routing tables.""" if name == "CASE_ROUTING": return _get_case_routing() diff --git a/api/core/tools/__base/tool.py b/api/core/tools/__base/tool.py index 7bb2cdb876..ab0f73a9a2 100644 --- a/api/core/tools/__base/tool.py +++ b/api/core/tools/__base/tool.py @@ -198,7 +198,7 @@ class Tool(ABC): message=ToolInvokeMessage.TextMessage(text=text), ) - def create_blob_message(self, blob: bytes, meta: dict | None = None) -> ToolInvokeMessage: + def create_blob_message(self, blob: bytes, meta: dict[str, Any] | None = None) -> ToolInvokeMessage: """ create a blob message @@ -212,7 +212,7 @@ class Tool(ABC): meta=meta, ) - def create_json_message(self, object: dict, suppress_output: bool = False) -> ToolInvokeMessage: + def create_json_message(self, object: dict[str, Any], suppress_output: bool = False) -> ToolInvokeMessage: """ create a json message """ diff --git a/api/core/tools/builtin_tool/providers/audio/tools/asr.py b/api/core/tools/builtin_tool/providers/audio/tools/asr.py index e539074303..95660ab93b 100644 --- a/api/core/tools/builtin_tool/providers/audio/tools/asr.py +++ b/api/core/tools/builtin_tool/providers/audio/tools/asr.py @@ -2,15 +2,14 @@ import io from collections.abc import Generator from typing import Any -from graphon.file import FileType -from graphon.file.file_manager import download -from graphon.model_runtime.entities.model_entities import ModelType - from core.model_manager import ModelManager from core.plugin.entities.parameters import PluginParameterOption from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter +from graphon.file import FileType +from graphon.file.file_manager import download +from graphon.model_runtime.entities.model_entities import ModelType from services.model_provider_service import ModelProviderService diff --git a/api/core/tools/builtin_tool/providers/audio/tools/tts.py b/api/core/tools/builtin_tool/providers/audio/tools/tts.py index f49c669fe0..ac3820f1ab 100644 --- a/api/core/tools/builtin_tool/providers/audio/tools/tts.py +++ b/api/core/tools/builtin_tool/providers/audio/tools/tts.py @@ -2,13 +2,12 @@ import io from collections.abc import Generator from typing import Any -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType - from core.model_manager import ModelManager from core.plugin.entities.parameters import PluginParameterOption from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType from services.model_provider_service import ModelProviderService diff --git a/api/core/tools/builtin_tool/tool.py b/api/core/tools/builtin_tool/tool.py index 14af63a962..d41503e1e6 100644 --- a/api/core/tools/builtin_tool/tool.py +++ b/api/core/tools/builtin_tool/tool.py @@ -1,12 +1,11 @@ from __future__ import annotations -from graphon.model_runtime.entities.llm_entities import LLMResult -from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage - from core.tools.__base.tool import Tool from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.tool_entities import ToolProviderType from core.tools.utils.model_invocation_utils import ModelInvocationUtils +from graphon.model_runtime.entities.llm_entities import LLMResult +from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage _SUMMARY_PROMPT = """You are a professional language researcher, you are interested in the language and you can quickly aimed at the main point of an webpage and reproduce it in your own words but diff --git a/api/core/tools/custom_tool/tool.py b/api/core/tools/custom_tool/tool.py index 0a2c37c563..168e5f4493 100644 --- a/api/core/tools/custom_tool/tool.py +++ b/api/core/tools/custom_tool/tool.py @@ -6,7 +6,6 @@ from typing import Any, Union from urllib.parse import urlencode import httpx -from graphon.file.file_manager import download from core.helper import ssrf_proxy from core.tools.__base.tool import Tool @@ -14,6 +13,7 @@ from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.tool_bundle import ApiToolBundle from core.tools.entities.tool_entities import ToolEntity, ToolInvokeMessage, ToolProviderType from core.tools.errors import ToolInvokeError, ToolParameterValidationError, ToolProviderCredentialValidationError +from graphon.file.file_manager import download API_TOOL_DEFAULT_TIMEOUT = ( int(getenv("API_TOOL_DEFAULT_CONNECT_TIMEOUT", "10")), diff --git a/api/core/tools/entities/api_entities.py b/api/core/tools/entities/api_entities.py index d5d3d1b1d9..42a88c0003 100644 --- a/api/core/tools/entities/api_entities.py +++ b/api/core/tools/entities/api_entities.py @@ -2,7 +2,6 @@ from collections.abc import Mapping from datetime import datetime from typing import Any, Literal -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from core.entities.mcp_provider import MCPAuthentication, MCPConfiguration @@ -10,6 +9,7 @@ from core.plugin.entities.plugin_daemon import CredentialType from core.tools.__base.tool import ToolParameter from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderType +from graphon.model_runtime.utils.encoders import jsonable_encoder class ToolApiEntity(BaseModel): @@ -75,22 +75,27 @@ class ToolProviderApiEntity(BaseModel): parameter.pop("input_schema", None) # ------------- optional_fields = self.optional_field("server_url", self.server_url) - if self.type == ToolProviderType.MCP: - optional_fields.update(self.optional_field("updated_at", self.updated_at)) - optional_fields.update(self.optional_field("server_identifier", self.server_identifier)) - optional_fields.update( - self.optional_field( - "configuration", self.configuration.model_dump() if self.configuration else MCPConfiguration() + match self.type: + case ToolProviderType.MCP: + optional_fields.update(self.optional_field("updated_at", self.updated_at)) + optional_fields.update(self.optional_field("server_identifier", self.server_identifier)) + optional_fields.update( + self.optional_field( + "configuration", self.configuration.model_dump() if self.configuration else MCPConfiguration() + ) ) - ) - optional_fields.update( - self.optional_field("authentication", self.authentication.model_dump() if self.authentication else None) - ) - optional_fields.update(self.optional_field("is_dynamic_registration", self.is_dynamic_registration)) - optional_fields.update(self.optional_field("masked_headers", self.masked_headers)) - optional_fields.update(self.optional_field("original_headers", self.original_headers)) - elif self.type == ToolProviderType.WORKFLOW: - optional_fields.update(self.optional_field("workflow_app_id", self.workflow_app_id)) + optional_fields.update( + self.optional_field( + "authentication", self.authentication.model_dump() if self.authentication else None + ) + ) + optional_fields.update(self.optional_field("is_dynamic_registration", self.is_dynamic_registration)) + optional_fields.update(self.optional_field("masked_headers", self.masked_headers)) + optional_fields.update(self.optional_field("original_headers", self.original_headers)) + case ToolProviderType.WORKFLOW: + optional_fields.update(self.optional_field("workflow_app_id", self.workflow_app_id)) + case _: + pass return { "id": self.id, "author": self.author, diff --git a/api/core/tools/entities/tool_bundle.py b/api/core/tools/entities/tool_bundle.py index 10710c4376..4e07b7157a 100644 --- a/api/core/tools/entities/tool_bundle.py +++ b/api/core/tools/entities/tool_bundle.py @@ -1,4 +1,5 @@ from collections.abc import Mapping +from typing import Any from pydantic import BaseModel, Field @@ -26,6 +27,6 @@ class ApiToolBundle(BaseModel): # icon icon: str | None = None # openapi operation - openapi: dict + openapi: dict[str, Any] # output schema output_schema: Mapping[str, object] = Field(default_factory=dict) diff --git a/api/core/tools/entities/tool_entities.py b/api/core/tools/entities/tool_entities.py index 31e879add2..0c77693dde 100644 --- a/api/core/tools/entities/tool_entities.py +++ b/api/core/tools/entities/tool_entities.py @@ -149,7 +149,7 @@ class ToolInvokeMessage(BaseModel): text: str class JsonMessage(BaseModel): - json_object: dict | list + json_object: dict[str, Any] | list[Any] suppress_output: bool = Field(default=False, description="Whether to suppress JSON output in result string") class BlobMessage(BaseModel): @@ -337,7 +337,7 @@ class ToolParameter(PluginParameter): form: ToolParameterForm = Field(..., description="The form of the parameter, schema/form/llm") llm_description: str | None = None # MCP object and array type parameters use this field to store the schema - input_schema: dict | None = None + input_schema: dict[str, Any] | None = None @classmethod def get_simple_instance( @@ -450,6 +450,12 @@ class WorkflowToolParameterConfiguration(BaseModel): form: ToolParameter.ToolParameterForm = Field(..., description="The form of the parameter") +class ToolInvokeMetaDict(TypedDict): + time_cost: float + error: str | None + tool_config: dict[str, Any] | None + + class ToolInvokeMeta(BaseModel): """ Tool invoke meta @@ -457,7 +463,7 @@ class ToolInvokeMeta(BaseModel): time_cost: float = Field(..., description="The time cost of the tool invoke") error: str | None = None - tool_config: dict | None = None + tool_config: dict[str, Any] | None = None @classmethod def empty(cls) -> ToolInvokeMeta: @@ -473,12 +479,13 @@ class ToolInvokeMeta(BaseModel): """ return cls(time_cost=0.0, error=error, tool_config={}) - def to_dict(self): - return { + def to_dict(self) -> ToolInvokeMetaDict: + result: ToolInvokeMetaDict = { "time_cost": self.time_cost, "error": self.error, "tool_config": self.tool_config, } + return result class ToolLabel(BaseModel): diff --git a/api/core/tools/errors.py b/api/core/tools/errors.py index 4c3efd6ff9..2b26832b44 100644 --- a/api/core/tools/errors.py +++ b/api/core/tools/errors.py @@ -38,6 +38,17 @@ class ToolCredentialPolicyViolationError(ValueError): pass +class ApiToolProviderNotFoundError(ValueError): + error_code = "api_tool_provider_not_found" + provider_name: str + tenant_id: str + + def __init__(self, provider_name: str, tenant_id: str): + self.provider_name = provider_name + self.tenant_id = tenant_id + super().__init__(f"api provider {provider_name} does not exist") + + class WorkflowToolHumanInputNotSupportedError(BaseHTTPException): error_code = "workflow_tool_human_input_not_supported" description = "Workflow with Human Input nodes cannot be published as a workflow tool." diff --git a/api/core/tools/mcp_tool/tool.py b/api/core/tools/mcp_tool/tool.py index f6d09472b3..00fc8a8282 100644 --- a/api/core/tools/mcp_tool/tool.py +++ b/api/core/tools/mcp_tool/tool.py @@ -6,8 +6,6 @@ import logging from collections.abc import Generator, Mapping from typing import Any, cast -from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata - from core.mcp.auth_client import MCPClientWithAuthRetry from core.mcp.error import MCPConnectionError from core.mcp.types import ( @@ -23,6 +21,7 @@ from core.tools.__base.tool import Tool from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.tool_entities import ToolEntity, ToolInvokeMessage, ToolProviderType from core.tools.errors import ToolInvokeError +from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata logger = logging.getLogger(__name__) diff --git a/api/core/tools/tool_engine.py b/api/core/tools/tool_engine.py index 685d687d8c..3caacb8706 100644 --- a/api/core/tools/tool_engine.py +++ b/api/core/tools/tool_engine.py @@ -7,7 +7,6 @@ from datetime import UTC, datetime from mimetypes import guess_type from typing import Any, Union, cast -from graphon.file import FileTransferMethod, FileType from yarl import URL from core.app.entities.app_invoke_entities import InvokeFrom @@ -33,6 +32,7 @@ from core.tools.errors import ( from core.tools.utils.message_transformer import ToolFileMessageTransformer, safe_json_value from core.tools.workflow_as_tool.tool import WorkflowTool from extensions.ext_database import db +from graphon.file import FileTransferMethod, FileType from models.enums import CreatorUserRole, MessageFileBelongsTo from models.model import Message, MessageFile @@ -47,7 +47,7 @@ class ToolEngine: @staticmethod def agent_invoke( tool: Tool, - tool_parameters: Union[str, dict], + tool_parameters: Union[str, dict[str, Any]], user_id: str, tenant_id: str, message: Message, @@ -85,7 +85,8 @@ class ToolEngine: invocation_meta_dict: dict[str, ToolInvokeMeta] = {} def message_callback( - invocation_meta_dict: dict, messages: Generator[ToolInvokeMessage | ToolInvokeMeta, None, None] + invocation_meta_dict: dict[str, ToolInvokeMeta], + messages: Generator[ToolInvokeMessage | ToolInvokeMeta, None, None], ): for message in messages: if isinstance(message, ToolInvokeMeta): @@ -200,7 +201,7 @@ class ToolEngine: @staticmethod def _invoke( tool: Tool, - tool_parameters: dict, + tool_parameters: dict[str, Any], user_id: str, conversation_id: str | None = None, app_id: str | None = None, @@ -262,6 +263,8 @@ class ToolEngine: ensure_ascii=False, ) ) + elif response.type == ToolInvokeMessage.MessageType.VARIABLE: + continue else: parts.append(str(response.message)) diff --git a/api/core/tools/tool_file_manager.py b/api/core/tools/tool_file_manager.py index 7ac29cf069..b3424cd9a5 100644 --- a/api/core/tools/tool_file_manager.py +++ b/api/core/tools/tool_file_manager.py @@ -6,17 +6,17 @@ import os import time from collections.abc import Generator from mimetypes import guess_extension, guess_type -from typing import Union from uuid import uuid4 import httpx -from graphon.file import File, FileTransferMethod, get_file_type_by_mime_type +from sqlalchemy import select from configs import dify_config from core.db.session_factory import session_factory from core.helper import ssrf_proxy from core.workflow.file_reference import build_file_reference from extensions.ext_storage import storage +from graphon.file import File, FileTransferMethod, get_file_type_by_mime_type from models.model import MessageFile from models.tools import ToolFile @@ -157,7 +157,7 @@ class ToolFileManager: return tool_file - def get_file_binary(self, id: str) -> Union[tuple[bytes, str], None]: + def get_file_binary(self, id: str) -> tuple[bytes, str] | None: """ get file binary @@ -166,13 +166,7 @@ class ToolFileManager: :return: the binary of the file, mime type """ with session_factory.create_session() as session: - tool_file: ToolFile | None = ( - session.query(ToolFile) - .where( - ToolFile.id == id, - ) - .first() - ) + tool_file: ToolFile | None = session.scalar(select(ToolFile).where(ToolFile.id == id).limit(1)) if not tool_file: return None @@ -181,7 +175,7 @@ class ToolFileManager: return blob, tool_file.mimetype - def get_file_binary_by_message_file_id(self, id: str) -> Union[tuple[bytes, str], None]: + def get_file_binary_by_message_file_id(self, id: str) -> tuple[bytes, str] | None: """ get file binary @@ -190,13 +184,7 @@ class ToolFileManager: :return: the binary of the file, mime type """ with session_factory.create_session() as session: - message_file: MessageFile | None = ( - session.query(MessageFile) - .where( - MessageFile.id == id, - ) - .first() - ) + message_file: MessageFile | None = session.scalar(select(MessageFile).where(MessageFile.id == id).limit(1)) # Check if message_file is not None if message_file is not None: @@ -210,13 +198,7 @@ class ToolFileManager: else: tool_file_id = None - tool_file: ToolFile | None = ( - session.query(ToolFile) - .where( - ToolFile.id == tool_file_id, - ) - .first() - ) + tool_file: ToolFile | None = session.scalar(select(ToolFile).where(ToolFile.id == tool_file_id).limit(1)) if not tool_file: return None @@ -234,13 +216,7 @@ class ToolFileManager: :return: the binary of the file, mime type """ with session_factory.create_session() as session: - tool_file: ToolFile | None = ( - session.query(ToolFile) - .where( - ToolFile.id == tool_file_id, - ) - .first() - ) + tool_file: ToolFile | None = session.scalar(select(ToolFile).where(ToolFile.id == tool_file_id).limit(1)) if not tool_file: return None, None diff --git a/api/core/tools/tool_label_manager.py b/api/core/tools/tool_label_manager.py index 58190d1089..d8969a3391 100644 --- a/api/core/tools/tool_label_manager.py +++ b/api/core/tools/tool_label_manager.py @@ -1,4 +1,5 @@ from sqlalchemy import delete, select +from sqlalchemy.orm import Session, sessionmaker from core.tools.__base.tool_provider import ToolProviderController from core.tools.builtin_tool.provider import BuiltinToolProviderController @@ -19,10 +20,18 @@ class ToolLabelManager: return list(set(tool_labels)) @classmethod - def update_tool_labels(cls, controller: ToolProviderController, labels: list[str]): + def update_tool_labels( + cls, controller: ToolProviderController, labels: list[str], session: Session | None = None + ) -> None: """ Update tool labels + + :param controller: tool provider controller + :param labels: list of tool labels + :param session: database session, if None, a new session will be created + :return: None """ + labels = cls.filter_tool_labels(labels) if isinstance(controller, ApiToolProviderController | WorkflowToolProviderController): @@ -30,26 +39,46 @@ class ToolLabelManager: else: raise ValueError("Unsupported tool type") + if session is not None: + cls._update_tool_labels_logics(session, provider_id, controller, labels) + else: + with sessionmaker(db.engine).begin() as _session: + cls._update_tool_labels_logics(_session, provider_id, controller, labels) + + @classmethod + def _update_tool_labels_logics( + cls, session: Session, provider_id: str, controller: ToolProviderController, labels: list[str] + ) -> None: + """ + Update tool labels logics + + :param session: database session + :param provider_id: tool provider ID + :param controller: tool provider controller + :param labels: list of tool labels + :return: None + """ + # delete old labels - db.session.execute(delete(ToolLabelBinding).where(ToolLabelBinding.tool_id == provider_id)) + _ = session.execute( + delete(ToolLabelBinding).where( + ToolLabelBinding.tool_id == provider_id, ToolLabelBinding.tool_type == controller.provider_type + ) + ) # insert new labels for label in labels: - db.session.add( - ToolLabelBinding( - tool_id=provider_id, - tool_type=controller.provider_type, - label_name=label, - ) - ) - - db.session.commit() + session.add(ToolLabelBinding(tool_id=provider_id, tool_type=controller.provider_type, label_name=label)) @classmethod def get_tool_labels(cls, controller: ToolProviderController) -> list[str]: """ Get tool labels + + :param controller: tool provider controller + :return: list of tool labels (str) """ + if isinstance(controller, ApiToolProviderController | WorkflowToolProviderController): provider_id = controller.provider_id elif isinstance(controller, BuiltinToolProviderController): @@ -60,9 +89,11 @@ class ToolLabelManager: ToolLabelBinding.tool_id == provider_id, ToolLabelBinding.tool_type == controller.provider_type, ) - labels = db.session.scalars(stmt).all() - return list(labels) + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + labels: list[str] = list(_session.scalars(stmt).all()) + + return labels @classmethod def get_tools_labels(cls, tool_providers: list[ToolProviderController]) -> dict[str, list[str]]: @@ -78,16 +109,22 @@ class ToolLabelManager: if not tool_providers: return {} + provider_ids: list[str] = [] + provider_types: set[str] = set() + for controller in tool_providers: if not isinstance(controller, ApiToolProviderController | WorkflowToolProviderController): raise ValueError("Unsupported tool type") - - provider_ids = [] - for controller in tool_providers: - assert isinstance(controller, ApiToolProviderController | WorkflowToolProviderController) provider_ids.append(controller.provider_id) + provider_types.add(controller.provider_type) - labels = db.session.scalars(select(ToolLabelBinding).where(ToolLabelBinding.tool_id.in_(provider_ids))).all() + labels: list[ToolLabelBinding] = [] + + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + stmt = select(ToolLabelBinding).where( + ToolLabelBinding.tool_id.in_(provider_ids), ToolLabelBinding.tool_type.in_(list(provider_types)) + ) + labels = list(_session.scalars(stmt).all()) tool_labels: dict[str, list[str]] = {label.tool_id: [] for label in labels} diff --git a/api/core/tools/tool_manager.py b/api/core/tools/tool_manager.py index d45d45c520..f4588904d3 100644 --- a/api/core/tools/tool_manager.py +++ b/api/core/tools/tool_manager.py @@ -5,10 +5,9 @@ import time from collections.abc import Generator, Mapping from os import listdir, path from threading import Lock -from typing import TYPE_CHECKING, Any, Literal, Optional, Protocol, Union, cast +from typing import TYPE_CHECKING, Any, Literal, Protocol, cast import sqlalchemy as sa -from graphon.runtime import VariablePool from pydantic import TypeAdapter from sqlalchemy import select from sqlalchemy.orm import Session @@ -29,14 +28,13 @@ from core.tools.plugin_tool.tool import PluginTool from core.tools.utils.uuid_utils import is_valid_uuid from core.tools.workflow_as_tool.provider import WorkflowToolProviderController from extensions.ext_database import db +from graphon.runtime import VariablePool from models.provider_ids import ToolProviderID from services.tools.mcp_tools_manage_service import MCPToolManageService if TYPE_CHECKING: pass -from graphon.model_runtime.utils.encoders import jsonable_encoder - from core.agent.entities import AgentToolEntity from core.app.entities.app_invoke_entities import InvokeFrom from core.helper.module_import_helper import load_single_subclass_from_source @@ -62,6 +60,7 @@ from core.tools.tool_label_manager import ToolLabelManager from core.tools.utils.configuration import ToolParameterConfigurationManager from core.tools.utils.encryption import create_provider_encrypter, create_tool_provider_encrypter from core.tools.workflow_as_tool.tool import WorkflowTool +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.tools import ApiToolProvider, BuiltinToolProvider, WorkflowToolProvider from services.tools.tools_transform_service import ToolTransformService @@ -100,7 +99,7 @@ class ToolManager: _builtin_provider_lock = Lock() _hardcoded_providers: dict[str, BuiltinToolProviderController] = {} _builtin_providers_loaded = False - _builtin_tools_labels: dict[str, Union[I18nObject, None]] = {} + _builtin_tools_labels: dict[str, I18nObject | None] = {} @classmethod def get_hardcoded_provider(cls, provider: str) -> BuiltinToolProviderController: @@ -190,7 +189,7 @@ class ToolManager: invoke_from: InvokeFrom = InvokeFrom.DEBUGGER, tool_invoke_from: ToolInvokeFrom = ToolInvokeFrom.AGENT, credential_id: str | None = None, - ) -> Union[BuiltinTool, PluginTool, ApiTool, WorkflowTool, MCPTool]: + ) -> BuiltinTool | PluginTool | ApiTool | WorkflowTool | MCPTool: """ get the tool runtime @@ -205,16 +204,160 @@ class ToolManager: :return: the tool """ - if provider_type == ToolProviderType.BUILT_IN: - # check if the builtin tool need credentials - provider_controller = cls.get_builtin_provider(provider_id, tenant_id) + match provider_type: + case ToolProviderType.BUILT_IN: + provider_controller = cls.get_builtin_provider(provider_id, tenant_id) - builtin_tool = provider_controller.get_tool(tool_name) - if not builtin_tool: - raise ToolProviderNotFoundError(f"builtin tool {tool_name} not found") + builtin_tool = provider_controller.get_tool(tool_name) + if not builtin_tool: + raise ToolProviderNotFoundError(f"builtin tool {tool_name} not found") + + if not provider_controller.need_credentials: + return builtin_tool.fork_tool_runtime( + runtime=ToolRuntime( + tenant_id=tenant_id, + user_id=user_id, + credentials={}, + invoke_from=invoke_from, + tool_invoke_from=tool_invoke_from, + ) + ) + builtin_provider = None + if isinstance(provider_controller, PluginToolProviderController): + provider_id_entity = ToolProviderID(provider_id) + if is_valid_uuid(credential_id): + try: + builtin_provider_stmt = select(BuiltinToolProvider).where( + BuiltinToolProvider.tenant_id == tenant_id, + BuiltinToolProvider.id == credential_id, + ) + builtin_provider = db.session.scalar(builtin_provider_stmt) + except Exception as e: + builtin_provider = None + logger.info("Error getting builtin provider %s:%s", credential_id, e, exc_info=True) + if builtin_provider is None: + raise ToolProviderNotFoundError(f"provider has been deleted: {credential_id}") + + if builtin_provider is None: + with Session(db.engine) as session: + builtin_provider = session.scalar( + sa.select(BuiltinToolProvider) + .where( + BuiltinToolProvider.tenant_id == tenant_id, + (BuiltinToolProvider.provider == str(provider_id_entity)) + | (BuiltinToolProvider.provider == provider_id_entity.provider_name), + ) + .order_by(BuiltinToolProvider.is_default.desc(), BuiltinToolProvider.created_at.asc()) + ) + if builtin_provider is None: + raise ToolProviderNotFoundError(f"no default provider for {provider_id}") + else: + builtin_provider = db.session.scalar( + select(BuiltinToolProvider) + .where( + BuiltinToolProvider.tenant_id == tenant_id, (BuiltinToolProvider.provider == provider_id) + ) + .order_by(BuiltinToolProvider.is_default.desc(), BuiltinToolProvider.created_at.asc()) + .limit(1) + ) + + if builtin_provider is None: + raise ToolProviderNotFoundError(f"builtin provider {provider_id} not found") + + from core.helper.credential_utils import check_credential_policy_compliance + + check_credential_policy_compliance( + credential_id=builtin_provider.id, + provider=provider_id, + credential_type=PluginCredentialType.TOOL, + check_existence=False, + ) + + encrypter, cache = create_provider_encrypter( + tenant_id=tenant_id, + config=[ + x.to_basic_provider_config() + for x in provider_controller.get_credentials_schema_by_type(builtin_provider.credential_type) + ], + cache=ToolProviderCredentialsCache( + tenant_id=tenant_id, provider=provider_id, credential_id=builtin_provider.id + ), + ) + + decrypted_credentials: Mapping[str, Any] = encrypter.decrypt(builtin_provider.credentials) + + if builtin_provider.expires_at != -1 and (builtin_provider.expires_at - 60) < int(time.time()): + # TODO: circular import + from core.plugin.impl.oauth import OAuthHandler + from services.tools.builtin_tools_manage_service import BuiltinToolManageService + + tool_provider = ToolProviderID(provider_id) + provider_name = tool_provider.provider_name + redirect_uri = f"{dify_config.CONSOLE_API_URL}/console/api/oauth/plugin/{provider_id}/tool/callback" + system_credentials = BuiltinToolManageService.get_oauth_client(tenant_id, provider_id) + + oauth_handler = OAuthHandler() + refreshed_credentials = oauth_handler.refresh_credentials( + tenant_id=tenant_id, + user_id=builtin_provider.user_id, + plugin_id=tool_provider.plugin_id, + provider=provider_name, + redirect_uri=redirect_uri, + system_credentials=system_credentials or {}, + credentials=decrypted_credentials, + ) + # update the credentials + builtin_provider.encrypted_credentials = json.dumps( + encrypter.encrypt(refreshed_credentials.credentials) + ) + builtin_provider.expires_at = refreshed_credentials.expires_at + db.session.commit() + decrypted_credentials = refreshed_credentials.credentials + cache.delete() - if not provider_controller.need_credentials: return builtin_tool.fork_tool_runtime( + runtime=ToolRuntime( + tenant_id=tenant_id, + user_id=user_id, + credentials=dict(decrypted_credentials), + credential_type=builtin_provider.credential_type, + runtime_parameters={}, + invoke_from=invoke_from, + tool_invoke_from=tool_invoke_from, + ) + ) + + case ToolProviderType.API: + api_provider, credentials = cls.get_api_provider_controller(tenant_id, provider_id) + encrypter, _ = create_tool_provider_encrypter( + tenant_id=tenant_id, + controller=api_provider, + ) + return api_provider.get_tool(tool_name).fork_tool_runtime( + runtime=ToolRuntime( + tenant_id=tenant_id, + user_id=user_id, + credentials=dict(encrypter.decrypt(credentials)), + invoke_from=invoke_from, + tool_invoke_from=tool_invoke_from, + ) + ) + case ToolProviderType.WORKFLOW: + workflow_provider_stmt = select(WorkflowToolProvider).where( + WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == provider_id + ) + with Session(db.engine, expire_on_commit=False) as session, session.begin(): + workflow_provider = session.scalar(workflow_provider_stmt) + + if workflow_provider is None: + raise ToolProviderNotFoundError(f"workflow provider {provider_id} not found") + + controller = ToolTransformService.workflow_provider_to_controller(db_provider=workflow_provider) + controller_tools: list[WorkflowTool] = controller.get_tools(tenant_id=workflow_provider.tenant_id) + if controller_tools is None or len(controller_tools) == 0: + raise ToolProviderNotFoundError(f"workflow provider {provider_id} not found") + + return controller.get_tools(tenant_id=workflow_provider.tenant_id)[0].fork_tool_runtime( runtime=ToolRuntime( tenant_id=tenant_id, user_id=user_id, @@ -223,177 +366,28 @@ class ToolManager: tool_invoke_from=tool_invoke_from, ) ) - builtin_provider = None - if isinstance(provider_controller, PluginToolProviderController): - provider_id_entity = ToolProviderID(provider_id) - # get specific credentials - if is_valid_uuid(credential_id): - try: - builtin_provider_stmt = select(BuiltinToolProvider).where( - BuiltinToolProvider.tenant_id == tenant_id, - BuiltinToolProvider.id == credential_id, - ) - builtin_provider = db.session.scalar(builtin_provider_stmt) - except Exception as e: - builtin_provider = None - logger.info("Error getting builtin provider %s:%s", credential_id, e, exc_info=True) - # if the provider has been deleted, raise an error - if builtin_provider is None: - raise ToolProviderNotFoundError(f"provider has been deleted: {credential_id}") - - # fallback to the default provider - if builtin_provider is None: - # use the default provider - with Session(db.engine) as session: - builtin_provider = session.scalar( - sa.select(BuiltinToolProvider) - .where( - BuiltinToolProvider.tenant_id == tenant_id, - (BuiltinToolProvider.provider == str(provider_id_entity)) - | (BuiltinToolProvider.provider == provider_id_entity.provider_name), - ) - .order_by(BuiltinToolProvider.is_default.desc(), BuiltinToolProvider.created_at.asc()) - ) - if builtin_provider is None: - raise ToolProviderNotFoundError(f"no default provider for {provider_id}") - else: - builtin_provider = db.session.scalar( - select(BuiltinToolProvider) - .where(BuiltinToolProvider.tenant_id == tenant_id, (BuiltinToolProvider.provider == provider_id)) - .order_by(BuiltinToolProvider.is_default.desc(), BuiltinToolProvider.created_at.asc()) - .limit(1) - ) - - if builtin_provider is None: - raise ToolProviderNotFoundError(f"builtin provider {provider_id} not found") - - # check if the credential is allowed to be used - from core.helper.credential_utils import check_credential_policy_compliance - - check_credential_policy_compliance( - credential_id=builtin_provider.id, - provider=provider_id, - credential_type=PluginCredentialType.TOOL, - check_existence=False, - ) - - encrypter, cache = create_provider_encrypter( - tenant_id=tenant_id, - config=[ - x.to_basic_provider_config() - for x in provider_controller.get_credentials_schema_by_type(builtin_provider.credential_type) - ], - cache=ToolProviderCredentialsCache( - tenant_id=tenant_id, provider=provider_id, credential_id=builtin_provider.id - ), - ) - - # decrypt the credentials - decrypted_credentials: Mapping[str, Any] = encrypter.decrypt(builtin_provider.credentials) - - # check if the credentials is expired - if builtin_provider.expires_at != -1 and (builtin_provider.expires_at - 60) < int(time.time()): - # TODO: circular import - from core.plugin.impl.oauth import OAuthHandler - from services.tools.builtin_tools_manage_service import BuiltinToolManageService - - # refresh the credentials - tool_provider = ToolProviderID(provider_id) - provider_name = tool_provider.provider_name - redirect_uri = f"{dify_config.CONSOLE_API_URL}/console/api/oauth/plugin/{provider_id}/tool/callback" - system_credentials = BuiltinToolManageService.get_oauth_client(tenant_id, provider_id) - - oauth_handler = OAuthHandler() - # refresh the credentials - refreshed_credentials = oauth_handler.refresh_credentials( - tenant_id=tenant_id, - user_id=builtin_provider.user_id, - plugin_id=tool_provider.plugin_id, - provider=provider_name, - redirect_uri=redirect_uri, - system_credentials=system_credentials or {}, - credentials=decrypted_credentials, - ) - # update the credentials - builtin_provider.encrypted_credentials = json.dumps( - encrypter.encrypt(refreshed_credentials.credentials) - ) - builtin_provider.expires_at = refreshed_credentials.expires_at - db.session.commit() - decrypted_credentials = refreshed_credentials.credentials - cache.delete() - - return builtin_tool.fork_tool_runtime( - runtime=ToolRuntime( - tenant_id=tenant_id, - user_id=user_id, - credentials=dict(decrypted_credentials), - credential_type=builtin_provider.credential_type, - runtime_parameters={}, - invoke_from=invoke_from, - tool_invoke_from=tool_invoke_from, - ) - ) - - elif provider_type == ToolProviderType.API: - api_provider, credentials = cls.get_api_provider_controller(tenant_id, provider_id) - encrypter, _ = create_tool_provider_encrypter( - tenant_id=tenant_id, - controller=api_provider, - ) - return api_provider.get_tool(tool_name).fork_tool_runtime( - runtime=ToolRuntime( - tenant_id=tenant_id, - user_id=user_id, - credentials=dict(encrypter.decrypt(credentials)), - invoke_from=invoke_from, - tool_invoke_from=tool_invoke_from, - ) - ) - elif provider_type == ToolProviderType.WORKFLOW: - workflow_provider_stmt = select(WorkflowToolProvider).where( - WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == provider_id - ) - with Session(db.engine, expire_on_commit=False) as session, session.begin(): - workflow_provider = session.scalar(workflow_provider_stmt) - - if workflow_provider is None: - raise ToolProviderNotFoundError(f"workflow provider {provider_id} not found") - - controller = ToolTransformService.workflow_provider_to_controller(db_provider=workflow_provider) - controller_tools: list[WorkflowTool] = controller.get_tools(tenant_id=workflow_provider.tenant_id) - if controller_tools is None or len(controller_tools) == 0: - raise ToolProviderNotFoundError(f"workflow provider {provider_id} not found") - - return controller.get_tools(tenant_id=workflow_provider.tenant_id)[0].fork_tool_runtime( - runtime=ToolRuntime( - tenant_id=tenant_id, - user_id=user_id, - credentials={}, - invoke_from=invoke_from, - tool_invoke_from=tool_invoke_from, - ) - ) - elif provider_type == ToolProviderType.APP: - raise NotImplementedError("app provider not implemented") - elif provider_type == ToolProviderType.PLUGIN: - plugin_tool = cls.get_plugin_provider(provider_id, tenant_id).get_tool(tool_name) - runtime = getattr(plugin_tool, "runtime", None) - if runtime is not None: - runtime.user_id = user_id - runtime.invoke_from = invoke_from - runtime.tool_invoke_from = tool_invoke_from - return plugin_tool - elif provider_type == ToolProviderType.MCP: - mcp_tool = cls.get_mcp_provider_controller(tenant_id, provider_id).get_tool(tool_name) - runtime = getattr(mcp_tool, "runtime", None) - if runtime is not None: - runtime.user_id = user_id - runtime.invoke_from = invoke_from - runtime.tool_invoke_from = tool_invoke_from - return mcp_tool - else: - raise ToolProviderNotFoundError(f"provider type {provider_type.value} not found") + case ToolProviderType.APP: + raise NotImplementedError("app provider not implemented") + case ToolProviderType.PLUGIN: + plugin_tool = cls.get_plugin_provider(provider_id, tenant_id).get_tool(tool_name) + runtime = getattr(plugin_tool, "runtime", None) + if runtime is not None: + runtime.user_id = user_id + runtime.invoke_from = invoke_from + runtime.tool_invoke_from = tool_invoke_from + return plugin_tool + case ToolProviderType.MCP: + mcp_tool = cls.get_mcp_provider_controller(tenant_id, provider_id).get_tool(tool_name) + runtime = getattr(mcp_tool, "runtime", None) + if runtime is not None: + runtime.user_id = user_id + runtime.invoke_from = invoke_from + runtime.tool_invoke_from = tool_invoke_from + return mcp_tool + case ToolProviderType.DATASET_RETRIEVAL: + raise ToolProviderNotFoundError(f"provider type {provider_type.value} not found") + case _: + raise ToolProviderNotFoundError(f"provider type {provider_type} not found") @classmethod def get_agent_tool_runtime( @@ -403,7 +397,7 @@ class ToolManager: agent_tool: AgentToolEntity, user_id: str | None = None, invoke_from: InvokeFrom = InvokeFrom.DEBUGGER, - variable_pool: Optional["VariablePool"] = None, + variable_pool: "VariablePool | None" = None, ) -> Tool: """ get the agent tool runtime @@ -447,7 +441,7 @@ class ToolManager: workflow_tool: WorkflowToolRuntimeSpec, user_id: str | None = None, invoke_from: InvokeFrom = InvokeFrom.DEBUGGER, - variable_pool: Optional["VariablePool"] = None, + variable_pool: "VariablePool | None" = None, ) -> Tool: """ get the workflow tool runtime @@ -639,7 +633,7 @@ class ToolManager: cls._builtin_providers_loaded = False @classmethod - def get_tool_label(cls, tool_name: str) -> Union[I18nObject, None]: + def get_tool_label(cls, tool_name: str) -> I18nObject | None: """ get the tool label @@ -687,7 +681,7 @@ class ToolManager: with Session(db.engine, autoflush=False) as session: ids = [row.id for row in session.execute(sa.text(sql), {"tenant_id": tenant_id}).all()] - return session.query(BuiltinToolProvider).where(BuiltinToolProvider.id.in_(ids)).all() + return list(session.scalars(select(BuiltinToolProvider).where(BuiltinToolProvider.id.in_(ids)))) @classmethod def list_providers_from_api( @@ -998,7 +992,7 @@ class ToolManager: return {"background": "#252525", "content": "\ud83d\ude01"} @classmethod - def generate_mcp_tool_icon_url(cls, tenant_id: str, provider_id: str) -> EmojiIconDict | dict[str, str] | str: + def generate_mcp_tool_icon_url(cls, tenant_id: str, provider_id: str) -> EmojiIconDict | str: try: with Session(db.engine) as session: mcp_service = MCPToolManageService(session=session) @@ -1006,7 +1000,7 @@ class ToolManager: mcp_provider = mcp_service.get_provider_entity( provider_id=provider_id, tenant_id=tenant_id, by_server_id=True ) - return mcp_provider.provider_icon + return cast(EmojiIconDict | str, mcp_provider.provider_icon) except ValueError: raise ToolProviderNotFoundError(f"mcp provider {provider_id} not found") except Exception: @@ -1018,7 +1012,7 @@ class ToolManager: tenant_id: str, provider_type: ToolProviderType, provider_id: str, - ) -> str | EmojiIconDict | dict[str, str]: + ) -> str | EmojiIconDict: """ get the tool icon @@ -1027,37 +1021,37 @@ class ToolManager: :param provider_id: the id of the provider :return: """ - provider_type = provider_type - provider_id = provider_id - if provider_type == ToolProviderType.BUILT_IN: - provider = ToolManager.get_builtin_provider(provider_id, tenant_id) - if isinstance(provider, PluginToolProviderController): + match provider_type: + case ToolProviderType.BUILT_IN: + provider = ToolManager.get_builtin_provider(provider_id, tenant_id) + if isinstance(provider, PluginToolProviderController): + try: + return cls.generate_plugin_tool_icon_url(tenant_id, provider.entity.identity.icon) + except Exception: + return {"background": "#252525", "content": "\ud83d\ude01"} + return cls.generate_builtin_tool_icon_url(provider_id) + case ToolProviderType.API: + return cls.generate_api_tool_icon_url(tenant_id, provider_id) + case ToolProviderType.WORKFLOW: + return cls.generate_workflow_tool_icon_url(tenant_id, provider_id) + case ToolProviderType.PLUGIN: + provider = ToolManager.get_plugin_provider(provider_id, tenant_id) try: return cls.generate_plugin_tool_icon_url(tenant_id, provider.entity.identity.icon) except Exception: return {"background": "#252525", "content": "\ud83d\ude01"} - return cls.generate_builtin_tool_icon_url(provider_id) - elif provider_type == ToolProviderType.API: - return cls.generate_api_tool_icon_url(tenant_id, provider_id) - elif provider_type == ToolProviderType.WORKFLOW: - return cls.generate_workflow_tool_icon_url(tenant_id, provider_id) - elif provider_type == ToolProviderType.PLUGIN: - provider = ToolManager.get_plugin_provider(provider_id, tenant_id) - try: - return cls.generate_plugin_tool_icon_url(tenant_id, provider.entity.identity.icon) - except Exception: - return {"background": "#252525", "content": "\ud83d\ude01"} - raise ValueError(f"plugin provider {provider_id} not found") - elif provider_type == ToolProviderType.MCP: - return cls.generate_mcp_tool_icon_url(tenant_id, provider_id) - else: - raise ValueError(f"provider type {provider_type} not found") + case ToolProviderType.MCP: + return cls.generate_mcp_tool_icon_url(tenant_id, provider_id) + case ToolProviderType.APP | ToolProviderType.DATASET_RETRIEVAL: + raise ValueError(f"provider type {provider_type} not found") + case _: + raise ValueError(f"provider type {provider_type} not found") @classmethod def _convert_tool_parameters_type( cls, parameters: list[ToolParameter], - variable_pool: Optional["VariablePool"], + variable_pool: "VariablePool | None", tool_configurations: Mapping[str, Any], typ: Literal["agent", "workflow", "tool"] = "workflow", ) -> dict[str, Any]: diff --git a/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py b/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py index c72bdf02ed..b6890b2611 100644 --- a/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py +++ b/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py @@ -1,21 +1,20 @@ import threading from flask import Flask, current_app -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field from sqlalchemy import select from core.callback_handler.index_tool_callback_handler import DatasetIndexToolCallbackHandler from core.model_manager import ModelManager -from core.rag.datasource.retrieval_service import RetrievalService +from core.rag.datasource.retrieval_service import DefaultRetrievalModelDict, RetrievalService from core.rag.entities import RetrievalSourceMetadata from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.models.document import Document as RagDocument from core.rag.rerank.rerank_model import RerankModelRunner from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.tools.utils.dataset_retriever.dataset_retriever_base_tool import DatasetRetrieverBaseTool -from core.tools.utils.dataset_retriever.dataset_retriever_tool import DefaultRetrievalModelDict from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import Dataset, Document, DocumentSegment default_retrieval_model: DefaultRetrievalModelDict = { diff --git a/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py b/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py index a346eb53c4..0d1dc7273b 100644 --- a/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py +++ b/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py @@ -1,11 +1,10 @@ -from typing import NotRequired, TypedDict, cast +from typing import Any, cast from pydantic import BaseModel, Field from sqlalchemy import select from core.app.app_config.entities import DatasetRetrieveConfigEntity, ModelConfig -from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict -from core.rag.datasource.retrieval_service import RetrievalService +from core.rag.datasource.retrieval_service import DefaultRetrievalModelDict, RetrievalService from core.rag.entities import DocumentContext, RetrievalSourceMetadata from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.models.document import Document as RetrievalDocument @@ -17,18 +16,6 @@ from models.dataset import Dataset from models.dataset import Document as DatasetDocument from services.external_knowledge_service import ExternalDatasetService - -class DefaultRetrievalModelDict(TypedDict): - search_method: RetrievalMethod - reranking_enable: bool - reranking_model: RerankingModelDict - reranking_mode: NotRequired[str] - weights: NotRequired[WeightsDict | None] - score_threshold: NotRequired[float] - top_k: int - score_threshold_enabled: bool - - default_retrieval_model: DefaultRetrievalModelDict = { "search_method": RetrievalMethod.SEMANTIC_SEARCH, "reranking_enable": False, @@ -52,7 +39,7 @@ class DatasetRetrieverTool(DatasetRetrieverBaseTool): dataset_id: str user_id: str | None = None retrieve_config: DatasetRetrieveConfigEntity - inputs: dict + inputs: dict[str, Any] @classmethod def from_dataset(cls, dataset: Dataset, **kwargs): diff --git a/api/core/tools/utils/dataset_retriever_tool.py b/api/core/tools/utils/dataset_retriever_tool.py index fca6e6f1c7..0bdc3df869 100644 --- a/api/core/tools/utils/dataset_retriever_tool.py +++ b/api/core/tools/utils/dataset_retriever_tool.py @@ -33,7 +33,7 @@ class DatasetRetrieverTool(Tool): invoke_from: InvokeFrom, hit_callback: DatasetIndexToolCallbackHandler, user_id: str, - inputs: dict, + inputs: dict[str, Any], ) -> list["DatasetRetrieverTool"]: """ get dataset tool diff --git a/api/core/tools/utils/message_transformer.py b/api/core/tools/utils/message_transformer.py index bb5b3ba76e..79d0c114d4 100644 --- a/api/core/tools/utils/message_transformer.py +++ b/api/core/tools/utils/message_transformer.py @@ -4,15 +4,16 @@ from collections.abc import Generator from datetime import date, datetime from decimal import Decimal from mimetypes import guess_extension +from typing import Any from uuid import UUID import numpy as np import pytz -from graphon.file import File, FileTransferMethod, FileType from core.tools.entities.tool_entities import ToolInvokeMessage from core.tools.tool_file_manager import ToolFileManager from core.workflow.file_reference import parse_file_reference +from graphon.file import File, FileTransferMethod, FileType from libs.login import current_user from models import Account @@ -50,7 +51,7 @@ def safe_json_value(v): return v -def safe_json_dict(d: dict): +def safe_json_dict(d: dict[str, Any]): if not isinstance(d, dict): raise TypeError("safe_json_dict() expects a dictionary (dict) as input") return {k: safe_json_value(v) for k, v in d.items()} @@ -118,7 +119,8 @@ class ToolFileMessageTransformer: if not isinstance(message.message, ToolInvokeMessage.BlobMessage): raise ValueError("unexpected message type") - assert isinstance(message.message.blob, bytes) + if not isinstance(message.message.blob, bytes): + raise TypeError(f"Expected blob to be bytes, got {type(message.message.blob).__name__}") tool_file_manager = ToolFileManager() tool_file = tool_file_manager.create_file_by_raw( user_id=user_id, @@ -195,11 +197,11 @@ class ToolFileMessageTransformer: @staticmethod def _with_tool_file_meta( - meta: dict | None, + meta: dict[str, Any] | None, *, tool_file_id: str | None = None, url: str | None = None, - ) -> dict: + ) -> dict[str, Any]: normalized_meta = meta.copy() if meta is not None else {} resolved_tool_file_id = tool_file_id or ToolFileMessageTransformer._extract_tool_file_id(url) if resolved_tool_file_id and "tool_file_id" not in normalized_meta: diff --git a/api/core/tools/utils/model_invocation_utils.py b/api/core/tools/utils/model_invocation_utils.py index 8d6f83dc07..9e1d41cb39 100644 --- a/api/core/tools/utils/model_invocation_utils.py +++ b/api/core/tools/utils/model_invocation_utils.py @@ -8,6 +8,9 @@ import json from decimal import Decimal from typing import cast +from core.model_manager import ModelManager +from core.tools.entities.tool_entities import ToolProviderType +from extensions.ext_database import db from graphon.model_runtime.entities.llm_entities import LLMResult from graphon.model_runtime.entities.message_entities import PromptMessage from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType @@ -20,10 +23,6 @@ from graphon.model_runtime.errors.invoke import ( ) from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from graphon.model_runtime.utils.encoders import jsonable_encoder - -from core.model_manager import ModelManager -from core.tools.entities.tool_entities import ToolProviderType -from extensions.ext_database import db from models.tools import ToolModelInvoke diff --git a/api/core/tools/utils/parser.py b/api/core/tools/utils/parser.py index f7484b93fb..434af55583 100644 --- a/api/core/tools/utils/parser.py +++ b/api/core/tools/utils/parser.py @@ -32,7 +32,7 @@ class OpenAPISpecDict(TypedDict): class ApiBasedToolSchemaParser: @staticmethod def parse_openapi_to_tool_bundle( - openapi: Mapping[str, Any], extra_info: dict | None = None, warning: dict | None = None + openapi: Mapping[str, Any], extra_info: dict[str, Any] | None = None, warning: dict[str, Any] | None = None ) -> list[ApiToolBundle]: warning = warning if warning is not None else {} extra_info = extra_info if extra_info is not None else {} @@ -236,7 +236,7 @@ class ApiBasedToolSchemaParser: return value @staticmethod - def _get_tool_parameter_type(parameter: dict) -> ToolParameter.ToolParameterType | None: + def _get_tool_parameter_type(parameter: dict[str, Any]) -> ToolParameter.ToolParameterType | None: parameter = parameter or {} typ: str | None = None if parameter.get("format") == "binary": @@ -265,7 +265,7 @@ class ApiBasedToolSchemaParser: @staticmethod def parse_openapi_yaml_to_tool_bundle( - yaml: str, extra_info: dict | None = None, warning: dict | None = None + yaml: str, extra_info: dict[str, Any] | None = None, warning: dict[str, Any] | None = None ) -> list[ApiToolBundle]: """ parse openapi yaml to tool bundle @@ -278,14 +278,14 @@ class ApiBasedToolSchemaParser: warning = warning if warning is not None else {} extra_info = extra_info if extra_info is not None else {} - openapi: dict = safe_load(yaml) + openapi: dict[str, Any] = safe_load(yaml) if openapi is None: raise ToolApiSchemaError("Invalid openapi yaml.") return ApiBasedToolSchemaParser.parse_openapi_to_tool_bundle(openapi, extra_info=extra_info, warning=warning) @staticmethod def parse_swagger_to_openapi( - swagger: dict, extra_info: dict | None = None, warning: dict | None = None + swagger: dict[str, Any], extra_info: dict[str, Any] | None = None, warning: dict[str, Any] | None = None ) -> OpenAPISpecDict: warning = warning or {} """ @@ -351,7 +351,7 @@ class ApiBasedToolSchemaParser: @staticmethod def parse_openai_plugin_json_to_tool_bundle( - json: str, extra_info: dict | None = None, warning: dict | None = None + json: str, extra_info: dict[str, Any] | None = None, warning: dict[str, Any] | None = None ) -> list[ApiToolBundle]: """ parse openapi plugin yaml to tool bundle @@ -392,7 +392,7 @@ class ApiBasedToolSchemaParser: @staticmethod def auto_parse_to_tool_bundle( - content: str, extra_info: dict | None = None, warning: dict | None = None + content: str, extra_info: dict[str, Any] | None = None, warning: dict[str, Any] | None = None ) -> tuple[list[ApiToolBundle], ApiProviderSchemaType]: """ auto parse to tool bundle diff --git a/api/core/tools/utils/text_processing_utils.py b/api/core/tools/utils/text_processing_utils.py index 4bfaa5e49b..1dd0605f28 100644 --- a/api/core/tools/utils/text_processing_utils.py +++ b/api/core/tools/utils/text_processing_utils.py @@ -19,5 +19,18 @@ def remove_leading_symbols(text: str) -> str: # Match Unicode ranges for punctuation and symbols # FIXME this pattern is confused quick fix for #11868 maybe refactor it later - pattern = r'^[\[\]\u2000-\u2025\u2027-\u206F\u2E00-\u2E7F\u3000-\u300F\u3011-\u303F"#$%&\'()*+,./:;<=>?@^_`~]+' + pattern = re.compile( + r""" + ^ + (?: + [\u2000-\u2025] # General Punctuation: spaces, quotes, dashes + | [\u2027-\u206F] # General Punctuation: ellipsis, underscores, etc. + | [\u2E00-\u2E7F] # Supplemental Punctuation: medieval, ancient marks + | [\u3000-\u300F] # CJK Punctuation: 、。〃「」『》』 (excludes 【】) + | [\u3012-\u303F] # CJK Punctuation: 〖〗〔〕〘〙〚〛〜 etc. + | ["#$%&'()*+,./:;<=>?@^_`~] # ASCII punctuation (excludes []【】) + )+ + """, + re.VERBOSE, + ) return re.sub(pattern, "", text) diff --git a/api/core/tools/utils/workflow_configuration_sync.py b/api/core/tools/utils/workflow_configuration_sync.py index c4b7d57449..45718cadb6 100644 --- a/api/core/tools/utils/workflow_configuration_sync.py +++ b/api/core/tools/utils/workflow_configuration_sync.py @@ -1,13 +1,12 @@ from collections.abc import Mapping, Sequence from typing import Any +from core.tools.entities.tool_entities import WorkflowToolParameterConfiguration +from core.tools.errors import WorkflowToolHumanInputNotSupportedError from graphon.enums import BuiltinNodeTypes from graphon.nodes.base.entities import OutputVariableEntity from graphon.variables.input_entities import VariableEntity -from core.tools.entities.tool_entities import WorkflowToolParameterConfiguration -from core.tools.errors import WorkflowToolHumanInputNotSupportedError - class WorkflowToolConfigurationUtils: @classmethod @@ -17,10 +16,8 @@ class WorkflowToolConfigurationUtils: """ nodes = graph.get("nodes", []) start_node = next(filter(lambda x: x.get("data", {}).get("type") == "start", nodes), None) - if not start_node: return [] - return [VariableEntity.model_validate(variable) for variable in start_node.get("data", {}).get("variables", [])] @classmethod diff --git a/api/core/tools/workflow_as_tool/provider.py b/api/core/tools/workflow_as_tool/provider.py index f48b24be30..5905fd919e 100644 --- a/api/core/tools/workflow_as_tool/provider.py +++ b/api/core/tools/workflow_as_tool/provider.py @@ -2,8 +2,8 @@ from __future__ import annotations from collections.abc import Mapping -from graphon.variables.input_entities import VariableEntity, VariableEntityType from pydantic import Field +from sqlalchemy import select from sqlalchemy.orm import Session from core.app.apps.workflow.app_config_manager import WorkflowAppConfigManager @@ -24,6 +24,7 @@ from core.tools.entities.tool_entities import ( from core.tools.utils.workflow_configuration_sync import WorkflowToolConfigurationUtils from core.tools.workflow_as_tool.tool import WorkflowTool from extensions.ext_database import db +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.account import Account from models.model import App, AppMode from models.tools import WorkflowToolProvider @@ -96,10 +97,10 @@ class WorkflowToolProviderController(ToolProviderController): :param app: the app :return: the tool """ - workflow: Workflow | None = ( - session.query(Workflow) + workflow: Workflow | None = session.scalar( + select(Workflow) .where(Workflow.app_id == db_provider.app_id, Workflow.version == db_provider.version) - .first() + .limit(1) ) if not workflow: @@ -217,13 +218,13 @@ class WorkflowToolProviderController(ToolProviderController): return self.tools with Session(db.engine, expire_on_commit=False) as session, session.begin(): - db_provider: WorkflowToolProvider | None = ( - session.query(WorkflowToolProvider) + db_provider: WorkflowToolProvider | None = session.scalar( + select(WorkflowToolProvider) .where( WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == self.provider_id, ) - .first() + .limit(1) ) if not db_provider: diff --git a/api/core/tools/workflow_as_tool/tool.py b/api/core/tools/workflow_as_tool/tool.py index a3fb4eda92..52ab605963 100644 --- a/api/core/tools/workflow_as_tool/tool.py +++ b/api/core/tools/workflow_as_tool/tool.py @@ -5,8 +5,6 @@ import logging from collections.abc import Generator, Mapping, Sequence from typing import Any, cast -from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod -from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata from sqlalchemy import select from core.app.file_access import DatabaseFileAccessController @@ -22,6 +20,8 @@ from core.tools.entities.tool_entities import ( from core.tools.errors import ToolInvokeError from core.workflow.file_reference import resolve_file_record_id from factories.file_factory import build_from_mapping +from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod +from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata from models import Account, Tenant from models.model import App, EndUser from models.utils.file_input_compat import build_file_from_stored_mapping @@ -277,7 +277,7 @@ class WorkflowTool(Tool): session.expunge(app) return app - def _transform_args(self, tool_parameters: dict) -> tuple[dict, list[dict]]: + def _transform_args(self, tool_parameters: dict[str, Any]) -> tuple[dict[str, Any], list[dict[str, str | None]]]: """ transform the tool parameters @@ -305,14 +305,15 @@ class WorkflowTool(Tool): "transfer_method": file.transfer_method.value, "type": file.type.value, } - if file.transfer_method == FileTransferMethod.TOOL_FILE: - file_dict["tool_file_id"] = resolve_file_record_id(file.reference) - elif file.transfer_method == FileTransferMethod.LOCAL_FILE: - file_dict["upload_file_id"] = resolve_file_record_id(file.reference) - elif file.transfer_method == FileTransferMethod.DATASOURCE_FILE: - file_dict["datasource_file_id"] = resolve_file_record_id(file.reference) - elif file.transfer_method == FileTransferMethod.REMOTE_URL: - file_dict["url"] = file.generate_url() + match file.transfer_method: + case FileTransferMethod.TOOL_FILE: + file_dict["tool_file_id"] = resolve_file_record_id(file.reference) + case FileTransferMethod.LOCAL_FILE: + file_dict["upload_file_id"] = resolve_file_record_id(file.reference) + case FileTransferMethod.DATASOURCE_FILE: + file_dict["datasource_file_id"] = resolve_file_record_id(file.reference) + case FileTransferMethod.REMOTE_URL: + file_dict["url"] = file.generate_url() files.append(file_dict) except Exception: @@ -322,7 +323,7 @@ class WorkflowTool(Tool): return parameters_result, files - def _extract_files(self, outputs: dict) -> tuple[dict, list[File]]: + def _extract_files(self, outputs: dict[str, Any]) -> tuple[dict[str, Any], list[File]]: """ extract files from the result @@ -354,11 +355,14 @@ class WorkflowTool(Tool): return result, files - def _update_file_mapping(self, file_dict: dict): + def _update_file_mapping(self, file_dict: dict[str, Any]) -> dict[str, Any]: file_id = resolve_file_record_id(file_dict.get("reference") or file_dict.get("related_id")) transfer_method = FileTransferMethod.value_of(file_dict.get("transfer_method")) - if transfer_method == FileTransferMethod.TOOL_FILE: - file_dict["tool_file_id"] = file_id - elif transfer_method == FileTransferMethod.LOCAL_FILE: - file_dict["upload_file_id"] = file_id + match transfer_method: + case FileTransferMethod.TOOL_FILE: + file_dict["tool_file_id"] = file_id + case FileTransferMethod.LOCAL_FILE: + file_dict["upload_file_id"] = file_id + case FileTransferMethod.REMOTE_URL | FileTransferMethod.DATASOURCE_FILE: + pass return file_dict diff --git a/api/core/trigger/debug/event_selectors.py b/api/core/trigger/debug/event_selectors.py index 61d1cd8540..24c1271488 100644 --- a/api/core/trigger/debug/event_selectors.py +++ b/api/core/trigger/debug/event_selectors.py @@ -8,7 +8,6 @@ from collections.abc import Mapping from datetime import datetime from typing import Any -from graphon.entities.graph_config import NodeConfigDict from pydantic import BaseModel from core.plugin.entities.request import TriggerInvokeEventResponse @@ -28,6 +27,7 @@ from core.trigger.debug.events import ( from core.workflow.nodes.trigger_plugin.entities import TriggerEventNodeData from core.workflow.nodes.trigger_schedule.entities import ScheduleConfig from extensions.ext_redis import redis_client +from graphon.entities.graph_config import NodeConfigDict from libs.datetime_utils import ensure_naive_utc, naive_utc_now from libs.schedule_utils import calculate_next_run_at from models.model import App diff --git a/api/core/workflow/human_input_compat.py b/api/core/workflow/human_input_compat.py index c95516a240..75a0a0c202 100644 --- a/api/core/workflow/human_input_compat.py +++ b/api/core/workflow/human_input_compat.py @@ -14,12 +14,13 @@ from typing import Annotated, Any, ClassVar, Literal import bleach import markdown +from markdown.extensions.tables import TableExtension +from pydantic import AliasChoices, BaseModel, ConfigDict, Field, TypeAdapter + from graphon.enums import BuiltinNodeTypes from graphon.nodes.base.variable_template_parser import VariableTemplateParser from graphon.runtime import VariablePool from graphon.variables.consts import SELECTORS_LENGTH -from markdown.extensions.tables import TableExtension -from pydantic import AliasChoices, BaseModel, ConfigDict, Field, TypeAdapter class DeliveryMethodType(enum.StrEnum): diff --git a/api/core/workflow/node_factory.py b/api/core/workflow/node_factory.py index f6c3aee4c1..351da3444f 100644 --- a/api/core/workflow/node_factory.py +++ b/api/core/workflow/node_factory.py @@ -1,25 +1,10 @@ import importlib import pkgutil from collections.abc import Callable, Iterator, Mapping, MutableMapping +from dataclasses import dataclass from functools import lru_cache from typing import TYPE_CHECKING, Any, cast, final, override -from graphon.entities.base_node_data import BaseNodeData -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.file.file_manager import file_manager -from graphon.graph.graph import NodeFactory -from graphon.model_runtime.memory import PromptMessageMemory -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel -from graphon.nodes.base.node import Node -from graphon.nodes.code.code_node import WorkflowCodeExecutor -from graphon.nodes.code.entities import CodeLanguage -from graphon.nodes.code.limits import CodeNodeLimits -from graphon.nodes.document_extractor import UnstructuredApiConfig -from graphon.nodes.http_request import build_http_request_config -from graphon.nodes.llm.entities import LLMNodeData -from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData -from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData from sqlalchemy import select from sqlalchemy.orm import Session @@ -55,6 +40,22 @@ from core.workflow.nodes.agent.runtime_support import AgentRuntimeSupport from core.workflow.system_variables import SystemVariableKey, get_system_text, system_variable_selector from core.workflow.template_rendering import CodeExecutorJinja2TemplateRenderer from extensions.ext_database import db +from graphon.entities.base_node_data import BaseNodeData +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.file.file_manager import file_manager +from graphon.graph.graph import NodeFactory +from graphon.model_runtime.memory import PromptMessageMemory +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel +from graphon.nodes.base.node import Node +from graphon.nodes.code.code_node import WorkflowCodeExecutor +from graphon.nodes.code.entities import CodeLanguage +from graphon.nodes.code.limits import CodeNodeLimits +from graphon.nodes.document_extractor import UnstructuredApiConfig +from graphon.nodes.http_request import build_http_request_config +from graphon.nodes.llm.entities import LLMNodeData +from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData +from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData from models.model import Conversation if TYPE_CHECKING: @@ -67,6 +68,31 @@ _START_NODE_TYPES: frozenset[NodeType] = frozenset( ) +@dataclass(frozen=True, slots=True) +class DifyGraphInitContext: + """Explicit graph-init values owned by the workflow layer. + + Dify is gradually removing direct `GraphInitParams` construction from its + production call sites. Keep the translation here until `graphon` exposes an + equivalent explicit API. + """ + + workflow_id: str + graph_config: Mapping[str, Any] + run_context: Mapping[str, Any] + call_depth: int + + def to_graph_init_params(self) -> "GraphInitParams": + from graphon.entities import GraphInitParams + + return GraphInitParams( + workflow_id=self.workflow_id, + graph_config=self.graph_config, + run_context=self.run_context, + call_depth=self.call_depth, + ) + + def _import_node_package(package_name: str, *, excluded_modules: frozenset[str] = frozenset()) -> None: package = importlib.import_module(package_name) for _, module_name, _ in pkgutil.walk_packages(package.__path__, package.__name__ + "."): @@ -237,6 +263,19 @@ class DifyNodeFactory(NodeFactory): Default implementation of NodeFactory that resolves node classes from the live registry. """ + @classmethod + def from_graph_init_context( + cls, + *, + graph_init_context: DifyGraphInitContext, + graph_runtime_state: "GraphRuntimeState", + ) -> "DifyNodeFactory": + """Bridge Dify's explicit init context into the current `graphon` API.""" + return cls( + graph_init_params=graph_init_context.to_graph_init_params(), + graph_runtime_state=graph_runtime_state, + ) + def __init__( self, graph_init_params: "GraphInitParams", diff --git a/api/core/workflow/node_runtime.py b/api/core/workflow/node_runtime.py index 19cb3a7b0a..2e632e56f0 100644 --- a/api/core/workflow/node_runtime.py +++ b/api/core/workflow/node_runtime.py @@ -4,6 +4,32 @@ from collections.abc import Callable, Generator, Mapping, Sequence from dataclasses import dataclass from typing import TYPE_CHECKING, Any, cast +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.app.file_access import DatabaseFileAccessController +from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler +from core.llm_generator.output_parser.errors import OutputParserError +from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output +from core.model_manager import ModelInstance +from core.plugin.impl.exc import PluginDaemonClientSideError, PluginInvokeError +from core.plugin.impl.plugin import PluginInstaller +from core.prompt.utils.prompt_message_util import PromptMessageUtil +from core.repositories.human_input_repository import ( + FormCreateParams, + HumanInputFormRepository, + HumanInputFormRepositoryImpl, +) +from core.tools.entities.tool_entities import ToolProviderType as CoreToolProviderType +from core.tools.errors import ToolInvokeError +from core.tools.tool_engine import ToolEngine +from core.tools.tool_file_manager import ToolFileManager +from core.tools.tool_manager import ToolManager +from core.tools.utils.message_transformer import ToolFileMessageTransformer +from core.workflow.file_reference import build_file_reference +from extensions.ext_database import db +from factories import file_factory from graphon.file import FileTransferMethod, FileType from graphon.model_runtime.entities import LLMMode from graphon.model_runtime.entities.llm_entities import ( @@ -34,32 +60,6 @@ from graphon.nodes.tool_runtime_entities import ( ToolRuntimeMessage, ToolRuntimeParameter, ) -from sqlalchemy import select -from sqlalchemy.orm import Session - -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.app.file_access import DatabaseFileAccessController -from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler -from core.llm_generator.output_parser.errors import OutputParserError -from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output -from core.model_manager import ModelInstance -from core.plugin.impl.exc import PluginDaemonClientSideError, PluginInvokeError -from core.plugin.impl.plugin import PluginInstaller -from core.prompt.utils.prompt_message_util import PromptMessageUtil -from core.repositories.human_input_repository import ( - FormCreateParams, - HumanInputFormRepository, - HumanInputFormRepositoryImpl, -) -from core.tools.entities.tool_entities import ToolProviderType as CoreToolProviderType -from core.tools.errors import ToolInvokeError -from core.tools.tool_engine import ToolEngine -from core.tools.tool_file_manager import ToolFileManager -from core.tools.tool_manager import ToolManager -from core.tools.utils.message_transformer import ToolFileMessageTransformer -from core.workflow.file_reference import build_file_reference -from extensions.ext_database import db -from factories import file_factory from models.dataset import SegmentAttachmentBinding from models.model import UploadFile from services.tools.builtin_tools_manage_service import BuiltinToolManageService @@ -76,13 +76,12 @@ from .human_input_compat import ( from .system_variables import SystemVariableKey, get_system_text if TYPE_CHECKING: + from core.tools.__base.tool import Tool + from core.tools.entities.tool_entities import ToolInvokeMessage as CoreToolInvokeMessage from graphon.file import File from graphon.nodes.llm.file_saver import LLMFileSaver from graphon.nodes.tool.entities import ToolNodeData - from core.tools.__base.tool import Tool - from core.tools.entities.tool_entities import ToolInvokeMessage as CoreToolInvokeMessage - _file_access_controller = DatabaseFileAccessController() diff --git a/api/core/workflow/nodes/agent/agent_node.py b/api/core/workflow/nodes/agent/agent_node.py index bfd5536e4a..7b000101b0 100644 --- a/api/core/workflow/nodes/agent/agent_node.py +++ b/api/core/workflow/nodes/agent/agent_node.py @@ -3,15 +3,14 @@ from __future__ import annotations from collections.abc import Generator, Mapping, Sequence from typing import TYPE_CHECKING, Any +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.workflow.system_variables import SystemVariableKey, get_system_text from graphon.entities.graph_config import NodeConfigDict from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.node_events import NodeEventBase, NodeRunResult, StreamCompletedEvent from graphon.nodes.base.node import Node from graphon.nodes.base.variable_template_parser import VariableTemplateParser -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.workflow.system_variables import SystemVariableKey, get_system_text - from .entities import AgentNodeData from .exceptions import ( AgentInvocationError, diff --git a/api/core/workflow/nodes/agent/entities.py b/api/core/workflow/nodes/agent/entities.py index c52aad150b..51452c29a3 100644 --- a/api/core/workflow/nodes/agent/entities.py +++ b/api/core/workflow/nodes/agent/entities.py @@ -1,12 +1,12 @@ from enum import IntEnum, StrEnum, auto from typing import Any, Literal, Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import BuiltinNodeTypes, NodeType from pydantic import BaseModel from core.prompt.entities.advanced_prompt_entities import MemoryConfig from core.tools.entities.tool_entities import ToolSelector +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import BuiltinNodeTypes, NodeType class AgentNodeData(BaseNodeData): diff --git a/api/core/workflow/nodes/agent/message_transformer.py b/api/core/workflow/nodes/agent/message_transformer.py index db74590ed7..f44681377d 100644 --- a/api/core/workflow/nodes/agent/message_transformer.py +++ b/api/core/workflow/nodes/agent/message_transformer.py @@ -3,6 +3,14 @@ from __future__ import annotations from collections.abc import Generator, Mapping from typing import Any, cast +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.app.file_access import DatabaseFileAccessController +from core.tools.entities.tool_entities import ToolInvokeMessage +from core.tools.utils.message_transformer import ToolFileMessageTransformer +from extensions.ext_database import db +from factories import file_factory from graphon.enums import BuiltinNodeTypes, NodeType, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod, get_file_type_by_mime_type from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata @@ -15,14 +23,6 @@ from graphon.node_events import ( StreamCompletedEvent, ) from graphon.variables.segments import ArrayFileSegment -from sqlalchemy import select -from sqlalchemy.orm import Session - -from core.app.file_access import DatabaseFileAccessController -from core.tools.entities.tool_entities import ToolInvokeMessage -from core.tools.utils.message_transformer import ToolFileMessageTransformer -from extensions.ext_database import db -from factories import file_factory from models import ToolFile from services.tools.builtin_tools_manage_service import BuiltinToolManageService diff --git a/api/core/workflow/nodes/agent/runtime_support.py b/api/core/workflow/nodes/agent/runtime_support.py index be50edbc4d..a872774c98 100644 --- a/api/core/workflow/nodes/agent/runtime_support.py +++ b/api/core/workflow/nodes/agent/runtime_support.py @@ -4,8 +4,6 @@ import json from collections.abc import Sequence from typing import Any, cast -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType -from graphon.runtime import VariablePool from packaging.version import Version from pydantic import ValidationError from sqlalchemy import select @@ -21,6 +19,8 @@ from core.tools.entities.tool_entities import ToolIdentity, ToolParameter, ToolP from core.tools.tool_manager import ToolManager from core.workflow.system_variables import SystemVariableKey, get_system_text from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType +from graphon.runtime import VariablePool from models.model import Conversation from .entities import AgentNodeData, AgentOldVersionModelFeatures, ParamsAutoGenerated diff --git a/api/core/workflow/nodes/datasource/datasource_node.py b/api/core/workflow/nodes/datasource/datasource_node.py index d9247b2593..e4f6b3b470 100644 --- a/api/core/workflow/nodes/datasource/datasource_node.py +++ b/api/core/workflow/nodes/datasource/datasource_node.py @@ -1,6 +1,12 @@ from collections.abc import Generator, Mapping, Sequence from typing import TYPE_CHECKING, Any +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.datasource.datasource_manager import DatasourceManager +from core.datasource.entities.datasource_entities import DatasourceProviderType +from core.plugin.impl.exc import PluginDaemonClientSideError +from core.workflow.file_reference import resolve_file_record_id +from core.workflow.system_variables import SystemVariableKey, get_system_segment from graphon.entities.graph_config import NodeConfigDict from graphon.enums import ( BuiltinNodeTypes, @@ -12,13 +18,6 @@ from graphon.node_events import NodeRunResult, StreamCompletedEvent from graphon.nodes.base.node import Node from graphon.nodes.base.variable_template_parser import VariableTemplateParser -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.datasource.datasource_manager import DatasourceManager -from core.datasource.entities.datasource_entities import DatasourceProviderType -from core.plugin.impl.exc import PluginDaemonClientSideError -from core.workflow.file_reference import resolve_file_record_id -from core.workflow.system_variables import SystemVariableKey, get_system_segment - from .entities import DatasourceNodeData, DatasourceParameter, OnlineDriveDownloadFileParam from .exc import DatasourceNodeError diff --git a/api/core/workflow/nodes/datasource/entities.py b/api/core/workflow/nodes/datasource/entities.py index cad32f8d5b..28966f2392 100644 --- a/api/core/workflow/nodes/datasource/entities.py +++ b/api/core/workflow/nodes/datasource/entities.py @@ -1,9 +1,10 @@ from typing import Any, Literal, Union +from pydantic import BaseModel, field_validator +from pydantic_core.core_schema import ValidationInfo + from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType -from pydantic import BaseModel, field_validator -from pydantic_core.core_schema import ValidationInfo class DatasourceEntity(BaseModel): diff --git a/api/core/workflow/nodes/knowledge_index/entities.py b/api/core/workflow/nodes/knowledge_index/entities.py index 6ff162973c..260881e49c 100644 --- a/api/core/workflow/nodes/knowledge_index/entities.py +++ b/api/core/workflow/nodes/knowledge_index/entities.py @@ -1,22 +1,13 @@ from typing import Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType from pydantic import BaseModel -from core.rag.entities import WeightedScoreConfig +from core.rag.entities import RerankingModelConfig, WeightedScoreConfig from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE - - -class RerankingModelConfig(BaseModel): - """ - Reranking Model Config. - """ - - reranking_provider_name: str - reranking_model_name: str +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType class RetrievalSetting(BaseModel): diff --git a/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py b/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py index bb72fe3881..d5cab05dbe 100644 --- a/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py +++ b/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py @@ -2,17 +2,16 @@ import logging from collections.abc import Mapping from typing import TYPE_CHECKING, Any -from graphon.entities.graph_config import NodeConfigDict -from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult -from graphon.nodes.base.node import Node -from graphon.nodes.base.template import Template - from core.rag.index_processor.index_processor import IndexProcessor from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict from core.rag.summary_index.summary_index import SummaryIndex from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE from core.workflow.system_variables import SystemVariableKey, get_system_segment, get_system_text +from graphon.entities.graph_config import NodeConfigDict +from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult +from graphon.nodes.base.node import Node +from graphon.nodes.base.template import Template from .entities import KnowledgeIndexNodeData from .exc import ( diff --git a/api/core/workflow/nodes/knowledge_index/protocols.py b/api/core/workflow/nodes/knowledge_index/protocols.py index 6668f0c98e..d04e79c2a8 100644 --- a/api/core/workflow/nodes/knowledge_index/protocols.py +++ b/api/core/workflow/nodes/knowledge_index/protocols.py @@ -43,15 +43,20 @@ class IndexProcessorProtocol(Protocol): original_document_id: str, chunks: Mapping[str, Any], batch: Any, - summary_index_setting: dict | None = None, + summary_index_setting: dict[str, Any] | None = None, ) -> IndexingResultDict: ... def get_preview_output( - self, chunks: Any, dataset_id: str, document_id: str, chunk_structure: str, summary_index_setting: dict | None + self, + chunks: Any, + dataset_id: str, + document_id: str, + chunk_structure: str, + summary_index_setting: dict[str, Any] | None, ) -> Preview: ... class SummaryIndexServiceProtocol(Protocol): def generate_and_vectorize_summary( - self, dataset_id: str, document_id: str, is_preview: bool, summary_index_setting: dict | None = None + self, dataset_id: str, document_id: str, is_preview: bool, summary_index_setting: dict[str, Any] | None = None ) -> None: ... diff --git a/api/core/workflow/nodes/knowledge_retrieval/entities.py b/api/core/workflow/nodes/knowledge_retrieval/entities.py index f4bc3fb9d3..3825f526a2 100644 --- a/api/core/workflow/nodes/knowledge_retrieval/entities.py +++ b/api/core/workflow/nodes/knowledge_retrieval/entities.py @@ -1,24 +1,15 @@ from typing import Literal +from pydantic import BaseModel, Field + +from core.rag.entities import Condition, MetadataFilteringCondition, RerankingModelConfig, WeightedScoreConfig from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.llm.entities import ModelConfig, VisionConfig -from pydantic import BaseModel, Field - -from core.rag.entities import Condition, MetadataFilteringCondition, WeightedScoreConfig __all__ = ["Condition"] -class RerankingModelConfig(BaseModel): - """ - Reranking Model Config. - """ - - provider: str - model: str - - class MultipleRetrievalConfig(BaseModel): """ Multiple Retrieval Config. diff --git a/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py b/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py index 13624b27b3..47ad14b499 100644 --- a/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py +++ b/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py @@ -8,6 +8,11 @@ import logging from collections.abc import Mapping, Sequence from typing import TYPE_CHECKING, Any, Literal +from core.app.app_config.entities import DatasetRetrieveConfigEntity +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict +from core.rag.retrieval.dataset_retrieval import DatasetRetrieval +from core.workflow.file_reference import parse_file_reference from graphon.entities import GraphInitParams from graphon.entities.graph_config import NodeConfigDict from graphon.enums import ( @@ -27,12 +32,6 @@ from graphon.variables import ( ) from graphon.variables.segments import ArrayObjectSegment -from core.app.app_config.entities import DatasetRetrieveConfigEntity -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict -from core.rag.retrieval.dataset_retrieval import DatasetRetrieval -from core.workflow.file_reference import parse_file_reference - from .entities import ( Condition, KnowledgeRetrievalNodeData, diff --git a/api/core/workflow/nodes/knowledge_retrieval/retrieval.py b/api/core/workflow/nodes/knowledge_retrieval/retrieval.py index 39e2008a2c..ea45dcf5c2 100644 --- a/api/core/workflow/nodes/knowledge_retrieval/retrieval.py +++ b/api/core/workflow/nodes/knowledge_retrieval/retrieval.py @@ -1,10 +1,10 @@ from typing import Any, Literal, Protocol -from graphon.model_runtime.entities import LLMUsage -from graphon.nodes.llm.entities import ModelConfig from pydantic import BaseModel, Field from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict +from graphon.model_runtime.entities import LLMUsage +from graphon.nodes.llm.entities import ModelConfig from .entities import MetadataFilteringCondition diff --git a/api/core/workflow/nodes/trigger_plugin/entities.py b/api/core/workflow/nodes/trigger_plugin/entities.py index bf5be2379a..23ed2cd408 100644 --- a/api/core/workflow/nodes/trigger_plugin/entities.py +++ b/api/core/workflow/nodes/trigger_plugin/entities.py @@ -1,12 +1,12 @@ from collections.abc import Mapping from typing import Any, Literal, Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType from pydantic import BaseModel, Field, ValidationInfo, field_validator from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE from core.trigger.entities.entities import EventParameter +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType from .exc import TriggerEventParameterError diff --git a/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py b/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py index e50de11bb9..c848a86255 100644 --- a/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py +++ b/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py @@ -1,13 +1,12 @@ from collections.abc import Mapping from typing import Any +from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE +from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID from graphon.enums import NodeExecutionType, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from graphon.node_events import NodeRunResult from graphon.nodes.base.node import Node -from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE -from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID - from .entities import TriggerEventNodeData diff --git a/api/core/workflow/nodes/trigger_schedule/entities.py b/api/core/workflow/nodes/trigger_schedule/entities.py index f14ca893c9..683c8d420f 100644 --- a/api/core/workflow/nodes/trigger_schedule/entities.py +++ b/api/core/workflow/nodes/trigger_schedule/entities.py @@ -1,10 +1,10 @@ -from typing import Literal, Union +from typing import Any, Literal, Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType from pydantic import BaseModel, Field from core.trigger.constants import TRIGGER_SCHEDULE_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType class TriggerScheduleNodeData(BaseNodeData): @@ -16,7 +16,7 @@ class TriggerScheduleNodeData(BaseNodeData): mode: str = Field(default="visual", description="Schedule mode: visual or cron") frequency: str | None = Field(default=None, description="Frequency for visual mode: hourly, daily, weekly, monthly") cron_expression: str | None = Field(default=None, description="Cron expression for cron mode") - visual_config: dict | None = Field(default=None, description="Visual configuration details") + visual_config: dict[str, Any] | None = Field(default=None, description="Visual configuration details") timezone: str = Field(default="UTC", description="Timezone for schedule execution") diff --git a/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py b/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py index a9753ab387..b46cc76a6e 100644 --- a/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py +++ b/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py @@ -1,11 +1,10 @@ from collections.abc import Mapping -from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult -from graphon.nodes.base.node import Node - from core.trigger.constants import TRIGGER_SCHEDULE_NODE_TYPE from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult +from graphon.nodes.base.node import Node from .entities import TriggerScheduleNodeData diff --git a/api/core/workflow/nodes/trigger_webhook/entities.py b/api/core/workflow/nodes/trigger_webhook/entities.py index a30f877e4b..b261039448 100644 --- a/api/core/workflow/nodes/trigger_webhook/entities.py +++ b/api/core/workflow/nodes/trigger_webhook/entities.py @@ -1,12 +1,12 @@ from collections.abc import Sequence from enum import StrEnum -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType -from graphon.variables.types import SegmentType from pydantic import BaseModel, Field, field_validator from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType +from graphon.variables.types import SegmentType _WEBHOOK_HEADER_ALLOWED_TYPES: frozenset[SegmentType] = frozenset((SegmentType.STRING,)) diff --git a/api/core/workflow/nodes/trigger_webhook/node.py b/api/core/workflow/nodes/trigger_webhook/node.py index ebaac93934..13c4f05bfd 100644 --- a/api/core/workflow/nodes/trigger_webhook/node.py +++ b/api/core/workflow/nodes/trigger_webhook/node.py @@ -2,6 +2,10 @@ import logging from collections.abc import Mapping from typing import Any +from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE +from core.workflow.file_reference import resolve_file_record_id +from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from factories.variable_factory import build_segment_with_type from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus from graphon.file import FileTransferMethod from graphon.node_events import NodeRunResult @@ -10,11 +14,6 @@ from graphon.nodes.protocols import FileReferenceFactoryProtocol from graphon.variables.types import SegmentType from graphon.variables.variables import FileVariable -from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE -from core.workflow.file_reference import resolve_file_record_id -from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID -from factories.variable_factory import build_segment_with_type - from .entities import ContentType, WebhookData logger = logging.getLogger(__name__) @@ -29,7 +28,7 @@ class TriggerWebhookNode(Node[WebhookData]): def post_init(self) -> None: from core.workflow.node_runtime import DifyFileReferenceFactory - self._file_reference_factory = DifyFileReferenceFactory(self.graph_init_params.run_context) + self._file_reference_factory = DifyFileReferenceFactory(self.run_context) @classmethod def get_default_config(cls, filters: Mapping[str, object] | None = None) -> Mapping[str, object]: @@ -75,7 +74,7 @@ class TriggerWebhookNode(Node[WebhookData]): outputs=outputs, ) - def generate_file_var(self, param_name: str, file: dict): + def generate_file_var(self, param_name: str, file: dict[str, Any]): file_id = resolve_file_record_id(file.get("reference") or file.get("related_id")) transfer_method_value = file.get("transfer_method") if transfer_method_value: @@ -147,7 +146,7 @@ class TriggerWebhookNode(Node[WebhookData]): outputs[param_name] = str(webhook_data.get("body", {}).get("raw", "")) continue elif self.node_data.content_type == ContentType.BINARY: - raw_data: dict = webhook_data.get("body", {}).get("raw", {}) + raw_data: dict[str, Any] = webhook_data.get("body", {}).get("raw", {}) file_var = self.generate_file_var(param_name, raw_data) if file_var: outputs[param_name] = file_var @@ -155,24 +154,25 @@ class TriggerWebhookNode(Node[WebhookData]): outputs[param_name] = raw_data continue - if param_type == SegmentType.FILE: - # Get File object (already processed by webhook controller) - files = webhook_data.get("files", {}) - if files and isinstance(files, dict): - file = files.get(param_name) - if file and isinstance(file, dict): - file_var = self.generate_file_var(param_name, file) - if file_var: - outputs[param_name] = file_var + match param_type: + case SegmentType.FILE: + # Get File object (already processed by webhook controller) + files = webhook_data.get("files", {}) + if files and isinstance(files, dict): + file = files.get(param_name) + if file and isinstance(file, dict): + file_var = self.generate_file_var(param_name, file) + if file_var: + outputs[param_name] = file_var + else: + outputs[param_name] = files else: outputs[param_name] = files else: outputs[param_name] = files - else: - outputs[param_name] = files - else: - # Get regular body parameter - outputs[param_name] = webhook_data.get("body", {}).get(param_name) + case _: + # Get regular body parameter + outputs[param_name] = webhook_data.get("body", {}).get(param_name) # Include raw webhook data for debugging/advanced use outputs["_webhook_raw"] = webhook_data diff --git a/api/core/workflow/template_rendering.py b/api/core/workflow/template_rendering.py index d51cfadd09..b4ffb37549 100644 --- a/api/core/workflow/template_rendering.py +++ b/api/core/workflow/template_rendering.py @@ -3,11 +3,10 @@ from __future__ import annotations from collections.abc import Mapping from typing import Any +from core.helper.code_executor.code_executor import CodeExecutionError, CodeExecutor from graphon.nodes.code.entities import CodeLanguage from graphon.template_rendering import Jinja2TemplateRenderer, TemplateRenderError -from core.helper.code_executor.code_executor import CodeExecutionError, CodeExecutor - class CodeExecutorJinja2TemplateRenderer(Jinja2TemplateRenderer): """Sandbox-backed Jinja2 renderer for workflow-owned node composition.""" diff --git a/api/core/workflow/workflow_entry.py b/api/core/workflow/workflow_entry.py index cecc20145a..4e2f603e5b 100644 --- a/api/core/workflow/workflow_entry.py +++ b/api/core/workflow/workflow_entry.py @@ -3,6 +3,29 @@ import time from collections.abc import Generator, Mapping, Sequence from typing import Any, TypedDict +from configs import dify_config +from context import capture_current_context +from core.app.apps.exc import GenerateTaskStoppedError +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context +from core.app.file_access import DatabaseFileAccessController +from core.app.workflow.layers.llm_quota import LLMQuotaLayer +from core.app.workflow.layers.observability import ObservabilityLayer +from core.workflow.node_factory import ( + DifyGraphInitContext, + DifyNodeFactory, + is_start_node_type, + resolve_workflow_node_class, +) +from core.workflow.system_variables import ( + default_system_variables, + get_node_creation_preload_selectors, + inject_default_system_variable_mappings, + preload_node_creation_variables, +) +from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool +from core.workflow.variable_prefixes import ENVIRONMENT_VARIABLE_NODE_ID +from extensions.otel.runtime import is_instrument_flag_enabled +from factories import file_factory from graphon.entities import GraphInitParams from graphon.entities.graph_config import NodeConfigDictAdapter from graphon.errors import WorkflowNodeRunFailedError @@ -16,25 +39,6 @@ from graphon.nodes import BuiltinNodeTypes from graphon.nodes.base.node import Node from graphon.runtime import ChildGraphNotFoundError, GraphRuntimeState, VariablePool from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader, load_into_variable_pool - -from configs import dify_config -from context import capture_current_context -from core.app.apps.exc import GenerateTaskStoppedError -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context -from core.app.file_access import DatabaseFileAccessController -from core.app.workflow.layers.llm_quota import LLMQuotaLayer -from core.app.workflow.layers.observability import ObservabilityLayer -from core.workflow.node_factory import DifyNodeFactory, is_start_node_type, resolve_workflow_node_class -from core.workflow.system_variables import ( - default_system_variables, - get_node_creation_preload_selectors, - inject_default_system_variable_mappings, - preload_node_creation_variables, -) -from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool -from core.workflow.variable_prefixes import ENVIRONMENT_VARIABLE_NODE_ID -from extensions.otel.runtime import is_instrument_flag_enabled -from factories import file_factory from models.workflow import Workflow logger = logging.getLogger(__name__) @@ -251,17 +255,18 @@ class WorkflowEntry: node_version = str(node_config_data.version) node_cls = resolve_workflow_node_class(node_type=node_type, node_version=node_version) - # init graph init params and runtime state - graph_init_params = GraphInitParams( + # init graph context and runtime state + run_context = build_dify_run_context( + tenant_id=workflow.tenant_id, + app_id=workflow.app_id, + user_id=user_id, + user_from=UserFrom.ACCOUNT, + invoke_from=InvokeFrom.DEBUGGER, + ) + graph_init_context = DifyGraphInitContext( workflow_id=workflow.id, graph_config=workflow.graph_dict, - run_context=build_dify_run_context( - tenant_id=workflow.tenant_id, - app_id=workflow.app_id, - user_id=user_id, - user_from=UserFrom.ACCOUNT, - invoke_from=InvokeFrom.DEBUGGER, - ), + run_context=run_context, call_depth=0, ) graph_runtime_state = GraphRuntimeState( @@ -313,8 +318,8 @@ class WorkflowEntry: ) # init workflow run state - node_factory = DifyNodeFactory( - graph_init_params=graph_init_params, + node_factory = DifyNodeFactory.from_graph_init_context( + graph_init_context=graph_init_context, graph_runtime_state=graph_runtime_state, ) node = node_factory.create_node(node_config) @@ -409,17 +414,18 @@ class WorkflowEntry: variable_pool = VariablePool() add_variables_to_pool(variable_pool, default_system_variables()) - # init graph init params and runtime state - graph_init_params = GraphInitParams( + # init graph context and runtime state + run_context = build_dify_run_context( + tenant_id=tenant_id, + app_id="", + user_id=user_id, + user_from=UserFrom.ACCOUNT, + invoke_from=InvokeFrom.DEBUGGER, + ) + graph_init_context = DifyGraphInitContext( workflow_id="", graph_config=graph_dict, - run_context=build_dify_run_context( - tenant_id=tenant_id, - app_id="", - user_id=user_id, - user_from=UserFrom.ACCOUNT, - invoke_from=InvokeFrom.DEBUGGER, - ), + run_context=run_context, call_depth=0, ) graph_runtime_state = GraphRuntimeState( @@ -430,8 +436,8 @@ class WorkflowEntry: # init workflow run state node_config = NodeConfigDictAdapter.validate_python({"id": node_id, "data": node_data}) - node_factory = DifyNodeFactory( - graph_init_params=graph_init_params, + node_factory = DifyNodeFactory.from_graph_init_context( + graph_init_context=graph_init_context, graph_runtime_state=graph_runtime_state, ) node = node_factory.create_node(node_config) diff --git a/api/docker/entrypoint.sh b/api/docker/entrypoint.sh index 6b904b7d0d..fc118df5bc 100755 --- a/api/docker/entrypoint.sh +++ b/api/docker/entrypoint.sh @@ -35,10 +35,10 @@ if [[ "${MODE}" == "worker" ]]; then if [[ -z "${CELERY_QUEUES}" ]]; then if [[ "${EDITION}" == "CLOUD" ]]; then # Cloud edition: separate queues for dataset and trigger tasks - DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow_professional,workflow_team,workflow_sandbox,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution" + DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow_professional,workflow_team,workflow_sandbox,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_publisher,trigger_refresh_executor,retention,workflow_based_app_execution" else # Community edition (SELF_HOSTED): dataset, pipeline and workflow have separate queues - DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution" + DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_publisher,trigger_refresh_executor,retention,workflow_based_app_execution" fi else DEFAULT_QUEUES="${CELERY_QUEUES}" @@ -119,14 +119,16 @@ elif [[ "${MODE}" == "job" ]]; then else if [[ "${DEBUG}" == "true" ]]; then - exec flask run --host=${DIFY_BIND_ADDRESS:-0.0.0.0} --port=${DIFY_PORT:-5001} --debug + export HOST=${DIFY_BIND_ADDRESS:-0.0.0.0} + export PORT=${DIFY_PORT:-5001} + exec python -m app else exec gunicorn \ --bind "${DIFY_BIND_ADDRESS:-0.0.0.0}:${DIFY_PORT:-5001}" \ --workers ${SERVER_WORKER_AMOUNT:-1} \ - --worker-class ${SERVER_WORKER_CLASS:-gevent} \ + --worker-class ${SERVER_WORKER_CLASS:-geventwebsocket.gunicorn.workers.GeventWebSocketWorker} \ --worker-connections ${SERVER_WORKER_CONNECTIONS:-10} \ --timeout ${GUNICORN_TIMEOUT:-200} \ - app:app + app:socketio_app fi fi diff --git a/api/enterprise/telemetry/draft_trace.py b/api/enterprise/telemetry/draft_trace.py index 5a8d0ee6f4..dff558988c 100644 --- a/api/enterprise/telemetry/draft_trace.py +++ b/api/enterprise/telemetry/draft_trace.py @@ -3,10 +3,9 @@ from __future__ import annotations from collections.abc import Mapping from typing import Any -from graphon.enums import WorkflowNodeExecutionMetadataKey - from core.telemetry import TelemetryContext, TelemetryEvent, TraceTaskName from core.telemetry import emit as telemetry_emit +from graphon.enums import WorkflowNodeExecutionMetadataKey from models.workflow import WorkflowNodeExecutionModel diff --git a/api/enterprise/telemetry/metric_handler.py b/api/enterprise/telemetry/metric_handler.py index ffd9a7e2b5..c564ace584 100644 --- a/api/enterprise/telemetry/metric_handler.py +++ b/api/enterprise/telemetry/metric_handler.py @@ -68,46 +68,49 @@ class EnterpriseMetricHandler: # Route to appropriate handler based on case case = envelope.case - if case == TelemetryCase.APP_CREATED: - self._on_app_created(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "app_created"}) - elif case == TelemetryCase.APP_UPDATED: - self._on_app_updated(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "app_updated"}) - elif case == TelemetryCase.APP_DELETED: - self._on_app_deleted(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "app_deleted"}) - elif case == TelemetryCase.FEEDBACK_CREATED: - self._on_feedback_created(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "feedback_created"}) - elif case == TelemetryCase.MESSAGE_RUN: - self._on_message_run(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "message_run"}) - elif case == TelemetryCase.TOOL_EXECUTION: - self._on_tool_execution(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "tool_execution"}) - elif case == TelemetryCase.MODERATION_CHECK: - self._on_moderation_check(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "moderation_check"}) - elif case == TelemetryCase.SUGGESTED_QUESTION: - self._on_suggested_question(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "suggested_question"}) - elif case == TelemetryCase.DATASET_RETRIEVAL: - self._on_dataset_retrieval(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "dataset_retrieval"}) - elif case == TelemetryCase.GENERATE_NAME: - self._on_generate_name(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "generate_name"}) - elif case == TelemetryCase.PROMPT_GENERATION: - self._on_prompt_generation(envelope) - self._increment_diagnostic_counter("processed_total", {"case": "prompt_generation"}) - else: - logger.warning( - "Unknown telemetry case: %s (tenant_id=%s, event_id=%s)", - case, - envelope.tenant_id, - envelope.event_id, - ) + match case: + case TelemetryCase.APP_CREATED: + self._on_app_created(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "app_created"}) + case TelemetryCase.APP_UPDATED: + self._on_app_updated(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "app_updated"}) + case TelemetryCase.APP_DELETED: + self._on_app_deleted(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "app_deleted"}) + case TelemetryCase.FEEDBACK_CREATED: + self._on_feedback_created(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "feedback_created"}) + case TelemetryCase.MESSAGE_RUN: + self._on_message_run(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "message_run"}) + case TelemetryCase.TOOL_EXECUTION: + self._on_tool_execution(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "tool_execution"}) + case TelemetryCase.MODERATION_CHECK: + self._on_moderation_check(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "moderation_check"}) + case TelemetryCase.SUGGESTED_QUESTION: + self._on_suggested_question(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "suggested_question"}) + case TelemetryCase.DATASET_RETRIEVAL: + self._on_dataset_retrieval(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "dataset_retrieval"}) + case TelemetryCase.GENERATE_NAME: + self._on_generate_name(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "generate_name"}) + case TelemetryCase.PROMPT_GENERATION: + self._on_prompt_generation(envelope) + self._increment_diagnostic_counter("processed_total", {"case": "prompt_generation"}) + case TelemetryCase.WORKFLOW_RUN | TelemetryCase.NODE_EXECUTION | TelemetryCase.DRAFT_NODE_EXECUTION: + pass + case _: + logger.warning( + "Unknown telemetry case: %s (tenant_id=%s, event_id=%s)", + case, + envelope.tenant_id, + envelope.event_id, + ) def _is_duplicate(self, envelope: TelemetryEnvelope) -> bool: """Check if this event has already been processed. @@ -326,7 +329,7 @@ class EnterpriseMetricHandler: return include_content = exporter.include_content - attrs: dict = { + attrs: dict[str, Any] = { "dify.message.id": payload.get("message_id"), "dify.tenant_id": envelope.tenant_id, "dify.event.id": envelope.event_id, diff --git a/api/events/event_handlers/create_document_index.py b/api/events/event_handlers/create_document_index.py index b7e7a6e60f..0c535a1c5b 100644 --- a/api/events/event_handlers/create_document_index.py +++ b/api/events/event_handlers/create_document_index.py @@ -6,9 +6,9 @@ import click from sqlalchemy import select from werkzeug.exceptions import NotFound +from core.db.session_factory import session_factory from core.indexing_runner import DocumentIsPausedError, IndexingRunner from events.document_index_event import document_index_created -from extensions.ext_database import db from libs.datetime_utils import naive_utc_now from models.dataset import Document from models.enums import IndexingStatus @@ -22,24 +22,25 @@ def handle(sender, **kwargs): document_ids = kwargs.get("document_ids", []) documents = [] start_at = time.perf_counter() - for document_id in document_ids: - logger.info(click.style(f"Start process document: {document_id}", fg="green")) + with session_factory.create_session() as session: + for document_id in document_ids: + logger.info(click.style(f"Start process document: {document_id}", fg="green")) - document = db.session.scalar( - select(Document).where( - Document.id == document_id, - Document.dataset_id == dataset_id, + document = session.scalar( + select(Document).where( + Document.id == document_id, + Document.dataset_id == dataset_id, + ) ) - ) - if not document: - raise NotFound("Document not found") + if not document: + raise NotFound("Document not found") - document.indexing_status = IndexingStatus.PARSING - document.processing_started_at = naive_utc_now() - documents.append(document) - db.session.add(document) - db.session.commit() + document.indexing_status = IndexingStatus.PARSING + document.processing_started_at = naive_utc_now() + documents.append(document) + session.add(document) + session.commit() with contextlib.suppress(Exception): try: diff --git a/api/events/event_handlers/create_installed_app_when_app_created.py b/api/events/event_handlers/create_installed_app_when_app_created.py index 57412cc4ad..38e102d5fd 100644 --- a/api/events/event_handlers/create_installed_app_when_app_created.py +++ b/api/events/event_handlers/create_installed_app_when_app_created.py @@ -1,5 +1,5 @@ +from core.db.session_factory import session_factory from events.app_event import app_was_created -from extensions.ext_database import db from models.model import InstalledApp @@ -12,5 +12,6 @@ def handle(sender, **kwargs): app_id=app.id, app_owner_tenant_id=app.tenant_id, ) - db.session.add(installed_app) - db.session.commit() + with session_factory.create_session() as session: + session.add(installed_app) + session.commit() diff --git a/api/events/event_handlers/create_site_record_when_app_created.py b/api/events/event_handlers/create_site_record_when_app_created.py index 84be592b1a..5e2a456dce 100644 --- a/api/events/event_handlers/create_site_record_when_app_created.py +++ b/api/events/event_handlers/create_site_record_when_app_created.py @@ -1,5 +1,5 @@ +from core.db.session_factory import session_factory from events.app_event import app_was_created -from extensions.ext_database import db from models.enums import CustomizeTokenStrategy from models.model import Site @@ -22,6 +22,6 @@ def handle(sender, **kwargs): created_by=app.created_by, updated_by=app.updated_by, ) - - db.session.add(site) - db.session.commit() + with session_factory.create_session() as session: + session.add(site) + session.commit() diff --git a/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py b/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py index 7bd8e88231..ba9758175f 100644 --- a/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py +++ b/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py @@ -1,12 +1,11 @@ import logging -from graphon.nodes import BuiltinNodeTypes -from graphon.nodes.tool.entities import ToolEntity - from core.tools.entities.tool_entities import ToolProviderType from core.tools.tool_manager import ToolManager from core.tools.utils.configuration import ToolParameterConfigurationManager from events.app_event import app_draft_workflow_was_synced +from graphon.nodes import BuiltinNodeTypes +from graphon.nodes.tool.entities import ToolEntity logger = logging.getLogger(__name__) diff --git a/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py b/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py index 86b5b2bbf0..6769b94cde 100644 --- a/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py +++ b/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py @@ -1,11 +1,11 @@ from typing import cast -from graphon.nodes import BuiltinNodeTypes from sqlalchemy import delete, select from core.workflow.nodes.knowledge_retrieval.entities import KnowledgeRetrievalNodeData from events.app_event import app_published_workflow_was_updated from extensions.ext_database import db +from graphon.nodes import BuiltinNodeTypes from models.dataset import AppDatasetJoin from models.workflow import Workflow diff --git a/api/extensions/ext_celery.py b/api/extensions/ext_celery.py index 1b3ccd1207..340f514fcc 100644 --- a/api/extensions/ext_celery.py +++ b/api/extensions/ext_celery.py @@ -5,12 +5,32 @@ from typing import Any import pytz # type: ignore[import-untyped] from celery import Celery, Task from celery.schedules import crontab +from typing_extensions import TypedDict from configs import dify_config from dify_app import DifyApp +from extensions.redis_names import normalize_redis_key_prefix -def get_celery_ssl_options() -> dict[str, Any] | None: +class _CelerySentinelKwargsDict(TypedDict): + socket_timeout: float | None + password: str | None + + +class CelerySentinelTransportDict(TypedDict, total=False): + master_name: str | None + sentinel_kwargs: _CelerySentinelKwargsDict + global_keyprefix: str + + +class CelerySSLOptionsDict(TypedDict): + ssl_cert_reqs: int + ssl_ca_certs: str | None + ssl_certfile: str | None + ssl_keyfile: str | None + + +def get_celery_ssl_options() -> CelerySSLOptionsDict | None: """Get SSL configuration for Celery broker/backend connections.""" # Only apply SSL if we're using Redis as broker/backend if not dify_config.BROKER_USE_SSL: @@ -33,27 +53,41 @@ def get_celery_ssl_options() -> dict[str, Any] | None: ssl_cert_reqs = cert_reqs_map.get(dify_config.REDIS_SSL_CERT_REQS, ssl.CERT_NONE) - ssl_options = { - "ssl_cert_reqs": ssl_cert_reqs, - "ssl_ca_certs": dify_config.REDIS_SSL_CA_CERTS, - "ssl_certfile": dify_config.REDIS_SSL_CERTFILE, - "ssl_keyfile": dify_config.REDIS_SSL_KEYFILE, - } - - return ssl_options + return CelerySSLOptionsDict( + ssl_cert_reqs=ssl_cert_reqs, + ssl_ca_certs=dify_config.REDIS_SSL_CA_CERTS, + ssl_certfile=dify_config.REDIS_SSL_CERTFILE, + ssl_keyfile=dify_config.REDIS_SSL_KEYFILE, + ) -def get_celery_broker_transport_options() -> dict[str, Any]: +def get_celery_broker_transport_options() -> CelerySentinelTransportDict | dict[str, Any]: """Get broker transport options (e.g. Redis Sentinel) for Celery connections.""" + transport_options: CelerySentinelTransportDict | dict[str, Any] if dify_config.CELERY_USE_SENTINEL: - return { - "master_name": dify_config.CELERY_SENTINEL_MASTER_NAME, - "sentinel_kwargs": { - "socket_timeout": dify_config.CELERY_SENTINEL_SOCKET_TIMEOUT, - "password": dify_config.CELERY_SENTINEL_PASSWORD, - }, - } - return {} + transport_options = CelerySentinelTransportDict( + master_name=dify_config.CELERY_SENTINEL_MASTER_NAME, + sentinel_kwargs=_CelerySentinelKwargsDict( + socket_timeout=dify_config.CELERY_SENTINEL_SOCKET_TIMEOUT, + password=dify_config.CELERY_SENTINEL_PASSWORD, + ), + ) + else: + transport_options = {} + + global_keyprefix = get_celery_redis_global_keyprefix() + if global_keyprefix: + transport_options["global_keyprefix"] = global_keyprefix + + return transport_options + + +def get_celery_redis_global_keyprefix() -> str | None: + """Return the Redis transport prefix for Celery when namespace isolation is enabled.""" + normalized_prefix = normalize_redis_key_prefix(dify_config.REDIS_KEY_PREFIX) + if not normalized_prefix: + return None + return f"{normalized_prefix}:" def init_app(app: DifyApp) -> Celery: diff --git a/api/extensions/ext_redis.py b/api/extensions/ext_redis.py index 5f528dbf9e..9f7f73765e 100644 --- a/api/extensions/ext_redis.py +++ b/api/extensions/ext_redis.py @@ -3,29 +3,41 @@ import logging import ssl from collections.abc import Callable from datetime import timedelta -from typing import TYPE_CHECKING, Any, Union +from typing import Any, Union, cast import redis from redis import RedisError +from redis.backoff import ExponentialWithJitterBackoff # type: ignore from redis.cache import CacheConfig from redis.client import PubSub from redis.cluster import ClusterNode, RedisCluster from redis.connection import Connection, SSLConnection +from redis.retry import Retry from redis.sentinel import Sentinel +from typing_extensions import TypedDict from configs import dify_config from dify_app import DifyApp +from extensions.redis_names import ( + normalize_redis_key_prefix, + serialize_redis_name, + serialize_redis_name_arg, + serialize_redis_name_args, +) from libs.broadcast_channel.channel import BroadcastChannel as BroadcastChannelProtocol from libs.broadcast_channel.redis.channel import BroadcastChannel as RedisBroadcastChannel from libs.broadcast_channel.redis.sharded_channel import ShardedRedisBroadcastChannel from libs.broadcast_channel.redis.streams_channel import StreamsBroadcastChannel -if TYPE_CHECKING: - from redis.lock import Lock - logger = logging.getLogger(__name__) +_normalize_redis_key_prefix = normalize_redis_key_prefix +_serialize_redis_name = serialize_redis_name +_serialize_redis_name_arg = serialize_redis_name_arg +_serialize_redis_name_args = serialize_redis_name_args + + class RedisClientWrapper: """ A wrapper class for the Redis client that addresses the issue where the global @@ -56,74 +68,189 @@ class RedisClientWrapper: if self._client is None: self._client = client - if TYPE_CHECKING: - # Type hints for IDE support and static analysis - # These are not executed at runtime but provide type information - def get(self, name: str | bytes) -> Any: ... - - def set( - self, - name: str | bytes, - value: Any, - ex: int | None = None, - px: int | None = None, - nx: bool = False, - xx: bool = False, - keepttl: bool = False, - get: bool = False, - exat: int | None = None, - pxat: int | None = None, - ) -> Any: ... - - def setex(self, name: str | bytes, time: int | timedelta, value: Any) -> Any: ... - def setnx(self, name: str | bytes, value: Any) -> Any: ... - def delete(self, *names: str | bytes) -> Any: ... - def incr(self, name: str | bytes, amount: int = 1) -> Any: ... - def expire( - self, - name: str | bytes, - time: int | timedelta, - nx: bool = False, - xx: bool = False, - gt: bool = False, - lt: bool = False, - ) -> Any: ... - def lock( - self, - name: str, - timeout: float | None = None, - sleep: float = 0.1, - blocking: bool = True, - blocking_timeout: float | None = None, - thread_local: bool = True, - ) -> Lock: ... - def zadd( - self, - name: str | bytes, - mapping: dict[str | bytes | int | float, float | int | str | bytes], - nx: bool = False, - xx: bool = False, - ch: bool = False, - incr: bool = False, - gt: bool = False, - lt: bool = False, - ) -> Any: ... - def zremrangebyscore(self, name: str | bytes, min: float | str, max: float | str) -> Any: ... - def zcard(self, name: str | bytes) -> Any: ... - def getdel(self, name: str | bytes) -> Any: ... - def pubsub(self) -> PubSub: ... - def pipeline(self, transaction: bool = True, shard_hint: str | None = None) -> Any: ... - - def __getattr__(self, item: str) -> Any: + def _require_client(self) -> redis.Redis | RedisCluster: if self._client is None: raise RuntimeError("Redis client is not initialized. Call init_app first.") - return getattr(self._client, item) + return self._client + + def _get_prefix(self) -> str: + return dify_config.REDIS_KEY_PREFIX + + def get(self, name: str | bytes) -> Any: + return self._require_client().get(_serialize_redis_name_arg(name, self._get_prefix())) + + def set( + self, + name: str | bytes, + value: Any, + ex: int | None = None, + px: int | None = None, + nx: bool = False, + xx: bool = False, + keepttl: bool = False, + get: bool = False, + exat: int | None = None, + pxat: int | None = None, + ) -> Any: + return self._require_client().set( + _serialize_redis_name_arg(name, self._get_prefix()), + value, + ex=ex, + px=px, + nx=nx, + xx=xx, + keepttl=keepttl, + get=get, + exat=exat, + pxat=pxat, + ) + + def setex(self, name: str | bytes, time: int | timedelta, value: Any) -> Any: + return self._require_client().setex(_serialize_redis_name_arg(name, self._get_prefix()), time, value) + + def setnx(self, name: str | bytes, value: Any) -> Any: + return self._require_client().setnx(_serialize_redis_name_arg(name, self._get_prefix()), value) + + def delete(self, *names: str | bytes) -> Any: + return self._require_client().delete(*_serialize_redis_name_args(names, self._get_prefix())) + + def incr(self, name: str | bytes, amount: int = 1) -> Any: + return self._require_client().incr(_serialize_redis_name_arg(name, self._get_prefix()), amount) + + def expire( + self, + name: str | bytes, + time: int | timedelta, + nx: bool = False, + xx: bool = False, + gt: bool = False, + lt: bool = False, + ) -> Any: + return self._require_client().expire( + _serialize_redis_name_arg(name, self._get_prefix()), + time, + nx=nx, + xx=xx, + gt=gt, + lt=lt, + ) + + def exists(self, *names: str | bytes) -> Any: + return self._require_client().exists(*_serialize_redis_name_args(names, self._get_prefix())) + + def ttl(self, name: str | bytes) -> Any: + return self._require_client().ttl(_serialize_redis_name_arg(name, self._get_prefix())) + + def getdel(self, name: str | bytes) -> Any: + return self._require_client().getdel(_serialize_redis_name_arg(name, self._get_prefix())) + + def lock( + self, + name: str, + timeout: float | None = None, + sleep: float = 0.1, + blocking: bool = True, + blocking_timeout: float | None = None, + thread_local: bool = True, + ) -> Any: + return self._require_client().lock( + _serialize_redis_name(name, self._get_prefix()), + timeout=timeout, + sleep=sleep, + blocking=blocking, + blocking_timeout=blocking_timeout, + thread_local=thread_local, + ) + + def hset(self, name: str | bytes, *args: Any, **kwargs: Any) -> Any: + return self._require_client().hset(_serialize_redis_name_arg(name, self._get_prefix()), *args, **kwargs) + + def hgetall(self, name: str | bytes) -> Any: + return self._require_client().hgetall(_serialize_redis_name_arg(name, self._get_prefix())) + + def hdel(self, name: str | bytes, *keys: str | bytes) -> Any: + return self._require_client().hdel(_serialize_redis_name_arg(name, self._get_prefix()), *keys) + + def hlen(self, name: str | bytes) -> Any: + return self._require_client().hlen(_serialize_redis_name_arg(name, self._get_prefix())) + + def zadd( + self, + name: str | bytes, + mapping: dict[str | bytes | int | float, float | int | str | bytes], + nx: bool = False, + xx: bool = False, + ch: bool = False, + incr: bool = False, + gt: bool = False, + lt: bool = False, + ) -> Any: + return self._require_client().zadd( + _serialize_redis_name_arg(name, self._get_prefix()), + cast(Any, mapping), + nx=nx, + xx=xx, + ch=ch, + incr=incr, + gt=gt, + lt=lt, + ) + + def zremrangebyscore(self, name: str | bytes, min: float | str, max: float | str) -> Any: + return self._require_client().zremrangebyscore(_serialize_redis_name_arg(name, self._get_prefix()), min, max) + + def zcard(self, name: str | bytes) -> Any: + return self._require_client().zcard(_serialize_redis_name_arg(name, self._get_prefix())) + + def pubsub(self) -> PubSub: + return self._require_client().pubsub() + + def pipeline(self, transaction: bool = True, shard_hint: str | None = None) -> Any: + return self._require_client().pipeline(transaction=transaction, shard_hint=shard_hint) + + def __getattr__(self, item: str) -> Any: + return getattr(self._require_client(), item) redis_client: RedisClientWrapper = RedisClientWrapper() _pubsub_redis_client: redis.Redis | RedisCluster | None = None +class RedisSSLParamsDict(TypedDict): + ssl_cert_reqs: int + ssl_ca_certs: str | None + ssl_certfile: str | None + ssl_keyfile: str | None + + +class RedisHealthParamsDict(TypedDict): + retry: Retry + socket_timeout: float | None + socket_connect_timeout: float | None + health_check_interval: int | None + + +class RedisClusterHealthParamsDict(TypedDict): + retry: Retry + socket_timeout: float | None + socket_connect_timeout: float | None + + +class RedisBaseParamsDict(TypedDict): + username: str | None + password: str | None + db: int + encoding: str + encoding_errors: str + decode_responses: bool + protocol: int + cache_config: CacheConfig | None + retry: Retry + socket_timeout: float | None + socket_connect_timeout: float | None + health_check_interval: int | None + + def _get_ssl_configuration() -> tuple[type[Union[Connection, SSLConnection]], dict[str, Any]]: """Get SSL configuration for Redis connection.""" if not dify_config.REDIS_USE_SSL: @@ -158,21 +285,60 @@ def _get_cache_configuration() -> CacheConfig | None: return CacheConfig() -def _get_base_redis_params() -> dict[str, Any]: - """Get base Redis connection parameters.""" - return { - "username": dify_config.REDIS_USERNAME, - "password": dify_config.REDIS_PASSWORD or None, - "db": dify_config.REDIS_DB, - "encoding": "utf-8", - "encoding_errors": "strict", - "decode_responses": False, - "protocol": dify_config.REDIS_SERIALIZATION_PROTOCOL, - "cache_config": _get_cache_configuration(), +def _get_retry_policy() -> Retry: + """Build the shared retry policy for Redis connections.""" + return Retry( + backoff=ExponentialWithJitterBackoff( + base=dify_config.REDIS_RETRY_BACKOFF_BASE, + cap=dify_config.REDIS_RETRY_BACKOFF_CAP, + ), + retries=dify_config.REDIS_RETRY_RETRIES, + ) + + +def _get_connection_health_params() -> RedisHealthParamsDict: + """Get connection health and retry parameters for standalone and Sentinel Redis clients.""" + return RedisHealthParamsDict( + retry=_get_retry_policy(), + socket_timeout=dify_config.REDIS_SOCKET_TIMEOUT, + socket_connect_timeout=dify_config.REDIS_SOCKET_CONNECT_TIMEOUT, + health_check_interval=dify_config.REDIS_HEALTH_CHECK_INTERVAL, + ) + + +def _get_cluster_connection_health_params() -> RedisClusterHealthParamsDict: + """Get retry and timeout parameters for Redis Cluster clients. + + RedisCluster does not support ``health_check_interval`` as a constructor + keyword (it is silently stripped by ``cleanup_kwargs``), so it is excluded + here. Only ``retry``, ``socket_timeout``, and ``socket_connect_timeout`` + are passed through. + """ + health_params = _get_connection_health_params() + result: RedisClusterHealthParamsDict = { + "retry": health_params["retry"], + "socket_timeout": health_params["socket_timeout"], + "socket_connect_timeout": health_params["socket_connect_timeout"], } + return result -def _create_sentinel_client(redis_params: dict[str, Any]) -> Union[redis.Redis, RedisCluster]: +def _get_base_redis_params() -> RedisBaseParamsDict: + """Get base Redis connection parameters including retry and health policy.""" + return RedisBaseParamsDict( + username=dify_config.REDIS_USERNAME, + password=dify_config.REDIS_PASSWORD or None, + db=dify_config.REDIS_DB, + encoding="utf-8", + encoding_errors="strict", + decode_responses=False, + protocol=dify_config.REDIS_SERIALIZATION_PROTOCOL, + cache_config=_get_cache_configuration(), + **_get_connection_health_params(), + ) + + +def _create_sentinel_client(redis_params: RedisBaseParamsDict) -> Union[redis.Redis, RedisCluster]: """Create Redis client using Sentinel configuration.""" if not dify_config.REDIS_SENTINELS: raise ValueError("REDIS_SENTINELS must be set when REDIS_USE_SENTINEL is True") @@ -196,7 +362,8 @@ def _create_sentinel_client(redis_params: dict[str, Any]) -> Union[redis.Redis, sentinel_kwargs=sentinel_kwargs, ) - master: redis.Redis = sentinel.master_for(dify_config.REDIS_SENTINEL_SERVICE_NAME, **redis_params) + params: dict[str, Any] = {**redis_params} + master: redis.Redis = sentinel.master_for(dify_config.REDIS_SENTINEL_SERVICE_NAME, **params) return master @@ -215,6 +382,7 @@ def _create_cluster_client() -> Union[redis.Redis, RedisCluster]: "password": dify_config.REDIS_CLUSTERS_PASSWORD, "protocol": dify_config.REDIS_SERIALIZATION_PROTOCOL, "cache_config": _get_cache_configuration(), + **_get_cluster_connection_health_params(), } if dify_config.REDIS_MAX_CONNECTIONS: cluster_kwargs["max_connections"] = dify_config.REDIS_MAX_CONNECTIONS @@ -222,41 +390,43 @@ def _create_cluster_client() -> Union[redis.Redis, RedisCluster]: return cluster -def _create_standalone_client(redis_params: dict[str, Any]) -> Union[redis.Redis, RedisCluster]: +def _create_standalone_client(redis_params: RedisBaseParamsDict) -> Union[redis.Redis, RedisCluster]: """Create standalone Redis client.""" connection_class, ssl_kwargs = _get_ssl_configuration() - redis_params.update( - { - "host": dify_config.REDIS_HOST, - "port": dify_config.REDIS_PORT, - "connection_class": connection_class, - } - ) + params: dict[str, Any] = { + **redis_params, + "host": dify_config.REDIS_HOST, + "port": dify_config.REDIS_PORT, + "connection_class": connection_class, + } if dify_config.REDIS_MAX_CONNECTIONS: - redis_params["max_connections"] = dify_config.REDIS_MAX_CONNECTIONS + params["max_connections"] = dify_config.REDIS_MAX_CONNECTIONS if ssl_kwargs: - redis_params.update(ssl_kwargs) + params.update(ssl_kwargs) - pool = redis.ConnectionPool(**redis_params) + pool = redis.ConnectionPool(**params) client: redis.Redis = redis.Redis(connection_pool=pool) return client def _create_pubsub_client(pubsub_url: str, use_clusters: bool) -> redis.Redis | RedisCluster: max_conns = dify_config.REDIS_MAX_CONNECTIONS - if use_clusters: - if max_conns: - return RedisCluster.from_url(pubsub_url, max_connections=max_conns) - else: - return RedisCluster.from_url(pubsub_url) + if use_clusters: + health_params = _get_cluster_connection_health_params() + kwargs: dict[str, Any] = {**health_params} + if max_conns: + kwargs["max_connections"] = max_conns + return RedisCluster.from_url(pubsub_url, **kwargs) + + standalone_health_params: dict[str, Any] = dict(_get_connection_health_params()) + kwargs = {**standalone_health_params} if max_conns: - return redis.Redis.from_url(pubsub_url, max_connections=max_conns) - else: - return redis.Redis.from_url(pubsub_url) + kwargs["max_connections"] = max_conns + return redis.Redis.from_url(pubsub_url, **kwargs) def init_app(app: DifyApp): diff --git a/api/extensions/ext_sentry.py b/api/extensions/ext_sentry.py index 5cc58f27c4..69d1f1ab07 100644 --- a/api/extensions/ext_sentry.py +++ b/api/extensions/ext_sentry.py @@ -5,11 +5,12 @@ from dify_app import DifyApp def init_app(app: DifyApp): if dify_config.SENTRY_DSN: import sentry_sdk - from graphon.model_runtime.errors.invoke import InvokeRateLimitError from sentry_sdk.integrations.celery import CeleryIntegration from sentry_sdk.integrations.flask import FlaskIntegration from werkzeug.exceptions import HTTPException + from graphon.model_runtime.errors.invoke import InvokeRateLimitError + try: from langfuse._utils import parse_error diff --git a/api/extensions/ext_socketio.py b/api/extensions/ext_socketio.py new file mode 100644 index 0000000000..5ed82bac8d --- /dev/null +++ b/api/extensions/ext_socketio.py @@ -0,0 +1,5 @@ +import socketio # type: ignore[reportMissingTypeStubs] + +from configs import dify_config + +sio = socketio.Server(async_mode="gevent", cors_allowed_origins=dify_config.CONSOLE_CORS_ALLOW_ORIGINS) diff --git a/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py b/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py index db599c5d49..64ff0f0674 100644 --- a/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py +++ b/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py @@ -11,12 +11,12 @@ from collections.abc import Sequence from datetime import datetime from typing import Any -from graphon.enums import WorkflowNodeExecutionStatus from sqlalchemy.orm import sessionmaker from extensions.logstore.aliyun_logstore import AliyunLogStore from extensions.logstore.repositories import safe_float, safe_int from extensions.logstore.sql_escape import escape_identifier, escape_logstore_query_value +from graphon.enums import WorkflowNodeExecutionStatus from models.enums import CreatorUserRole from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom from repositories.api_workflow_node_execution_repository import DifyAPIWorkflowNodeExecutionRepository diff --git a/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py b/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py index 2745141431..7f77a0437a 100644 --- a/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py +++ b/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py @@ -20,12 +20,12 @@ from collections.abc import Sequence from datetime import datetime from typing import Any, cast -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import sessionmaker from extensions.logstore.aliyun_logstore import AliyunLogStore from extensions.logstore.repositories import safe_float, safe_int from extensions.logstore.sql_escape import escape_identifier, escape_logstore_query_value, escape_sql_string +from graphon.enums import WorkflowExecutionStatus from libs.infinite_scroll_pagination import InfiniteScrollPagination from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowRun, WorkflowType diff --git a/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py b/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py index d0f3e2e244..544109276d 100644 --- a/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py +++ b/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py @@ -3,14 +3,14 @@ import logging import os import time -from graphon.entities import WorkflowExecution -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.factory import WorkflowExecutionRepository from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository from extensions.logstore.aliyun_logstore import AliyunLogStore +from graphon.entities import WorkflowExecution +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from models import ( Account, diff --git a/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py b/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py index 37952d6464..dc7654a25c 100644 --- a/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py +++ b/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py @@ -13,10 +13,6 @@ from collections.abc import Sequence from datetime import datetime from typing import Any -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker @@ -26,6 +22,10 @@ from core.repositories.factory import OrderConfig, WorkflowNodeExecutionReposito from extensions.logstore.aliyun_logstore import AliyunLogStore from extensions.logstore.repositories import safe_float, safe_int from extensions.logstore.sql_escape import escape_identifier +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from models import ( Account, diff --git a/api/extensions/otel/decorators/base.py b/api/extensions/otel/decorators/base.py index 1dd92caeae..ad83826427 100644 --- a/api/extensions/otel/decorators/base.py +++ b/api/extensions/otel/decorators/base.py @@ -37,12 +37,7 @@ def trace_span[**P, R](handler_class: type[SpanHandler] | None = None) -> Callab handler = _get_handler_instance(handler_class or SpanHandler) tracer = get_tracer(__name__) - return handler.wrapper( - tracer=tracer, - wrapped=func, - args=args, - kwargs=kwargs, - ) + return handler.wrapper(tracer, func, *args, **kwargs) return cast(Callable[P, R], wrapper) diff --git a/api/extensions/otel/decorators/handler.py b/api/extensions/otel/decorators/handler.py index e465a615a6..b0d9fa7af6 100644 --- a/api/extensions/otel/decorators/handler.py +++ b/api/extensions/otel/decorators/handler.py @@ -1,8 +1,8 @@ import inspect -from collections.abc import Callable, Mapping +from collections.abc import Callable from typing import Any -from opentelemetry.trace import SpanKind, Status, StatusCode +from opentelemetry.trace import SpanKind, Status, StatusCode, Tracer class SpanHandler: @@ -16,9 +16,9 @@ class SpanHandler: exceptions. Handlers can override the wrapper method to customize behavior. """ - _signature_cache: dict[Callable[..., Any], inspect.Signature] = {} + _signature_cache: dict[Callable[..., object], inspect.Signature] = {} - def _build_span_name(self, wrapped: Callable[..., Any]) -> str: + def _build_span_name[**P, R](self, wrapped: Callable[P, R]) -> str: """ Build the span name from the wrapped function. @@ -29,11 +29,11 @@ class SpanHandler: """ return f"{wrapped.__module__}.{wrapped.__qualname__}" - def _extract_arguments[T]( + def _extract_arguments[**P, R]( self, - wrapped: Callable[..., T], - args: tuple[object, ...], - kwargs: Mapping[str, object], + wrapped: Callable[P, R], + *args: P.args, + **kwargs: P.kwargs, ) -> dict[str, Any] | None: """ Extract function arguments using inspect.signature. @@ -59,13 +59,13 @@ class SpanHandler: except Exception: return None - def wrapper[T]( + def wrapper[**P, R]( self, - tracer: Any, - wrapped: Callable[..., T], - args: tuple[object, ...], - kwargs: Mapping[str, object], - ) -> T: + tracer: Tracer, + wrapped: Callable[P, R], + *args: P.args, + **kwargs: P.kwargs, + ) -> R: """ Fully control the wrapper behavior. diff --git a/api/extensions/otel/decorators/handlers/generate_handler.py b/api/extensions/otel/decorators/handlers/generate_handler.py index cc6c75304f..df5142c310 100644 --- a/api/extensions/otel/decorators/handlers/generate_handler.py +++ b/api/extensions/otel/decorators/handlers/generate_handler.py @@ -1,8 +1,7 @@ import logging -from collections.abc import Callable, Mapping -from typing import Any +from collections.abc import Callable -from opentelemetry.trace import SpanKind, Status, StatusCode +from opentelemetry.trace import SpanKind, Status, StatusCode, Tracer from opentelemetry.util.types import AttributeValue from extensions.otel.decorators.handler import SpanHandler @@ -15,15 +14,15 @@ logger = logging.getLogger(__name__) class AppGenerateHandler(SpanHandler): """Span handler for ``AppGenerateService.generate``.""" - def wrapper[T]( + def wrapper[**P, R]( self, - tracer: Any, - wrapped: Callable[..., T], - args: tuple[object, ...], - kwargs: Mapping[str, object], - ) -> T: + tracer: Tracer, + wrapped: Callable[P, R], + *args: P.args, + **kwargs: P.kwargs, + ) -> R: try: - arguments = self._extract_arguments(wrapped, args, kwargs) + arguments = self._extract_arguments(wrapped, *args, **kwargs) if not arguments: return wrapped(*args, **kwargs) diff --git a/api/extensions/otel/decorators/handlers/workflow_app_runner_handler.py b/api/extensions/otel/decorators/handlers/workflow_app_runner_handler.py index 8abd60197c..6b2112ceb2 100644 --- a/api/extensions/otel/decorators/handlers/workflow_app_runner_handler.py +++ b/api/extensions/otel/decorators/handlers/workflow_app_runner_handler.py @@ -1,8 +1,7 @@ import logging -from collections.abc import Callable, Mapping -from typing import Any +from collections.abc import Callable -from opentelemetry.trace import SpanKind, Status, StatusCode +from opentelemetry.trace import SpanKind, Status, StatusCode, Tracer from opentelemetry.util.types import AttributeValue from extensions.otel.decorators.handler import SpanHandler @@ -14,15 +13,15 @@ logger = logging.getLogger(__name__) class WorkflowAppRunnerHandler(SpanHandler): """Span handler for ``WorkflowAppRunner.run``.""" - def wrapper( + def wrapper[**P, R]( self, - tracer: Any, - wrapped: Callable[..., Any], - args: tuple[Any, ...], - kwargs: Mapping[str, Any], - ) -> Any: + tracer: Tracer, + wrapped: Callable[P, R], + *args: P.args, + **kwargs: P.kwargs, + ) -> R: try: - arguments = self._extract_arguments(wrapped, args, kwargs) + arguments = self._extract_arguments(wrapped, *args, **kwargs) if not arguments: return wrapped(*args, **kwargs) diff --git a/api/extensions/otel/parser/base.py b/api/extensions/otel/parser/base.py index 23d324f9ea..fbf379b3e5 100644 --- a/api/extensions/otel/parser/base.py +++ b/api/extensions/otel/parser/base.py @@ -10,17 +10,17 @@ Gate is only active in EE (``ENTERPRISE_ENABLED=True``) when import json from typing import Any, Protocol -from graphon.enums import BuiltinNodeTypes -from graphon.file import File -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node -from graphon.variables import Segment from opentelemetry.trace import Span from opentelemetry.trace.status import Status, StatusCode from pydantic import BaseModel from configs import dify_config from extensions.otel.semconv.gen_ai import ChainAttributes, GenAIAttributes +from graphon.enums import BuiltinNodeTypes +from graphon.file import File +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node +from graphon.variables import Segment def should_include_content() -> bool: diff --git a/api/extensions/otel/parser/llm.py b/api/extensions/otel/parser/llm.py index 335c5cc29e..ec3c78a12d 100644 --- a/api/extensions/otel/parser/llm.py +++ b/api/extensions/otel/parser/llm.py @@ -6,12 +6,12 @@ import logging from collections.abc import Mapping from typing import Any -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node from opentelemetry.trace import Span from extensions.otel.parser.base import DefaultNodeOTelParser, safe_json_dumps from extensions.otel.semconv.gen_ai import LLMAttributes +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node logger = logging.getLogger(__name__) diff --git a/api/extensions/otel/parser/retrieval.py b/api/extensions/otel/parser/retrieval.py index 6df5f62c15..56672d1fd4 100644 --- a/api/extensions/otel/parser/retrieval.py +++ b/api/extensions/otel/parser/retrieval.py @@ -6,13 +6,13 @@ import logging from collections.abc import Sequence from typing import Any -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node -from graphon.variables import Segment from opentelemetry.trace import Span from extensions.otel.parser.base import DefaultNodeOTelParser, safe_json_dumps from extensions.otel.semconv.gen_ai import RetrieverAttributes +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node +from graphon.variables import Segment logger = logging.getLogger(__name__) diff --git a/api/extensions/otel/parser/tool.py b/api/extensions/otel/parser/tool.py index b9fdd9e1ca..75ddbba448 100644 --- a/api/extensions/otel/parser/tool.py +++ b/api/extensions/otel/parser/tool.py @@ -2,14 +2,14 @@ Parser for tool nodes that captures tool-specific metadata. """ -from graphon.enums import WorkflowNodeExecutionMetadataKey -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node -from graphon.nodes.tool.entities import ToolNodeData from opentelemetry.trace import Span from extensions.otel.parser.base import DefaultNodeOTelParser, safe_json_dumps from extensions.otel.semconv.gen_ai import ToolAttributes +from graphon.enums import WorkflowNodeExecutionMetadataKey +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node +from graphon.nodes.tool.entities import ToolNodeData class ToolNodeOTelParser: diff --git a/api/extensions/redis_names.py b/api/extensions/redis_names.py new file mode 100644 index 0000000000..9e63416daf --- /dev/null +++ b/api/extensions/redis_names.py @@ -0,0 +1,32 @@ +from configs import dify_config + + +def normalize_redis_key_prefix(prefix: str | None) -> str: + """Normalize the configured Redis key prefix for consistent runtime use.""" + if prefix is None: + return "" + return prefix.strip() + + +def get_redis_key_prefix() -> str: + """Read and normalize the current Redis key prefix from config.""" + return normalize_redis_key_prefix(dify_config.REDIS_KEY_PREFIX) + + +def serialize_redis_name(name: str, prefix: str | None = None) -> str: + """Convert a logical Redis name into the physical name used in Redis.""" + normalized_prefix = get_redis_key_prefix() if prefix is None else normalize_redis_key_prefix(prefix) + if not normalized_prefix: + return name + return f"{normalized_prefix}:{name}" + + +def serialize_redis_name_arg(name: str | bytes, prefix: str | None = None) -> str | bytes: + """Prefix string Redis names while preserving bytes inputs unchanged.""" + if isinstance(name, bytes): + return name + return serialize_redis_name(name, prefix) + + +def serialize_redis_name_args(names: tuple[str | bytes, ...], prefix: str | None = None) -> tuple[str | bytes, ...]: + return tuple(serialize_redis_name_arg(name, prefix) for name in names) diff --git a/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py b/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py index 18eed4e481..05492327c8 100644 --- a/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py +++ b/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py @@ -10,6 +10,7 @@ import tempfile from collections.abc import Generator from io import BytesIO from pathlib import Path +from typing import Any import clickzetta from pydantic import BaseModel, model_validator @@ -39,7 +40,7 @@ class ClickZettaVolumeConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): """Validate the configuration values. This method will first try to use CLICKZETTA_VOLUME_* environment variables, diff --git a/api/extensions/storage/clickzetta_volume/file_lifecycle.py b/api/extensions/storage/clickzetta_volume/file_lifecycle.py index 86b1bba544..1cb940b797 100644 --- a/api/extensions/storage/clickzetta_volume/file_lifecycle.py +++ b/api/extensions/storage/clickzetta_volume/file_lifecycle.py @@ -65,7 +65,7 @@ class FileMetadata: return data @classmethod - def from_dict(cls, data: dict) -> FileMetadata: + def from_dict(cls, data: dict[str, Any]) -> FileMetadata: """Create instance from dictionary""" data = data.copy() data["created_at"] = datetime.fromisoformat(data["created_at"]) @@ -459,7 +459,7 @@ class FileLifecycleManager: newest_file=None, ) - def _create_version_backup(self, filename: str, metadata: dict): + def _create_version_backup(self, filename: str, metadata: dict[str, Any]): """Create version backup""" try: # Read current file content @@ -487,7 +487,7 @@ class FileLifecycleManager: logger.warning("Failed to load metadata: %s", e) return {} - def _save_metadata(self, metadata_dict: dict): + def _save_metadata(self, metadata_dict: dict[str, Any]): """Save metadata file""" try: metadata_content = json.dumps(metadata_dict, indent=2, ensure_ascii=False) diff --git a/api/extensions/storage/opendal_storage.py b/api/extensions/storage/opendal_storage.py index 96f5915ff0..cd7f7db295 100644 --- a/api/extensions/storage/opendal_storage.py +++ b/api/extensions/storage/opendal_storage.py @@ -2,6 +2,7 @@ import logging import os from collections.abc import Generator from pathlib import Path +from typing import Any import opendal from dotenv import dotenv_values @@ -19,7 +20,7 @@ def _get_opendal_kwargs(*, scheme: str, env_file_path: str = ".env", prefix: str if key.startswith(config_prefix): kwargs[key[len(config_prefix) :].lower()] = value - file_env_vars: dict = dotenv_values(env_file_path) or {} + file_env_vars: dict[str, Any] = dotenv_values(env_file_path) or {} for key, value in file_env_vars.items(): if key.startswith(config_prefix) and key[len(config_prefix) :].lower() not in kwargs and value: kwargs[key[len(config_prefix) :].lower()] = value diff --git a/api/factories/file_factory/builders.py b/api/factories/file_factory/builders.py index 7516d18c8e..288d37d265 100644 --- a/api/factories/file_factory/builders.py +++ b/api/factories/file_factory/builders.py @@ -7,12 +7,12 @@ import uuid from collections.abc import Mapping, Sequence from typing import Any -from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig, helpers, standardize_file_type from sqlalchemy import select from core.app.file_access import FileAccessControllerProtocol from core.workflow.file_reference import build_file_reference from extensions.ext_database import db +from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig, helpers, standardize_file_type from models import ToolFile, UploadFile from .common import resolve_mapping_file_id diff --git a/api/factories/file_factory/message_files.py b/api/factories/file_factory/message_files.py index 5582b85c95..4b3d514238 100644 --- a/api/factories/file_factory/message_files.py +++ b/api/factories/file_factory/message_files.py @@ -4,9 +4,8 @@ from __future__ import annotations from collections.abc import Sequence -from graphon.file import File, FileBelongsTo, FileTransferMethod, FileUploadConfig - from core.app.file_access import FileAccessControllerProtocol +from graphon.file import File, FileBelongsTo, FileTransferMethod, FileUploadConfig from models import MessageFile from .builders import build_from_mapping diff --git a/api/factories/file_factory/storage_keys.py b/api/factories/file_factory/storage_keys.py index db3a7f3015..dba4c84407 100644 --- a/api/factories/file_factory/storage_keys.py +++ b/api/factories/file_factory/storage_keys.py @@ -5,12 +5,12 @@ from __future__ import annotations import uuid from collections.abc import Mapping, Sequence -from graphon.file import File, FileTransferMethod from sqlalchemy import select from sqlalchemy.orm import Session from core.app.file_access import FileAccessControllerProtocol from core.workflow.file_reference import build_file_reference, parse_file_reference +from graphon.file import File, FileTransferMethod from models import ToolFile, UploadFile diff --git a/api/factories/variable_factory.py b/api/factories/variable_factory.py index 57205b5739..fd7acb14d3 100644 --- a/api/factories/variable_factory.py +++ b/api/factories/variable_factory.py @@ -8,6 +8,11 @@ shared conversion functions for legacy callers and tests. from collections.abc import Mapping, Sequence from typing import Any, cast +from configs import dify_config +from core.workflow.variable_prefixes import ( + CONVERSATION_VARIABLE_NODE_ID, + ENVIRONMENT_VARIABLE_NODE_ID, +) from graphon.variables.exc import VariableError from graphon.variables.factory import ( TypeMismatchError, @@ -31,12 +36,6 @@ from graphon.variables.variables import ( VariableBase, ) -from configs import dify_config -from core.workflow.variable_prefixes import ( - CONVERSATION_VARIABLE_NODE_ID, - ENVIRONMENT_VARIABLE_NODE_ID, -) - __all__ = [ "TypeMismatchError", "UnsupportedSegmentTypeError", diff --git a/api/fields/conversation_fields.py b/api/fields/conversation_fields.py index 7878d58679..bf5c9ffcb1 100644 --- a/api/fields/conversation_fields.py +++ b/api/fields/conversation_fields.py @@ -3,10 +3,10 @@ from __future__ import annotations from datetime import datetime from typing import Any -from graphon.file import File from pydantic import Field, field_validator, model_validator from fields.base import ResponseModel +from graphon.file import File type JSONValue = Any @@ -80,7 +80,7 @@ class Feedback(ResponseModel): from_account: SimpleAccount | None = None -class Annotation(ResponseModel): +class ConversationAnnotation(ResponseModel): id: str question: str | None = None content: str @@ -95,8 +95,8 @@ class Annotation(ResponseModel): return value -class AnnotationHitHistory(ResponseModel): - annotation_id: str +class ConversationAnnotationHitHistory(ResponseModel): + annotation_id: str = Field(validation_alias="id") annotation_create_account: SimpleAccount | None = None created_at: int | None = None @@ -143,7 +143,7 @@ class MessageDetail(ResponseModel): query: str message: JSONValue message_tokens: int - answer: str + answer: str = Field(validation_alias="re_sign_file_url_answer") answer_tokens: int provider_response_latency: float from_source: str @@ -151,12 +151,12 @@ class MessageDetail(ResponseModel): from_account_id: str | None = None feedbacks: list[Feedback] workflow_run_id: str | None = None - annotation: Annotation | None = None - annotation_hit_history: AnnotationHitHistory | None = None + annotation: ConversationAnnotation | None = None + annotation_hit_history: ConversationAnnotationHitHistory | None = None created_at: int | None = None agent_thoughts: list[AgentThought] message_files: list[MessageFile] - metadata: JSONValue + metadata: JSONValue = Field(validation_alias="message_metadata_dict") status: str error: str | None = None parent_message_id: str | None = None @@ -196,7 +196,7 @@ class ModelConfig(ResponseModel): class SimpleModelConfig(ResponseModel): - model: JSONValue | None = None + model: JSONValue | None = Field(default=None, validation_alias="model_dict") pre_prompt: str | None = None @@ -211,6 +211,11 @@ class SimpleMessageDetail(ResponseModel): def _normalize_inputs(cls, value: JSONValue) -> JSONValue: return format_files_contained(value) + @field_validator("message", mode="before") + @classmethod + def _normalize_message(cls, value: JSONValue) -> str: + return message_text(value) + class Conversation(ResponseModel): id: str @@ -223,19 +228,26 @@ class Conversation(ResponseModel): read_at: int | None = None created_at: int | None = None updated_at: int | None = None - annotation: Annotation | None = None + annotation: ConversationAnnotation | None = None model_config_: SimpleModelConfig | None = Field(default=None, alias="model_config") user_feedback_stats: FeedbackStat | None = None admin_feedback_stats: FeedbackStat | None = None - message: SimpleMessageDetail | None = None + message: SimpleMessageDetail | None = Field(default=None, validation_alias="first_message") + + @field_validator("read_at", "created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value class ConversationPagination(ResponseModel): page: int - limit: int + limit: int = Field(validation_alias="per_page") total: int - has_more: bool - data: list[Conversation] + has_more: bool = Field(validation_alias="has_next") + data: list[Conversation] = Field(validation_alias="items") class ConversationMessageDetail(ResponseModel): @@ -246,7 +258,14 @@ class ConversationMessageDetail(ResponseModel): from_account_id: str | None = None created_at: int | None = None model_config_: ModelConfig | None = Field(default=None, alias="model_config") - message: MessageDetail | None = None + message: MessageDetail | None = Field(default=None, validation_alias="first_message") + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value class ConversationWithSummary(ResponseModel): @@ -258,7 +277,7 @@ class ConversationWithSummary(ResponseModel): from_account_id: str | None = None from_account_name: str | None = None name: str - summary: str + summary: str = Field(validation_alias="summary_or_query") read_at: int | None = None created_at: int | None = None updated_at: int | None = None @@ -269,13 +288,20 @@ class ConversationWithSummary(ResponseModel): admin_feedback_stats: FeedbackStat | None = None status_count: StatusCount | None = None + @field_validator("read_at", "created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value + class ConversationWithSummaryPagination(ResponseModel): page: int - limit: int + limit: int = Field(validation_alias="per_page") total: int - has_more: bool - data: list[ConversationWithSummary] + has_more: bool = Field(validation_alias="has_next") + data: list[ConversationWithSummary] = Field(validation_alias="items") class ConversationDetail(ResponseModel): @@ -293,6 +319,13 @@ class ConversationDetail(ResponseModel): user_feedback_stats: FeedbackStat | None = None admin_feedback_stats: FeedbackStat | None = None + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value + def to_timestamp(value: datetime | None) -> int | None: if value is None: diff --git a/api/fields/conversation_variable_fields.py b/api/fields/conversation_variable_fields.py index c55014a368..cf4a71d545 100644 --- a/api/fields/conversation_variable_fields.py +++ b/api/fields/conversation_variable_fields.py @@ -1,5 +1,13 @@ -from flask_restx import Namespace, fields +from __future__ import annotations +from datetime import datetime +from typing import Any + +from flask_restx import Namespace, fields +from pydantic import field_validator + +from fields.base import ResponseModel +from graphon.variables.types import SegmentType from libs.helper import TimestampField from ._value_type_serializer import serialize_value_type @@ -29,6 +37,74 @@ conversation_variable_infinite_scroll_pagination_fields = { } +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class ConversationVariableResponse(ResponseModel): + id: str + name: str + value_type: str + value: str | None = None + description: str | None = None + created_at: int | None = None + updated_at: int | None = None + + @field_validator("value_type", mode="before") + @classmethod + def _normalize_value_type(cls, value: Any) -> str: + exposed_type = getattr(value, "exposed_type", None) + if callable(exposed_type): + return str(exposed_type().value) + if isinstance(value, str): + try: + return str(SegmentType(value).exposed_type().value) + except ValueError: + return value + try: + return serialize_value_type(value) + except (AttributeError, TypeError, ValueError): + pass + + try: + return serialize_value_type({"value_type": value}) + except (AttributeError, TypeError, ValueError): + value_attr = getattr(value, "value", None) + if value_attr is not None: + return str(value_attr) + return str(value) + + @field_validator("value", mode="before") + @classmethod + def _normalize_value(cls, value: Any | None) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(value) + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class PaginatedConversationVariableResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[ConversationVariableResponse] + + +class ConversationVariableInfiniteScrollPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[ConversationVariableResponse] + + def build_conversation_variable_model(api_or_ns: Namespace): """Build the conversation variable model for the API or Namespace.""" return api_or_ns.model("ConversationVariable", conversation_variable_fields) diff --git a/api/fields/member_fields.py b/api/fields/member_fields.py index cfe0015918..67b320beaa 100644 --- a/api/fields/member_fields.py +++ b/api/fields/member_fields.py @@ -3,10 +3,10 @@ from __future__ import annotations from datetime import datetime from flask_restx import fields -from graphon.file import helpers as file_helpers from pydantic import computed_field, field_validator from fields.base import ResponseModel +from graphon.file import helpers as file_helpers simple_account_fields = { "id": fields.String, diff --git a/api/fields/message_fields.py b/api/fields/message_fields.py index a063a643b4..ca18f1c203 100644 --- a/api/fields/message_fields.py +++ b/api/fields/message_fields.py @@ -3,19 +3,16 @@ from __future__ import annotations from datetime import datetime from uuid import uuid4 -from graphon.file import File -from pydantic import BaseModel, ConfigDict, Field, field_validator +from pydantic import Field, field_validator from core.entities.execution_extra_content import ExecutionExtraContentDomainModel +from fields.base import ResponseModel from fields.conversation_fields import AgentThought, JSONValue, MessageFile +from graphon.file import File type JSONValueType = JSONValue -class ResponseModel(BaseModel): - model_config = ConfigDict(from_attributes=True, extra="ignore") - - class SimpleFeedback(ResponseModel): rating: str | None = None diff --git a/api/fields/online_user_fields.py b/api/fields/online_user_fields.py new file mode 100644 index 0000000000..bdbe19679c --- /dev/null +++ b/api/fields/online_user_fields.py @@ -0,0 +1,16 @@ +from flask_restx import fields + +online_user_partial_fields = { + "user_id": fields.String, + "username": fields.String, + "avatar": fields.String, +} + +workflow_online_users_fields = { + "app_id": fields.String, + "users": fields.List(fields.Nested(online_user_partial_fields)), +} + +online_user_list_fields = { + "data": fields.List(fields.Nested(workflow_online_users_fields)), +} diff --git a/api/fields/raws.py b/api/fields/raws.py index 4c65cdab7a..ee6f53b360 100644 --- a/api/fields/raws.py +++ b/api/fields/raws.py @@ -1,4 +1,5 @@ from flask_restx import fields + from graphon.file import File diff --git a/api/fields/workflow_app_log_fields.py b/api/fields/workflow_app_log_fields.py index d0e762f62b..1b2c71255d 100644 --- a/api/fields/workflow_app_log_fields.py +++ b/api/fields/workflow_app_log_fields.py @@ -1,8 +1,17 @@ -from flask_restx import Namespace, fields +from __future__ import annotations -from fields.end_user_fields import simple_end_user_fields -from fields.member_fields import simple_account_fields +from datetime import datetime +from typing import Any + +from flask_restx import Namespace, fields +from pydantic import field_validator + +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser, simple_end_user_fields +from fields.member_fields import SimpleAccount, simple_account_fields from fields.workflow_run_fields import ( + WorkflowRunForArchivedLogResponse, + WorkflowRunForLogResponse, build_workflow_run_for_archived_log_model, build_workflow_run_for_log_model, workflow_run_for_archived_log_fields, @@ -85,3 +94,55 @@ def build_workflow_archived_log_pagination_model(api_or_ns: Namespace): copied_fields = workflow_archived_log_pagination_fields.copy() copied_fields["data"] = fields.List(fields.Nested(workflow_archived_log_partial_model)) return api_or_ns.model("WorkflowArchivedLogPagination", copied_fields) + + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowAppLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForLogResponse | None = None + details: Any = None + created_from: str | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowArchivedLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForArchivedLogResponse | None = None + trigger_metadata: Any = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowAppLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowAppLogPartialResponse] + + +class WorkflowArchivedLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowArchivedLogPartialResponse] diff --git a/api/fields/workflow_comment_fields.py b/api/fields/workflow_comment_fields.py new file mode 100644 index 0000000000..c708dd3460 --- /dev/null +++ b/api/fields/workflow_comment_fields.py @@ -0,0 +1,96 @@ +from flask_restx import fields + +from libs.helper import AvatarUrlField, TimestampField + +# basic account fields for comments +account_fields = { + "id": fields.String, + "name": fields.String, + "email": fields.String, + "avatar_url": AvatarUrlField, +} + +# Comment mention fields +workflow_comment_mention_fields = { + "mentioned_user_id": fields.String, + "mentioned_user_account": fields.Nested(account_fields, allow_null=True), + "reply_id": fields.String, +} + +# Comment reply fields +workflow_comment_reply_fields = { + "id": fields.String, + "content": fields.String, + "created_by": fields.String, + "created_by_account": fields.Nested(account_fields, allow_null=True), + "created_at": TimestampField, +} + +# Basic comment fields (for list views) +workflow_comment_basic_fields = { + "id": fields.String, + "position_x": fields.Float, + "position_y": fields.Float, + "content": fields.String, + "created_by": fields.String, + "created_by_account": fields.Nested(account_fields, allow_null=True), + "created_at": TimestampField, + "updated_at": TimestampField, + "resolved": fields.Boolean, + "resolved_at": TimestampField, + "resolved_by": fields.String, + "resolved_by_account": fields.Nested(account_fields, allow_null=True), + "reply_count": fields.Integer, + "mention_count": fields.Integer, + "participants": fields.List(fields.Nested(account_fields)), +} + +# Detailed comment fields (for single comment view) +workflow_comment_detail_fields = { + "id": fields.String, + "position_x": fields.Float, + "position_y": fields.Float, + "content": fields.String, + "created_by": fields.String, + "created_by_account": fields.Nested(account_fields, allow_null=True), + "created_at": TimestampField, + "updated_at": TimestampField, + "resolved": fields.Boolean, + "resolved_at": TimestampField, + "resolved_by": fields.String, + "resolved_by_account": fields.Nested(account_fields, allow_null=True), + "replies": fields.List(fields.Nested(workflow_comment_reply_fields)), + "mentions": fields.List(fields.Nested(workflow_comment_mention_fields)), +} + +# Comment creation response fields (simplified) +workflow_comment_create_fields = { + "id": fields.String, + "created_at": TimestampField, +} + +# Comment update response fields (simplified) +workflow_comment_update_fields = { + "id": fields.String, + "updated_at": TimestampField, +} + +# Comment resolve response fields +workflow_comment_resolve_fields = { + "id": fields.String, + "resolved": fields.Boolean, + "resolved_at": TimestampField, + "resolved_by": fields.String, +} + +# Reply creation response fields (simplified) +workflow_comment_reply_create_fields = { + "id": fields.String, + "created_at": TimestampField, +} + +# Reply update response fields +workflow_comment_reply_update_fields = { + "id": fields.String, + "updated_at": TimestampField, +} diff --git a/api/fields/workflow_fields.py b/api/fields/workflow_fields.py index b0b6cc0b48..f9b5e98936 100644 --- a/api/fields/workflow_fields.py +++ b/api/fields/workflow_fields.py @@ -1,8 +1,8 @@ from flask_restx import fields -from graphon.variables import SecretVariable, SegmentType, VariableBase from core.helper import encrypter from fields.member_fields import simple_account_fields +from graphon.variables import SecretVariable, SegmentType, VariableBase from libs.helper import TimestampField from ._value_type_serializer import serialize_value_type diff --git a/api/fields/workflow_run_fields.py b/api/fields/workflow_run_fields.py index 35bb442c59..8c659086ed 100644 --- a/api/fields/workflow_run_fields.py +++ b/api/fields/workflow_run_fields.py @@ -1,7 +1,14 @@ -from flask_restx import Namespace, fields +from __future__ import annotations -from fields.end_user_fields import simple_end_user_fields -from fields.member_fields import simple_account_fields +from datetime import datetime +from typing import Any + +from flask_restx import Namespace, fields +from pydantic import Field, field_validator + +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser, simple_end_user_fields +from fields.member_fields import SimpleAccount, simple_account_fields from libs.helper import TimestampField workflow_run_for_log_fields = { @@ -147,3 +154,174 @@ workflow_run_node_execution_fields = { workflow_run_node_execution_list_fields = { "data": fields.List(fields.Nested(workflow_run_node_execution_fields)), } + + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowRunForLogResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + triggered_from: str | None = None + error: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowRunForArchivedLogResponse(ResponseModel): + id: str + status: str | None = None + triggered_from: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + +class WorkflowRunForListResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_by_account: SimpleAccount | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + retry_index: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class AdvancedChatWorkflowRunForListResponse(WorkflowRunForListResponse): + conversation_id: str | None = None + message_id: str | None = None + + +class AdvancedChatWorkflowRunPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[AdvancedChatWorkflowRunForListResponse] + + +class WorkflowRunPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[WorkflowRunForListResponse] + + +class WorkflowRunCountResponse(ResponseModel): + total: int + running: int + succeeded: int + failed: int + stopped: int + partial_succeeded: int = Field(validation_alias="partial-succeeded") + + +class WorkflowRunDetailResponse(ResponseModel): + id: str + version: str | None = None + graph: Any = Field(validation_alias="graph_dict") + inputs: Any = Field(validation_alias="inputs_dict") + status: str | None = None + outputs: Any = Field(validation_alias="outputs_dict") + error: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowRunNodeExecutionResponse(ResponseModel): + id: str + index: int | None = None + predecessor_node_id: str | None = None + node_id: str | None = None + node_type: str | None = None + title: str | None = None + inputs: Any = Field(default=None, validation_alias="inputs_dict") + process_data: Any = Field(default=None, validation_alias="process_data_dict") + outputs: Any = Field(default=None, validation_alias="outputs_dict") + status: str | None = None + error: str | None = None + elapsed_time: float | None = None + execution_metadata: Any = Field(default=None, validation_alias="execution_metadata_dict") + extras: Any = None + created_at: int | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + finished_at: int | None = None + inputs_truncated: bool | None = None + outputs_truncated: bool | None = None + process_data_truncated: bool | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowRunNodeExecutionListResponse(ResponseModel): + data: list[WorkflowRunNodeExecutionResponse] diff --git a/api/libs/broadcast_channel/redis/_subscription.py b/api/libs/broadcast_channel/redis/_subscription.py index 40027bc424..4db79a15a9 100644 --- a/api/libs/broadcast_channel/redis/_subscription.py +++ b/api/libs/broadcast_channel/redis/_subscription.py @@ -3,7 +3,7 @@ import queue import threading import types from collections.abc import Generator, Iterator -from typing import Self +from typing import Any, Self from libs.broadcast_channel.channel import Subscription from libs.broadcast_channel.exc import SubscriptionClosedError @@ -221,7 +221,7 @@ class RedisSubscriptionBase(Subscription): """Unsubscribe from the Redis topic using the appropriate command.""" raise NotImplementedError - def _get_message(self) -> dict | None: + def _get_message(self) -> dict[str, Any] | None: """Get a message from Redis using the appropriate method.""" raise NotImplementedError diff --git a/api/libs/broadcast_channel/redis/channel.py b/api/libs/broadcast_channel/redis/channel.py index bd6d58c53f..b76a23eb3c 100644 --- a/api/libs/broadcast_channel/redis/channel.py +++ b/api/libs/broadcast_channel/redis/channel.py @@ -1,5 +1,8 @@ from __future__ import annotations +from typing import Any + +from extensions.redis_names import serialize_redis_name from libs.broadcast_channel.channel import Producer, Subscriber, Subscription from redis import Redis, RedisCluster @@ -30,12 +33,13 @@ class Topic: def __init__(self, redis_client: Redis | RedisCluster, topic: str): self._client = redis_client self._topic = topic + self._redis_topic = serialize_redis_name(topic) def as_producer(self) -> Producer: return self def publish(self, payload: bytes) -> None: - self._client.publish(self._topic, payload) + self._client.publish(self._redis_topic, payload) def as_subscriber(self) -> Subscriber: return self @@ -44,7 +48,7 @@ class Topic: return _RedisSubscription( client=self._client, pubsub=self._client.pubsub(), - topic=self._topic, + topic=self._redis_topic, ) @@ -62,7 +66,7 @@ class _RedisSubscription(RedisSubscriptionBase): assert self._pubsub is not None self._pubsub.unsubscribe(self._topic) - def _get_message(self) -> dict | None: + def _get_message(self) -> dict[str, Any] | None: assert self._pubsub is not None return self._pubsub.get_message(ignore_subscribe_messages=True, timeout=1) diff --git a/api/libs/broadcast_channel/redis/sharded_channel.py b/api/libs/broadcast_channel/redis/sharded_channel.py index 20c43b8bbb..919d8d622e 100644 --- a/api/libs/broadcast_channel/redis/sharded_channel.py +++ b/api/libs/broadcast_channel/redis/sharded_channel.py @@ -1,5 +1,8 @@ from __future__ import annotations +from typing import Any + +from extensions.redis_names import serialize_redis_name from libs.broadcast_channel.channel import Producer, Subscriber, Subscription from redis import Redis, RedisCluster @@ -28,12 +31,13 @@ class ShardedTopic: def __init__(self, redis_client: Redis | RedisCluster, topic: str): self._client = redis_client self._topic = topic + self._redis_topic = serialize_redis_name(topic) def as_producer(self) -> Producer: return self def publish(self, payload: bytes) -> None: - self._client.spublish(self._topic, payload) # type: ignore[attr-defined,union-attr] + self._client.spublish(self._redis_topic, payload) # type: ignore[attr-defined,union-attr] def as_subscriber(self) -> Subscriber: return self @@ -42,7 +46,7 @@ class ShardedTopic: return _RedisShardedSubscription( client=self._client, pubsub=self._client.pubsub(), - topic=self._topic, + topic=self._redis_topic, ) @@ -60,7 +64,7 @@ class _RedisShardedSubscription(RedisSubscriptionBase): assert self._pubsub is not None self._pubsub.sunsubscribe(self._topic) # type: ignore[attr-defined] - def _get_message(self) -> dict | None: + def _get_message(self) -> dict[str, Any] | None: assert self._pubsub is not None # NOTE(QuantumGhost): this is an issue in # upstream code. If Sharded PubSub is used with Cluster, the diff --git a/api/libs/broadcast_channel/redis/streams_channel.py b/api/libs/broadcast_channel/redis/streams_channel.py index 983f785027..55ff6cd4f9 100644 --- a/api/libs/broadcast_channel/redis/streams_channel.py +++ b/api/libs/broadcast_channel/redis/streams_channel.py @@ -6,6 +6,7 @@ import threading from collections.abc import Iterator from typing import Self +from extensions.redis_names import serialize_redis_name from libs.broadcast_channel.channel import Producer, Subscriber, Subscription from libs.broadcast_channel.exc import SubscriptionClosedError from redis import Redis, RedisCluster @@ -35,7 +36,7 @@ class StreamsTopic: def __init__(self, redis_client: Redis | RedisCluster, topic: str, *, retention_seconds: int = 600): self._client = redis_client self._topic = topic - self._key = f"stream:{topic}" + self._key = serialize_redis_name(f"stream:{topic}") self._retention_seconds = retention_seconds self.max_length = 5000 diff --git a/api/libs/db_migration_lock.py b/api/libs/db_migration_lock.py index 1d3a81e0a2..b5fe38342a 100644 --- a/api/libs/db_migration_lock.py +++ b/api/libs/db_migration_lock.py @@ -14,9 +14,15 @@ from __future__ import annotations import logging import threading -from typing import Any +from typing import TYPE_CHECKING, Any +import redis +from redis.cluster import RedisCluster from redis.exceptions import LockNotOwnedError, RedisError +from redis.lock import Lock + +if TYPE_CHECKING: + from extensions.ext_redis import RedisClientWrapper logger = logging.getLogger(__name__) @@ -38,21 +44,21 @@ class DbMigrationAutoRenewLock: primary error/exit code. """ - _redis_client: Any + _redis_client: redis.Redis | RedisCluster | RedisClientWrapper _name: str _ttl_seconds: float _renew_interval_seconds: float _log_context: str | None _logger: logging.Logger - _lock: Any + _lock: Lock | None _stop_event: threading.Event | None _thread: threading.Thread | None _acquired: bool def __init__( self, - redis_client: Any, + redis_client: redis.Redis | RedisCluster | RedisClientWrapper, name: str, ttl_seconds: float = 60, renew_interval_seconds: float | None = None, @@ -97,7 +103,10 @@ class DbMigrationAutoRenewLock: timeout=self._ttl_seconds, thread_local=False, ) - acquired = bool(self._lock.acquire(*args, **kwargs)) + lock = self._lock + if lock is None: + raise RuntimeError("Redis lock initialization failed.") + acquired = bool(lock.acquire(*args, **kwargs)) self._acquired = acquired if acquired: self._start_heartbeat() @@ -127,7 +136,7 @@ class DbMigrationAutoRenewLock: ) self._thread.start() - def _heartbeat_loop(self, lock: Any, stop_event: threading.Event) -> None: + def _heartbeat_loop(self, lock: Lock, stop_event: threading.Event) -> None: while not stop_event.wait(self._renew_interval_seconds): try: lock.reacquire() diff --git a/api/libs/email_i18n.py b/api/libs/email_i18n.py index 0828cf80bf..1519f07bb1 100644 --- a/api/libs/email_i18n.py +++ b/api/libs/email_i18n.py @@ -37,6 +37,7 @@ class EmailType(StrEnum): ENTERPRISE_CUSTOM = auto() QUEUE_MONITOR_ALERT = auto() DOCUMENT_CLEAN_NOTIFY = auto() + WORKFLOW_COMMENT_MENTION = auto() EMAIL_REGISTER = auto() EMAIL_REGISTER_WHEN_ACCOUNT_EXIST = auto() RESET_PASSWORD_WHEN_ACCOUNT_NOT_EXIST_NO_REGISTER = auto() @@ -453,6 +454,18 @@ def create_default_email_config() -> EmailI18nConfig: branded_template_path="clean_document_job_mail_template_zh-CN.html", ), }, + EmailType.WORKFLOW_COMMENT_MENTION: { + EmailLanguage.EN_US: EmailTemplate( + subject="You were mentioned in a workflow comment", + template_path="workflow_comment_mention_template_en-US.html", + branded_template_path="without-brand/workflow_comment_mention_template_en-US.html", + ), + EmailLanguage.ZH_HANS: EmailTemplate( + subject="你在工作流评论中被提及", + template_path="workflow_comment_mention_template_zh-CN.html", + branded_template_path="without-brand/workflow_comment_mention_template_zh-CN.html", + ), + }, EmailType.TRIGGER_EVENTS_LIMIT_SANDBOX: { EmailLanguage.EN_US: EmailTemplate( subject="You’ve reached your Sandbox Trigger Events limit", diff --git a/api/libs/exception.py b/api/libs/exception.py index 73379dfded..1e4bbb44f6 100644 --- a/api/libs/exception.py +++ b/api/libs/exception.py @@ -1,9 +1,11 @@ +from typing import Any + from werkzeug.exceptions import HTTPException class BaseHTTPException(HTTPException): error_code: str = "unknown" - data: dict | None = None + data: dict[str, Any] | None = None def __init__(self, description=None, response=None): super().__init__(description, response) diff --git a/api/libs/external_api.py b/api/libs/external_api.py index e8592407c3..f907d17750 100644 --- a/api/libs/external_api.py +++ b/api/libs/external_api.py @@ -17,7 +17,6 @@ def http_status_message(code): def register_external_error_handlers(api: Api): - @api.errorhandler(HTTPException) def handle_http_exception(e: HTTPException): got_request_exception.send(current_app, exception=e) @@ -74,27 +73,18 @@ def register_external_error_handlers(api: Api): headers["Set-Cookie"] = build_force_logout_cookie_headers() return data, status_code, headers - _ = handle_http_exception - - @api.errorhandler(ValueError) def handle_value_error(e: ValueError): got_request_exception.send(current_app, exception=e) status_code = 400 data = {"code": "invalid_param", "message": str(e), "status": status_code} return data, status_code - _ = handle_value_error - - @api.errorhandler(AppInvokeQuotaExceededError) def handle_quota_exceeded(e: AppInvokeQuotaExceededError): got_request_exception.send(current_app, exception=e) status_code = 429 data = {"code": "too_many_requests", "message": str(e), "status": status_code} return data, status_code - _ = handle_quota_exceeded - - @api.errorhandler(Exception) def handle_general_exception(e: Exception): got_request_exception.send(current_app, exception=e) @@ -113,7 +103,10 @@ def register_external_error_handlers(api: Api): return data, status_code - _ = handle_general_exception + api.errorhandler(HTTPException)(handle_http_exception) + api.errorhandler(ValueError)(handle_value_error) + api.errorhandler(AppInvokeQuotaExceededError)(handle_quota_exceeded) + api.errorhandler(Exception)(handle_general_exception) class ExternalApi(Api): diff --git a/api/libs/helper.py b/api/libs/helper.py index ece53e8806..ac69a11084 100644 --- a/api/libs/helper.py +++ b/api/libs/helper.py @@ -10,14 +10,12 @@ import uuid from collections.abc import Callable, Generator, Mapping from datetime import datetime from hashlib import sha256 -from typing import TYPE_CHECKING, Annotated, Any, Optional, Protocol, Union, cast +from typing import TYPE_CHECKING, Annotated, Any, Protocol, cast from uuid import UUID from zoneinfo import available_timezones from flask import Response, stream_with_context from flask_restx import fields -from graphon.file import helpers as file_helpers -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, TypeAdapter from pydantic.functional_validators import AfterValidator from typing_extensions import TypedDict @@ -25,6 +23,8 @@ from typing_extensions import TypedDict from configs import dify_config from core.app.features.rate_limiting.rate_limit import RateLimitGenerator from extensions.ext_redis import redis_client +from graphon.file import helpers as file_helpers +from graphon.model_runtime.utils.encoders import jsonable_encoder if TYPE_CHECKING: from models import Account @@ -81,7 +81,7 @@ def escape_like_pattern(pattern: str) -> str: return pattern.replace("\\", "\\\\").replace("%", "\\%").replace("_", "\\_") -def extract_tenant_id(user: Union["Account", "EndUser"]) -> str | None: +def extract_tenant_id(user: "Account | EndUser") -> str | None: """ Extract tenant_id from Account or EndUser object. @@ -120,10 +120,22 @@ class AppIconUrlField(fields.Raw): obj = obj["app"] if isinstance(obj, App | Site) and obj.icon_type == IconType.IMAGE: - return file_helpers.get_signed_file_url(obj.icon) + return build_icon_url(obj.icon_type, obj.icon) return None +def build_icon_url(icon_type: Any, icon: str | None) -> str | None: + if icon is None or icon_type is None: + return None + + from models.model import IconType + + icon_type_value = icon_type.value if isinstance(icon_type, IconType) else str(icon_type) + if icon_type_value.lower() != IconType.IMAGE: + return None + return file_helpers.get_signed_file_url(icon) + + class AvatarUrlField(fields.Raw): def output(self, key, obj, **kwargs): if obj is None: @@ -164,7 +176,10 @@ def email(email): EmailStr = Annotated[str, AfterValidator(email)] -def uuid_value(value: Any) -> str: +def uuid_value(value: str | UUID) -> str: + if isinstance(value, UUID): + return str(value) + if value == "": return str(value) @@ -405,9 +420,9 @@ class TokenManager: def generate_token( cls, token_type: str, - account: Optional["Account"] = None, + account: "Account | None" = None, email: str | None = None, - additional_data: dict | None = None, + additional_data: dict[str, Any] | None = None, ) -> str: if account is None and email is None: raise ValueError("Account or email must be provided") @@ -465,9 +480,7 @@ class TokenManager: return current_token @classmethod - def _set_current_token_for_account( - cls, account_id: str, token: str, token_type: str, expiry_minutes: Union[int, float] - ): + def _set_current_token_for_account(cls, account_id: str, token: str, token_type: str, expiry_minutes: int | float): key = cls._get_account_token_key(account_id, token_type) expiry_seconds = int(expiry_minutes * 60) redis_client.setex(key, expiry_seconds, token) diff --git a/api/libs/pyrefly_type_coverage.py b/api/libs/pyrefly_type_coverage.py new file mode 100644 index 0000000000..369b8dff3c --- /dev/null +++ b/api/libs/pyrefly_type_coverage.py @@ -0,0 +1,145 @@ +"""Helpers for generating type-coverage summaries from pyrefly report output.""" + +from __future__ import annotations + +import json +import sys +from pathlib import Path +from typing import TypedDict + + +class CoverageSummary(TypedDict): + n_modules: int + n_typable: int + n_typed: int + n_any: int + n_untyped: int + coverage: float + strict_coverage: float + + +_REQUIRED_KEYS = frozenset(CoverageSummary.__annotations__) + +_EMPTY_SUMMARY: CoverageSummary = { + "n_modules": 0, + "n_typable": 0, + "n_typed": 0, + "n_any": 0, + "n_untyped": 0, + "coverage": 0.0, + "strict_coverage": 0.0, +} + + +def parse_summary(report_json: str) -> CoverageSummary: + """Extract the summary section from ``pyrefly report`` JSON output. + + Returns an empty summary when *report_json* is empty or malformed so that + the CI workflow can degrade gracefully instead of crashing. + """ + if not report_json or not report_json.strip(): + return _EMPTY_SUMMARY.copy() + + try: + data = json.loads(report_json) + except json.JSONDecodeError: + return _EMPTY_SUMMARY.copy() + + summary = data.get("summary") + if not isinstance(summary, dict) or not _REQUIRED_KEYS.issubset(summary): + return _EMPTY_SUMMARY.copy() + + return { + "n_modules": summary["n_modules"], + "n_typable": summary["n_typable"], + "n_typed": summary["n_typed"], + "n_any": summary["n_any"], + "n_untyped": summary["n_untyped"], + "coverage": summary["coverage"], + "strict_coverage": summary["strict_coverage"], + } + + +def format_summary_markdown(summary: CoverageSummary) -> str: + """Format a single coverage summary as a Markdown table.""" + + return ( + "| Metric | Value |\n" + "| --- | ---: |\n" + f"| Modules | {summary['n_modules']} |\n" + f"| Typable symbols | {summary['n_typable']:,} |\n" + f"| Typed symbols | {summary['n_typed']:,} |\n" + f"| Untyped symbols | {summary['n_untyped']:,} |\n" + f"| Any symbols | {summary['n_any']:,} |\n" + f"| **Type coverage** | **{summary['coverage']:.2f}%** |\n" + f"| Strict coverage | {summary['strict_coverage']:.2f}% |" + ) + + +def format_comparison_markdown( + base: CoverageSummary, + pr: CoverageSummary, +) -> str: + """Format a comparison between base and PR coverage as Markdown.""" + + coverage_delta = pr["coverage"] - base["coverage"] + strict_delta = pr["strict_coverage"] - base["strict_coverage"] + typed_delta = pr["n_typed"] - base["n_typed"] + untyped_delta = pr["n_untyped"] - base["n_untyped"] + + def _fmt_delta(value: float, fmt: str = ".2f") -> str: + sign = "+" if value > 0 else "" + return f"{sign}{value:{fmt}}" + + lines = [ + "| Metric | Base | PR | Delta |", + "| --- | ---: | ---: | ---: |", + (f"| **Type coverage** | {base['coverage']:.2f}% | {pr['coverage']:.2f}% | {_fmt_delta(coverage_delta)}% |"), + ( + f"| Strict coverage | {base['strict_coverage']:.2f}% " + f"| {pr['strict_coverage']:.2f}% " + f"| {_fmt_delta(strict_delta)}% |" + ), + (f"| Typed symbols | {base['n_typed']:,} | {pr['n_typed']:,} | {_fmt_delta(typed_delta, ',')} |"), + (f"| Untyped symbols | {base['n_untyped']:,} | {pr['n_untyped']:,} | {_fmt_delta(untyped_delta, ',')} |"), + ( + f"| Modules | {base['n_modules']} " + f"| {pr['n_modules']} " + f"| {_fmt_delta(pr['n_modules'] - base['n_modules'], ',')} |" + ), + ] + return "\n".join(lines) + + +def main() -> int: + """Read pyrefly report JSON from stdin and print a Markdown summary. + + Accepts an optional ``--base `` argument. When provided, the output + includes a base-vs-PR comparison table. + """ + + args = sys.argv[1:] + + base_file: str | None = None + if "--base" in args: + idx = args.index("--base") + if idx + 1 >= len(args): + sys.stderr.write("error: --base requires a file path\n") + return 1 + base_file = args[idx + 1] + + pr_report = sys.stdin.read() + pr_summary = parse_summary(pr_report) + + if base_file is not None: + base_text = Path(base_file).read_text() if Path(base_file).exists() else "" + base_summary = parse_summary(base_text) + sys.stdout.write(format_comparison_markdown(base_summary, pr_summary) + "\n") + else: + sys.stdout.write(format_summary_markdown(pr_summary) + "\n") + + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/api/libs/sendgrid.py b/api/libs/sendgrid.py index c047c54d06..0338641d11 100644 --- a/api/libs/sendgrid.py +++ b/api/libs/sendgrid.py @@ -1,4 +1,5 @@ import logging +from typing import Any import sendgrid from python_http_client.exceptions import ForbiddenError, UnauthorizedError @@ -12,7 +13,7 @@ class SendGridClient: self.sendgrid_api_key = sendgrid_api_key self._from = _from - def send(self, mail: dict): + def send(self, mail: dict[str, Any]): logger.debug("Sending email with SendGrid") _to = "" try: diff --git a/api/libs/smtp.py b/api/libs/smtp.py index 6f82f1440a..53906d1769 100644 --- a/api/libs/smtp.py +++ b/api/libs/smtp.py @@ -2,6 +2,7 @@ import logging import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText +from typing import Any from configs import dify_config @@ -20,7 +21,7 @@ class SMTPClient: self.use_tls = use_tls self.opportunistic_tls = opportunistic_tls - def send(self, mail: dict): + def send(self, mail: dict[str, Any]): smtp: smtplib.SMTP | None = None local_host = dify_config.SMTP_LOCAL_HOSTNAME try: diff --git a/api/migrations/versions/2026_04_14_1500-8574b23a38fd_add_qdrant_endpoint_to_tidb_auth_bindings.py b/api/migrations/versions/2026_04_14_1500-8574b23a38fd_add_qdrant_endpoint_to_tidb_auth_bindings.py new file mode 100644 index 0000000000..0e188ec080 --- /dev/null +++ b/api/migrations/versions/2026_04_14_1500-8574b23a38fd_add_qdrant_endpoint_to_tidb_auth_bindings.py @@ -0,0 +1,26 @@ +"""add qdrant_endpoint to tidb_auth_bindings + +Revision ID: 8574b23a38fd +Revises: 6b5f9f8b1a2c +Create Date: 2026-04-14 15:00:00.000000 + +""" + +import sqlalchemy as sa +from alembic import op + +# revision identifiers, used by Alembic. +revision = "8574b23a38fd" +down_revision = "6b5f9f8b1a2c" +branch_labels = None +depends_on = None + + +def upgrade(): + with op.batch_alter_table("tidb_auth_bindings", schema=None) as batch_op: + batch_op.add_column(sa.Column("qdrant_endpoint", sa.String(length=512), nullable=True)) + + +def downgrade(): + with op.batch_alter_table("tidb_auth_bindings", schema=None) as batch_op: + batch_op.drop_column("qdrant_endpoint") diff --git a/api/migrations/versions/2026_04_15_1726-227822d22895_add_workflow_comments_table.py b/api/migrations/versions/2026_04_15_1726-227822d22895_add_workflow_comments_table.py new file mode 100644 index 0000000000..0548c932b5 --- /dev/null +++ b/api/migrations/versions/2026_04_15_1726-227822d22895_add_workflow_comments_table.py @@ -0,0 +1,90 @@ +"""Add workflow comments table + +Revision ID: 227822d22895 +Revises: 8574b23a38fd +Create Date: 2025-08-22 17:26:15.255980 + +""" +from alembic import op +import models as models +import sqlalchemy as sa + + +# revision identifiers, used by Alembic. +revision = '227822d22895' +down_revision = '8574b23a38fd' +branch_labels = None +depends_on = None + + +def upgrade(): + # ### commands auto generated by Alembic - please adjust! ### + op.create_table('workflow_comments', + sa.Column('id', models.types.StringUUID(), nullable=False), + sa.Column('tenant_id', models.types.StringUUID(), nullable=False), + sa.Column('app_id', models.types.StringUUID(), nullable=False), + sa.Column('position_x', sa.Float(), nullable=False), + sa.Column('position_y', sa.Float(), nullable=False), + sa.Column('content', sa.Text(), nullable=False), + sa.Column('created_by', models.types.StringUUID(), nullable=False), + sa.Column('created_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.Column('updated_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.Column('resolved', sa.Boolean(), server_default=sa.text('false'), nullable=False), + sa.Column('resolved_at', sa.DateTime(), nullable=True), + sa.Column('resolved_by', models.types.StringUUID(), nullable=True), + sa.PrimaryKeyConstraint('id', name='workflow_comments_pkey') + ) + with op.batch_alter_table('workflow_comments', schema=None) as batch_op: + batch_op.create_index('workflow_comments_app_idx', ['tenant_id', 'app_id'], unique=False) + batch_op.create_index('workflow_comments_created_at_idx', ['created_at'], unique=False) + + op.create_table('workflow_comment_replies', + sa.Column('id', models.types.StringUUID(), nullable=False), + sa.Column('comment_id', models.types.StringUUID(), nullable=False), + sa.Column('content', sa.Text(), nullable=False), + sa.Column('created_by', models.types.StringUUID(), nullable=False), + sa.Column('created_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.Column('updated_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.ForeignKeyConstraint(['comment_id'], ['workflow_comments.id'], name=op.f('workflow_comment_replies_comment_id_fkey'), ondelete='CASCADE'), + sa.PrimaryKeyConstraint('id', name='workflow_comment_replies_pkey') + ) + with op.batch_alter_table('workflow_comment_replies', schema=None) as batch_op: + batch_op.create_index('comment_replies_comment_idx', ['comment_id'], unique=False) + batch_op.create_index('comment_replies_created_at_idx', ['created_at'], unique=False) + + op.create_table('workflow_comment_mentions', + sa.Column('id', models.types.StringUUID(), nullable=False), + sa.Column('comment_id', models.types.StringUUID(), nullable=False), + sa.Column('reply_id', models.types.StringUUID(), nullable=True), + sa.Column('mentioned_user_id', models.types.StringUUID(), nullable=False), + sa.ForeignKeyConstraint(['comment_id'], ['workflow_comments.id'], name=op.f('workflow_comment_mentions_comment_id_fkey'), ondelete='CASCADE'), + sa.ForeignKeyConstraint(['reply_id'], ['workflow_comment_replies.id'], name=op.f('workflow_comment_mentions_reply_id_fkey'), ondelete='CASCADE'), + sa.PrimaryKeyConstraint('id', name='workflow_comment_mentions_pkey') + ) + with op.batch_alter_table('workflow_comment_mentions', schema=None) as batch_op: + batch_op.create_index('comment_mentions_comment_idx', ['comment_id'], unique=False) + batch_op.create_index('comment_mentions_reply_idx', ['reply_id'], unique=False) + batch_op.create_index('comment_mentions_user_idx', ['mentioned_user_id'], unique=False) + + # ### end Alembic commands ### + + +def downgrade(): + # ### commands auto generated by Alembic - please adjust! ### + with op.batch_alter_table('workflow_comment_mentions', schema=None) as batch_op: + batch_op.drop_index('comment_mentions_user_idx') + batch_op.drop_index('comment_mentions_reply_idx') + batch_op.drop_index('comment_mentions_comment_idx') + + op.drop_table('workflow_comment_mentions') + with op.batch_alter_table('workflow_comment_replies', schema=None) as batch_op: + batch_op.drop_index('comment_replies_created_at_idx') + batch_op.drop_index('comment_replies_comment_idx') + + op.drop_table('workflow_comment_replies') + with op.batch_alter_table('workflow_comments', schema=None) as batch_op: + batch_op.drop_index('workflow_comments_created_at_idx') + batch_op.drop_index('workflow_comments_app_idx') + + op.drop_table('workflow_comments') + # ### end Alembic commands ### diff --git a/api/models/__init__.py b/api/models/__init__.py index fcae07f948..85be9ca3bd 100644 --- a/api/models/__init__.py +++ b/api/models/__init__.py @@ -9,6 +9,11 @@ from .account import ( TenantStatus, ) from .api_based_extension import APIBasedExtension, APIBasedExtensionPoint +from .comment import ( + WorkflowComment, + WorkflowCommentMention, + WorkflowCommentReply, +) from .dataset import ( AppDatasetJoin, Dataset, @@ -208,6 +213,9 @@ __all__ = [ "WorkflowAppLog", "WorkflowAppLogCreatedFrom", "WorkflowArchiveLog", + "WorkflowComment", + "WorkflowCommentMention", + "WorkflowCommentReply", "WorkflowNodeExecutionModel", "WorkflowNodeExecutionOffload", "WorkflowNodeExecutionTriggeredFrom", diff --git a/api/models/base.py b/api/models/base.py index b7023b9c8b..5acdf184f4 100644 --- a/api/models/base.py +++ b/api/models/base.py @@ -24,6 +24,8 @@ class TypeBase(MappedAsDataclass, DeclarativeBase): class DefaultFieldsMixin: + """Mixin for models that inherit from Base (non-dataclass).""" + id: Mapped[str] = mapped_column( StringUUID, primary_key=True, @@ -53,6 +55,42 @@ class DefaultFieldsMixin: return f"<{self.__class__.__name__}(id={self.id})>" +class DefaultFieldsDCMixin(MappedAsDataclass): + """Mixin for models that inherit from TypeBase (MappedAsDataclass).""" + + __abstract__ = True + + id: Mapped[str] = mapped_column( + StringUUID, + primary_key=True, + insert_default=lambda: str(uuidv7()), + default_factory=lambda: str(uuidv7()), + init=False, + ) + + created_at: Mapped[datetime] = mapped_column( + DateTime, + nullable=False, + insert_default=naive_utc_now, + default_factory=naive_utc_now, + init=False, + server_default=func.current_timestamp(), + ) + + updated_at: Mapped[datetime] = mapped_column( + DateTime, + nullable=False, + insert_default=naive_utc_now, + default_factory=naive_utc_now, + init=False, + server_default=func.current_timestamp(), + onupdate=func.current_timestamp(), + ) + + def __repr__(self) -> str: + return f"<{self.__class__.__name__}(id={self.id})>" + + def gen_uuidv4_string() -> str: """gen_uuidv4_string generate a UUIDv4 string. diff --git a/api/models/comment.py b/api/models/comment.py new file mode 100644 index 0000000000..308339e6f6 --- /dev/null +++ b/api/models/comment.py @@ -0,0 +1,218 @@ +"""Workflow comment models.""" + +from datetime import datetime +from typing import Optional + +from sqlalchemy import Index, func +from sqlalchemy.orm import Mapped, mapped_column, relationship + +from .account import Account +from .base import Base +from .engine import db +from .types import StringUUID + + +class WorkflowComment(Base): + """Workflow comment model for canvas commenting functionality. + + Comments are associated with apps rather than specific workflow versions, + since an app has only one draft workflow at a time and comments should persist + across workflow version changes. + + Attributes: + id: Comment ID + tenant_id: Workspace ID + app_id: App ID (primary association, comments belong to apps) + position_x: X coordinate on canvas + position_y: Y coordinate on canvas + content: Comment content + created_by: Creator account ID + created_at: Creation time + updated_at: Last update time + resolved: Whether comment is resolved + resolved_at: Resolution time + resolved_by: Resolver account ID + """ + + __tablename__ = "workflow_comments" + __table_args__ = ( + db.PrimaryKeyConstraint("id", name="workflow_comments_pkey"), + Index("workflow_comments_app_idx", "tenant_id", "app_id"), + Index("workflow_comments_created_at_idx", "created_at"), + ) + + id: Mapped[str] = mapped_column(StringUUID, server_default=db.text("uuidv7()")) + tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + position_x: Mapped[float] = mapped_column(db.Float) + position_y: Mapped[float] = mapped_column(db.Float) + content: Mapped[str] = mapped_column(db.Text, nullable=False) + created_by: Mapped[str] = mapped_column(StringUUID, nullable=False) + created_at: Mapped[datetime] = mapped_column(db.DateTime, nullable=False, server_default=func.current_timestamp()) + updated_at: Mapped[datetime] = mapped_column( + db.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp() + ) + resolved: Mapped[bool] = mapped_column(db.Boolean, nullable=False, server_default=db.text("false")) + resolved_at: Mapped[datetime | None] = mapped_column(db.DateTime) + resolved_by: Mapped[str | None] = mapped_column(StringUUID) + + # Relationships + replies: Mapped[list["WorkflowCommentReply"]] = relationship( + "WorkflowCommentReply", back_populates="comment", cascade="all, delete-orphan" + ) + mentions: Mapped[list["WorkflowCommentMention"]] = relationship( + "WorkflowCommentMention", back_populates="comment", cascade="all, delete-orphan" + ) + + @property + def created_by_account(self): + """Get creator account.""" + if hasattr(self, "_created_by_account_cache"): + return self._created_by_account_cache + return db.session.get(Account, self.created_by) + + def cache_created_by_account(self, account: Account | None) -> None: + """Cache creator account to avoid extra queries.""" + self._created_by_account_cache = account + + @property + def resolved_by_account(self): + """Get resolver account.""" + if hasattr(self, "_resolved_by_account_cache"): + return self._resolved_by_account_cache + if self.resolved_by: + return db.session.get(Account, self.resolved_by) + return None + + def cache_resolved_by_account(self, account: Account | None) -> None: + """Cache resolver account to avoid extra queries.""" + self._resolved_by_account_cache = account + + @property + def reply_count(self): + """Get reply count.""" + return len(self.replies) + + @property + def mention_count(self): + """Get mention count.""" + return len(self.mentions) + + @property + def participants(self): + """Get all participants (creator + repliers + mentioned users).""" + participant_ids: set[str] = set() + participants: list[Account] = [] + + # Use account properties to reuse preloaded caches and avoid hidden N+1. + if self.created_by not in participant_ids: + participant_ids.add(self.created_by) + created_by_account = self.created_by_account + if created_by_account: + participants.append(created_by_account) + + for reply in self.replies: + if reply.created_by in participant_ids: + continue + participant_ids.add(reply.created_by) + reply_account = reply.created_by_account + if reply_account: + participants.append(reply_account) + + for mention in self.mentions: + if mention.mentioned_user_id in participant_ids: + continue + participant_ids.add(mention.mentioned_user_id) + mentioned_account = mention.mentioned_user_account + if mentioned_account: + participants.append(mentioned_account) + + return participants + + +class WorkflowCommentReply(Base): + """Workflow comment reply model. + + Attributes: + id: Reply ID + comment_id: Parent comment ID + content: Reply content + created_by: Creator account ID + created_at: Creation time + """ + + __tablename__ = "workflow_comment_replies" + __table_args__ = ( + db.PrimaryKeyConstraint("id", name="workflow_comment_replies_pkey"), + Index("comment_replies_comment_idx", "comment_id"), + Index("comment_replies_created_at_idx", "created_at"), + ) + + id: Mapped[str] = mapped_column(StringUUID, server_default=db.text("uuidv7()")) + comment_id: Mapped[str] = mapped_column( + StringUUID, db.ForeignKey("workflow_comments.id", ondelete="CASCADE"), nullable=False + ) + content: Mapped[str] = mapped_column(db.Text, nullable=False) + created_by: Mapped[str] = mapped_column(StringUUID, nullable=False) + created_at: Mapped[datetime] = mapped_column(db.DateTime, nullable=False, server_default=func.current_timestamp()) + updated_at: Mapped[datetime] = mapped_column( + db.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp() + ) + # Relationships + comment: Mapped["WorkflowComment"] = relationship("WorkflowComment", back_populates="replies") + + @property + def created_by_account(self): + """Get creator account.""" + if hasattr(self, "_created_by_account_cache"): + return self._created_by_account_cache + return db.session.get(Account, self.created_by) + + def cache_created_by_account(self, account: Account | None) -> None: + """Cache creator account to avoid extra queries.""" + self._created_by_account_cache = account + + +class WorkflowCommentMention(Base): + """Workflow comment mention model. + + Mentions are only for internal accounts since end users + cannot access workflow canvas and commenting features. + + Attributes: + id: Mention ID + comment_id: Parent comment ID + mentioned_user_id: Mentioned account ID + """ + + __tablename__ = "workflow_comment_mentions" + __table_args__ = ( + db.PrimaryKeyConstraint("id", name="workflow_comment_mentions_pkey"), + Index("comment_mentions_comment_idx", "comment_id"), + Index("comment_mentions_reply_idx", "reply_id"), + Index("comment_mentions_user_idx", "mentioned_user_id"), + ) + + id: Mapped[str] = mapped_column(StringUUID, server_default=db.text("uuidv7()")) + comment_id: Mapped[str] = mapped_column( + StringUUID, db.ForeignKey("workflow_comments.id", ondelete="CASCADE"), nullable=False + ) + reply_id: Mapped[str | None] = mapped_column( + StringUUID, db.ForeignKey("workflow_comment_replies.id", ondelete="CASCADE"), nullable=True + ) + mentioned_user_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + + # Relationships + comment: Mapped["WorkflowComment"] = relationship("WorkflowComment", back_populates="mentions") + reply: Mapped[Optional["WorkflowCommentReply"]] = relationship("WorkflowCommentReply") + + @property + def mentioned_user_account(self): + """Get mentioned account.""" + if hasattr(self, "_mentioned_user_account_cache"): + return self._mentioned_user_account_cache + return db.session.get(Account, self.mentioned_user_id) + + def cache_mentioned_user_account(self, account: Account | None) -> None: + """Cache mentioned account to avoid extra queries.""" + self._mentioned_user_account_cache = account diff --git a/api/models/dataset.py b/api/models/dataset.py index 97604848af..50301dd2d7 100644 --- a/api/models/dataset.py +++ b/api/models/dataset.py @@ -108,6 +108,56 @@ class ExternalKnowledgeApiDict(TypedDict): created_at: str +class DocumentDict(TypedDict): + id: str + tenant_id: str + dataset_id: str + position: int + data_source_type: str + data_source_info: str | None + dataset_process_rule_id: str | None + batch: str + name: str + created_from: str + created_by: str + created_api_request_id: str | None + created_at: datetime + processing_started_at: datetime | None + file_id: str | None + word_count: int | None + parsing_completed_at: datetime | None + cleaning_completed_at: datetime | None + splitting_completed_at: datetime | None + tokens: int | None + indexing_latency: float | None + completed_at: datetime | None + is_paused: bool | None + paused_by: str | None + paused_at: datetime | None + error: str | None + stopped_at: datetime | None + indexing_status: str + enabled: bool + disabled_at: datetime | None + disabled_by: str | None + archived: bool + archived_reason: str | None + archived_by: str | None + archived_at: datetime | None + updated_at: datetime + doc_type: str | None + doc_metadata: Any + doc_form: IndexStructureType + doc_language: str | None + display_status: str | None + data_source_info_dict: dict[str, Any] + average_segment_length: int + dataset_process_rule: ProcessRuleDict | None + dataset: None + segment_count: int | None + hit_count: int | None + + class DatasetPermissionEnum(enum.StrEnum): ONLY_ME = "only_me" ALL_TEAM = "all_team_members" @@ -303,13 +353,17 @@ class Dataset(Base): if self.provider != "external": return None external_knowledge_binding = db.session.scalar( - select(ExternalKnowledgeBindings).where(ExternalKnowledgeBindings.dataset_id == self.id) + select(ExternalKnowledgeBindings).where( + ExternalKnowledgeBindings.dataset_id == self.id, + ExternalKnowledgeBindings.tenant_id == self.tenant_id, + ) ) if not external_knowledge_binding: return None external_knowledge_api = db.session.scalar( select(ExternalKnowledgeApis).where( - ExternalKnowledgeApis.id == external_knowledge_binding.external_knowledge_api_id + ExternalKnowledgeApis.id == external_knowledge_binding.external_knowledge_api_id, + ExternalKnowledgeApis.tenant_id == self.tenant_id, ) ) if external_knowledge_api is None or external_knowledge_api.settings is None: @@ -675,8 +729,8 @@ class Document(Base): ) return built_in_fields - def to_dict(self) -> dict[str, Any]: - return { + def to_dict(self) -> DocumentDict: + result: DocumentDict = { "id": self.id, "tenant_id": self.tenant_id, "dataset_id": self.dataset_id, @@ -721,10 +775,11 @@ class Document(Base): "data_source_info_dict": self.data_source_info_dict, "average_segment_length": self.average_segment_length, "dataset_process_rule": self.dataset_process_rule.to_dict() if self.dataset_process_rule else None, - "dataset": None, # Dataset class doesn't have a to_dict method + "dataset": None, "segment_count": self.segment_count, "hit_count": self.hit_count, } + return result @classmethod def from_dict(cls, data: dict[str, Any]): @@ -1250,6 +1305,7 @@ class TidbAuthBinding(TypeBase): ) account: Mapped[str] = mapped_column(String(255), nullable=False) password: Mapped[str] = mapped_column(String(255), nullable=False) + qdrant_endpoint: Mapped[str | None] = mapped_column(String(512), nullable=True, default=None) created_at: Mapped[datetime] = mapped_column( DateTime, nullable=False, server_default=func.current_timestamp(), init=False ) @@ -1496,7 +1552,7 @@ class PipelineBuiltInTemplate(TypeBase): name: Mapped[str] = mapped_column(sa.String(255), nullable=False) description: Mapped[str] = mapped_column(LongText, nullable=False) chunk_structure: Mapped[str] = mapped_column(sa.String(255), nullable=False) - icon: Mapped[dict] = mapped_column(sa.JSON, nullable=False) + icon: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False) yaml_content: Mapped[str] = mapped_column(LongText, nullable=False) copyright: Mapped[str] = mapped_column(sa.String(255), nullable=False) privacy_policy: Mapped[str] = mapped_column(sa.String(255), nullable=False) @@ -1529,7 +1585,7 @@ class PipelineCustomizedTemplate(TypeBase): name: Mapped[str] = mapped_column(sa.String(255), nullable=False) description: Mapped[str] = mapped_column(LongText, nullable=False) chunk_structure: Mapped[str] = mapped_column(sa.String(255), nullable=False) - icon: Mapped[dict] = mapped_column(sa.JSON, nullable=False) + icon: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False) position: Mapped[int] = mapped_column(sa.Integer, nullable=False) yaml_content: Mapped[str] = mapped_column(LongText, nullable=False) install_count: Mapped[int] = mapped_column(sa.Integer, nullable=False) @@ -1602,7 +1658,7 @@ class DocumentPipelineExecutionLog(TypeBase): datasource_type: Mapped[str] = mapped_column(sa.String(255), nullable=False) datasource_info: Mapped[str] = mapped_column(LongText, nullable=False) datasource_node_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) - input_data: Mapped[dict] = mapped_column(sa.JSON, nullable=False) + input_data: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False) created_by: Mapped[str | None] = mapped_column(StringUUID, nullable=True) created_at: Mapped[datetime] = mapped_column( sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False @@ -1633,7 +1689,7 @@ class PipelineRecommendedPlugin(TypeBase): ) -class SegmentAttachmentBinding(Base): +class SegmentAttachmentBinding(TypeBase): __tablename__ = "segment_attachment_bindings" __table_args__ = ( sa.PrimaryKeyConstraint("id", name="segment_attachment_binding_pkey"), @@ -1646,13 +1702,17 @@ class SegmentAttachmentBinding(Base): ), sa.Index("segment_attachment_binding_attachment_idx", "attachment_id"), ) - id: Mapped[str] = mapped_column(StringUUID, default=lambda: str(uuidv7())) + id: Mapped[str] = mapped_column( + StringUUID, insert_default=lambda: str(uuidv7()), default_factory=lambda: str(uuidv7()), init=False + ) tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) dataset_id: Mapped[str] = mapped_column(StringUUID, nullable=False) document_id: Mapped[str] = mapped_column(StringUUID, nullable=False) segment_id: Mapped[str] = mapped_column(StringUUID, nullable=False) attachment_id: Mapped[str] = mapped_column(StringUUID, nullable=False) - created_at: Mapped[datetime] = mapped_column(sa.DateTime, nullable=False, server_default=func.current_timestamp()) + created_at: Mapped[datetime] = mapped_column( + sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False + ) class DocumentSegmentSummary(Base): diff --git a/api/models/human_input.py b/api/models/human_input.py index 79c5d62f6a..b4c7a634b6 100644 --- a/api/models/human_input.py +++ b/api/models/human_input.py @@ -3,11 +3,11 @@ from enum import StrEnum from typing import Annotated, Literal, Self, final import sqlalchemy as sa -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from pydantic import BaseModel, Field from sqlalchemy.orm import Mapped, mapped_column, relationship from core.workflow.human_input_compat import DeliveryMethodType +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.helper import generate_string from .base import Base, DefaultFieldsMixin diff --git a/api/models/model.py b/api/models/model.py index 43ddf344d2..7fe0731098 100644 --- a/api/models/model.py +++ b/api/models/model.py @@ -14,9 +14,6 @@ from uuid import uuid4 import sqlalchemy as sa from flask import request from flask_login import UserMixin # type: ignore[import-untyped] -from graphon.enums import WorkflowExecutionStatus -from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod, FileType -from graphon.file import helpers as file_helpers from sqlalchemy import BigInteger, Float, Index, PrimaryKeyConstraint, String, exists, func, select, text from sqlalchemy.orm import Mapped, Session, mapped_column, sessionmaker @@ -24,6 +21,9 @@ from configs import dify_config from constants import DEFAULT_FILE_NUMBER_LIMITS from core.tools.signature import sign_tool_file from extensions.storage.storage_type import StorageType +from graphon.enums import WorkflowExecutionStatus +from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod, FileType +from graphon.file import helpers as file_helpers from libs.helper import generate_string # type: ignore[import-not-found] from libs.uuid_utils import uuidv7 from models.utils.file_input_compat import build_file_from_input_mapping @@ -674,28 +674,24 @@ class AppModelConfig(TypeBase): def suggested_questions_list(self) -> list[str]: return json.loads(self.suggested_questions) if self.suggested_questions else [] + def _get_enabled_config(self, value: str | None, *, default_enabled: bool = False) -> EnabledConfig: + return cast(EnabledConfig, json.loads(value) if value else {"enabled": default_enabled}) + @property def suggested_questions_after_answer_dict(self) -> EnabledConfig: - return cast( - EnabledConfig, - json.loads(self.suggested_questions_after_answer) - if self.suggested_questions_after_answer - else {"enabled": False}, - ) + return self._get_enabled_config(self.suggested_questions_after_answer) @property def speech_to_text_dict(self) -> EnabledConfig: - return cast(EnabledConfig, json.loads(self.speech_to_text) if self.speech_to_text else {"enabled": False}) + return self._get_enabled_config(self.speech_to_text) @property def text_to_speech_dict(self) -> EnabledConfig: - return cast(EnabledConfig, json.loads(self.text_to_speech) if self.text_to_speech else {"enabled": False}) + return self._get_enabled_config(self.text_to_speech) @property def retriever_resource_dict(self) -> EnabledConfig: - return cast( - EnabledConfig, json.loads(self.retriever_resource) if self.retriever_resource else {"enabled": True} - ) + return self._get_enabled_config(self.retriever_resource, default_enabled=True) @property def annotation_reply_dict(self) -> AnnotationReplyConfig: @@ -722,7 +718,7 @@ class AppModelConfig(TypeBase): @property def more_like_this_dict(self) -> EnabledConfig: - return cast(EnabledConfig, json.loads(self.more_like_this) if self.more_like_this else {"enabled": False}) + return self._get_enabled_config(self.more_like_this) @property def sensitive_word_avoidance_dict(self) -> SensitiveWordAvoidanceConfig: @@ -813,60 +809,36 @@ class AppModelConfig(TypeBase): "file_upload": self.file_upload_dict, } + @staticmethod + def _dump_optional(value: Any) -> str | None: + return json.dumps(value) if value else None + def from_model_config_dict(self, model_config: AppModelConfigDict): self.opening_statement = model_config.get("opening_statement") - self.suggested_questions = ( - json.dumps(model_config.get("suggested_questions")) if model_config.get("suggested_questions") else None - ) - self.suggested_questions_after_answer = ( - json.dumps(model_config.get("suggested_questions_after_answer")) - if model_config.get("suggested_questions_after_answer") - else None - ) - self.speech_to_text = ( - json.dumps(model_config.get("speech_to_text")) if model_config.get("speech_to_text") else None - ) - self.text_to_speech = ( - json.dumps(model_config.get("text_to_speech")) if model_config.get("text_to_speech") else None - ) - self.more_like_this = ( - json.dumps(model_config.get("more_like_this")) if model_config.get("more_like_this") else None - ) - self.sensitive_word_avoidance = ( - json.dumps(model_config.get("sensitive_word_avoidance")) - if model_config.get("sensitive_word_avoidance") - else None - ) - self.external_data_tools = ( - json.dumps(model_config.get("external_data_tools")) if model_config.get("external_data_tools") else None - ) - self.model = json.dumps(model_config.get("model")) if model_config.get("model") else None - self.user_input_form = ( - json.dumps(model_config.get("user_input_form")) if model_config.get("user_input_form") else None + self.suggested_questions = self._dump_optional(model_config.get("suggested_questions")) + self.suggested_questions_after_answer = self._dump_optional( + model_config.get("suggested_questions_after_answer") ) + self.speech_to_text = self._dump_optional(model_config.get("speech_to_text")) + self.text_to_speech = self._dump_optional(model_config.get("text_to_speech")) + self.more_like_this = self._dump_optional(model_config.get("more_like_this")) + self.sensitive_word_avoidance = self._dump_optional(model_config.get("sensitive_word_avoidance")) + self.external_data_tools = self._dump_optional(model_config.get("external_data_tools")) + self.model = self._dump_optional(model_config.get("model")) + self.user_input_form = self._dump_optional(model_config.get("user_input_form")) self.dataset_query_variable = model_config.get("dataset_query_variable") self.pre_prompt = model_config.get("pre_prompt") - self.agent_mode = json.dumps(model_config.get("agent_mode")) if model_config.get("agent_mode") else None - self.retriever_resource = ( - json.dumps(model_config.get("retriever_resource")) if model_config.get("retriever_resource") else None - ) + self.agent_mode = self._dump_optional(model_config.get("agent_mode")) + self.retriever_resource = self._dump_optional(model_config.get("retriever_resource")) self.prompt_type = PromptType(model_config.get("prompt_type", "simple")) - self.chat_prompt_config = ( - json.dumps(model_config.get("chat_prompt_config")) if model_config.get("chat_prompt_config") else None - ) - self.completion_prompt_config = ( - json.dumps(model_config.get("completion_prompt_config")) - if model_config.get("completion_prompt_config") - else None - ) - self.dataset_configs = ( - json.dumps(model_config.get("dataset_configs")) if model_config.get("dataset_configs") else None - ) - self.file_upload = json.dumps(model_config.get("file_upload")) if model_config.get("file_upload") else None + self.chat_prompt_config = self._dump_optional(model_config.get("chat_prompt_config")) + self.completion_prompt_config = self._dump_optional(model_config.get("completion_prompt_config")) + self.dataset_configs = self._dump_optional(model_config.get("dataset_configs")) + self.file_upload = self._dump_optional(model_config.get("file_upload")) return self -class RecommendedApp(Base): # bug +class RecommendedApp(TypeBase): __tablename__ = "recommended_apps" __table_args__ = ( sa.PrimaryKeyConstraint("id", name="recommended_app_pkey"), @@ -874,20 +846,37 @@ class RecommendedApp(Base): # bug sa.Index("recommended_app_is_listed_idx", "is_listed", "language"), ) - id = mapped_column(StringUUID, primary_key=True, default=lambda: str(uuid4())) - app_id = mapped_column(StringUUID, nullable=False) - description = mapped_column(sa.JSON, nullable=False) + id: Mapped[str] = mapped_column( + StringUUID, + primary_key=True, + insert_default=lambda: str(uuid4()), + default_factory=lambda: str(uuid4()), + init=False, + ) + app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + description: Mapped[Any] = mapped_column(sa.JSON, nullable=False) copyright: Mapped[str] = mapped_column(String(255), nullable=False) privacy_policy: Mapped[str] = mapped_column(String(255), nullable=False) - custom_disclaimer: Mapped[str] = mapped_column(LongText, default="") category: Mapped[str] = mapped_column(String(255), nullable=False) + custom_disclaimer: Mapped[str] = mapped_column(LongText, default="") position: Mapped[int] = mapped_column(sa.Integer, nullable=False, default=0) is_listed: Mapped[bool] = mapped_column(sa.Boolean, nullable=False, default=True) install_count: Mapped[int] = mapped_column(sa.Integer, nullable=False, default=0) - language = mapped_column(String(255), nullable=False, server_default=sa.text("'en-US'")) - created_at = mapped_column(sa.DateTime, nullable=False, server_default=func.current_timestamp()) - updated_at = mapped_column( - sa.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp() + language: Mapped[str] = mapped_column( + String(255), + nullable=False, + server_default=sa.text("'en-US'"), + default="en-US", + ) + created_at: Mapped[datetime] = mapped_column( + sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False + ) + updated_at: Mapped[datetime] = mapped_column( + sa.DateTime, + nullable=False, + server_default=func.current_timestamp(), + onupdate=func.current_timestamp(), + init=False, ) @property @@ -926,7 +915,7 @@ class InstalledApp(TypeBase): return db.session.scalar(select(Tenant).where(Tenant.id == self.tenant_id)) -class TrialApp(Base): +class TrialApp(TypeBase): __tablename__ = "trial_apps" __table_args__ = ( sa.PrimaryKeyConstraint("id", name="trial_app_pkey"), @@ -935,18 +924,22 @@ class TrialApp(Base): sa.UniqueConstraint("app_id", name="unique_trail_app_id"), ) - id = mapped_column(StringUUID, default=gen_uuidv4_string) - app_id = mapped_column(StringUUID, nullable=False) - tenant_id = mapped_column(StringUUID, nullable=False) - created_at = mapped_column(sa.DateTime, nullable=False, server_default=func.current_timestamp()) - trial_limit = mapped_column(sa.Integer, nullable=False, default=3) + id: Mapped[str] = mapped_column( + StringUUID, insert_default=gen_uuidv4_string, default_factory=gen_uuidv4_string, init=False + ) + app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + created_at: Mapped[datetime] = mapped_column( + sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False + ) + trial_limit: Mapped[int] = mapped_column(sa.Integer, nullable=False, default=3) @property def app(self) -> App | None: return db.session.scalar(select(App).where(App.id == self.app_id)) -class AccountTrialAppRecord(Base): +class AccountTrialAppRecord(TypeBase): __tablename__ = "account_trial_app_records" __table_args__ = ( sa.PrimaryKeyConstraint("id", name="user_trial_app_pkey"), @@ -954,11 +947,15 @@ class AccountTrialAppRecord(Base): sa.Index("account_trial_app_record_app_id_idx", "app_id"), sa.UniqueConstraint("account_id", "app_id", name="unique_account_trial_app_record"), ) - id = mapped_column(StringUUID, default=gen_uuidv4_string) - account_id = mapped_column(StringUUID, nullable=False) - app_id = mapped_column(StringUUID, nullable=False) - count = mapped_column(sa.Integer, nullable=False, default=0) - created_at = mapped_column(sa.DateTime, nullable=False, server_default=func.current_timestamp()) + id: Mapped[str] = mapped_column( + StringUUID, insert_default=gen_uuidv4_string, default_factory=gen_uuidv4_string, init=False + ) + account_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + count: Mapped[int] = mapped_column(sa.Integer, nullable=False, default=0) + created_at: Mapped[datetime] = mapped_column( + sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False + ) @property def app(self) -> App | None: @@ -1010,7 +1007,7 @@ class OAuthProviderApp(TypeBase): app_icon: Mapped[str] = mapped_column(String(255), nullable=False) client_id: Mapped[str] = mapped_column(String(255), nullable=False) client_secret: Mapped[str] = mapped_column(String(255), nullable=False) - app_label: Mapped[dict] = mapped_column(sa.JSON, nullable=False, default_factory=dict) + app_label: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False, default_factory=dict) redirect_uris: Mapped[list] = mapped_column(sa.JSON, nullable=False, default_factory=list) scope: Mapped[str] = mapped_column( String(255), @@ -1081,7 +1078,7 @@ class Conversation(Base): messages = db.relationship("Message", backref="conversation", lazy="select", passive_deletes="all") message_annotations = db.relationship( - "MessageAnnotation", backref="conversation", lazy="select", passive_deletes="all" + lambda: MessageAnnotation, backref="conversation", lazy="select", passive_deletes="all" ) is_deleted: Mapped[bool] = mapped_column(sa.Boolean, nullable=False, server_default=sa.text("false")) @@ -1632,52 +1629,53 @@ class Message(Base): files: list[File] = [] for message_file in message_files: - if message_file.transfer_method == FileTransferMethod.LOCAL_FILE: - if message_file.upload_file_id is None: - raise ValueError(f"MessageFile {message_file.id} is a local file but has no upload_file_id") - file = file_factory.build_from_mapping( - mapping={ + match message_file.transfer_method: + case FileTransferMethod.LOCAL_FILE: + if message_file.upload_file_id is None: + raise ValueError(f"MessageFile {message_file.id} is a local file but has no upload_file_id") + file = file_factory.build_from_mapping( + mapping={ + "id": message_file.id, + "type": message_file.type, + "transfer_method": message_file.transfer_method, + "upload_file_id": message_file.upload_file_id, + }, + tenant_id=current_app.tenant_id, + access_controller=_get_file_access_controller(), + ) + case FileTransferMethod.REMOTE_URL: + if message_file.url is None: + raise ValueError(f"MessageFile {message_file.id} is a remote url but has no url") + file = file_factory.build_from_mapping( + mapping={ + "id": message_file.id, + "type": message_file.type, + "transfer_method": message_file.transfer_method, + "upload_file_id": message_file.upload_file_id, + "url": message_file.url, + }, + tenant_id=current_app.tenant_id, + access_controller=_get_file_access_controller(), + ) + case FileTransferMethod.TOOL_FILE: + if message_file.upload_file_id is None: + assert message_file.url is not None + message_file.upload_file_id = message_file.url.split("/")[-1].split(".")[0] + mapping = { "id": message_file.id, "type": message_file.type, "transfer_method": message_file.transfer_method, - "upload_file_id": message_file.upload_file_id, - }, - tenant_id=current_app.tenant_id, - access_controller=_get_file_access_controller(), - ) - elif message_file.transfer_method == FileTransferMethod.REMOTE_URL: - if message_file.url is None: - raise ValueError(f"MessageFile {message_file.id} is a remote url but has no url") - file = file_factory.build_from_mapping( - mapping={ - "id": message_file.id, - "type": message_file.type, - "transfer_method": message_file.transfer_method, - "upload_file_id": message_file.upload_file_id, - "url": message_file.url, - }, - tenant_id=current_app.tenant_id, - access_controller=_get_file_access_controller(), - ) - elif message_file.transfer_method == FileTransferMethod.TOOL_FILE: - if message_file.upload_file_id is None: - assert message_file.url is not None - message_file.upload_file_id = message_file.url.split("/")[-1].split(".")[0] - mapping = { - "id": message_file.id, - "type": message_file.type, - "transfer_method": message_file.transfer_method, - "tool_file_id": message_file.upload_file_id, - } - file = file_factory.build_from_mapping( - mapping=mapping, - tenant_id=current_app.tenant_id, - access_controller=_get_file_access_controller(), - ) - else: - raise ValueError( - f"MessageFile {message_file.id} has an invalid transfer_method {message_file.transfer_method}" - ) + "tool_file_id": message_file.upload_file_id, + } + file = file_factory.build_from_mapping( + mapping=mapping, + tenant_id=current_app.tenant_id, + access_controller=_get_file_access_controller(), + ) + case FileTransferMethod.DATASOURCE_FILE: + raise ValueError( + f"MessageFile {message_file.id} has an invalid transfer_method {message_file.transfer_method}" + ) files.append(file) result = cast( @@ -1839,7 +1837,7 @@ class MessageFile(TypeBase): ) -class MessageAnnotation(Base): +class MessageAnnotation(TypeBase): __tablename__ = "message_annotations" __table_args__ = ( sa.PrimaryKeyConstraint("id", name="message_annotation_pkey"), @@ -1848,17 +1846,25 @@ class MessageAnnotation(Base): sa.Index("message_annotation_message_idx", "message_id"), ) - id: Mapped[str] = mapped_column(StringUUID, default=lambda: str(uuid4())) + id: Mapped[str] = mapped_column( + StringUUID, insert_default=lambda: str(uuid4()), default_factory=lambda: str(uuid4()), init=False + ) app_id: Mapped[str] = mapped_column(StringUUID) - conversation_id: Mapped[str | None] = mapped_column(StringUUID, sa.ForeignKey("conversations.id")) - message_id: Mapped[str | None] = mapped_column(StringUUID) question: Mapped[str] = mapped_column(LongText, nullable=False) content: Mapped[str] = mapped_column(LongText, nullable=False) - hit_count: Mapped[int] = mapped_column(sa.Integer, nullable=False, server_default=sa.text("0")) account_id: Mapped[str] = mapped_column(StringUUID, nullable=False) - created_at: Mapped[datetime] = mapped_column(sa.DateTime, nullable=False, server_default=func.current_timestamp()) + conversation_id: Mapped[str | None] = mapped_column(StringUUID, sa.ForeignKey("conversations.id"), default=None) + message_id: Mapped[str | None] = mapped_column(StringUUID, default=None) + hit_count: Mapped[int] = mapped_column(sa.Integer, nullable=False, server_default=sa.text("0"), default=0) + created_at: Mapped[datetime] = mapped_column( + sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False + ) updated_at: Mapped[datetime] = mapped_column( - sa.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp() + sa.DateTime, + nullable=False, + server_default=func.current_timestamp(), + onupdate=func.current_timestamp(), + init=False, ) @property @@ -2489,7 +2495,7 @@ class TraceAppConfig(TypeBase): ) app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) tracing_provider: Mapped[str | None] = mapped_column(String(255), nullable=True) - tracing_config: Mapped[dict | None] = mapped_column(sa.JSON, nullable=True) + tracing_config: Mapped[dict[str, Any] | None] = mapped_column(sa.JSON, nullable=True) created_at: Mapped[datetime] = mapped_column( sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False ) diff --git a/api/models/oauth.py b/api/models/oauth.py index 1db2552469..bd04d890d3 100644 --- a/api/models/oauth.py +++ b/api/models/oauth.py @@ -1,4 +1,5 @@ from datetime import datetime +from typing import Any import sqlalchemy as sa from sqlalchemy import func @@ -22,7 +23,7 @@ class DatasourceOauthParamConfig(TypeBase): ) plugin_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) provider: Mapped[str] = mapped_column(sa.String(255), nullable=False) - system_credentials: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False) + system_credentials: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False) class DatasourceProvider(TypeBase): @@ -40,7 +41,7 @@ class DatasourceProvider(TypeBase): provider: Mapped[str] = mapped_column(sa.String(128), nullable=False) plugin_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) auth_type: Mapped[str] = mapped_column(sa.String(255), nullable=False) - encrypted_credentials: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False) + encrypted_credentials: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False) avatar_url: Mapped[str] = mapped_column(LongText, nullable=True, default="default") is_default: Mapped[bool] = mapped_column(sa.Boolean, nullable=False, server_default=sa.text("false"), default=False) expires_at: Mapped[int] = mapped_column(sa.Integer, nullable=False, server_default="-1", default=-1) @@ -70,7 +71,7 @@ class DatasourceOauthTenantParamConfig(TypeBase): tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) provider: Mapped[str] = mapped_column(sa.String(255), nullable=False) plugin_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) - client_params: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False, default_factory=dict) + client_params: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False, default_factory=dict) enabled: Mapped[bool] = mapped_column(sa.Boolean, nullable=False, default=False) created_at: Mapped[datetime] = mapped_column( diff --git a/api/models/provider.py b/api/models/provider.py index 8270961b31..2bb67d605b 100644 --- a/api/models/provider.py +++ b/api/models/provider.py @@ -6,10 +6,10 @@ from functools import cached_property from uuid import uuid4 import sqlalchemy as sa -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import DateTime, String, func, select, text from sqlalchemy.orm import Mapped, mapped_column +from graphon.model_runtime.entities.model_entities import ModelType from libs.uuid_utils import uuidv7 from .base import TypeBase diff --git a/api/models/source.py b/api/models/source.py index a8addbe342..8fce7df205 100644 --- a/api/models/source.py +++ b/api/models/source.py @@ -1,5 +1,6 @@ import json from datetime import datetime +from typing import Any, TypedDict from uuid import uuid4 import sqlalchemy as sa @@ -24,7 +25,7 @@ class DataSourceOauthBinding(TypeBase): tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) access_token: Mapped[str] = mapped_column(String(255), nullable=False) provider: Mapped[str] = mapped_column(String(255), nullable=False) - source_info: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False) + source_info: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False) created_at: Mapped[datetime] = mapped_column( DateTime, nullable=False, server_default=func.current_timestamp(), init=False ) @@ -38,6 +39,17 @@ class DataSourceOauthBinding(TypeBase): disabled: Mapped[bool] = mapped_column(sa.Boolean, nullable=True, server_default=sa.text("false"), default=False) +class DataSourceApiKeyAuthBindingDict(TypedDict): + id: str + tenant_id: str + category: str + provider: str + credentials: Any + created_at: float + updated_at: float + disabled: bool + + class DataSourceApiKeyAuthBinding(TypeBase): __tablename__ = "data_source_api_key_auth_bindings" __table_args__ = ( @@ -65,8 +77,8 @@ class DataSourceApiKeyAuthBinding(TypeBase): ) disabled: Mapped[bool] = mapped_column(sa.Boolean, nullable=True, server_default=sa.text("false"), default=False) - def to_dict(self): - return { + def to_dict(self) -> DataSourceApiKeyAuthBindingDict: + result: DataSourceApiKeyAuthBindingDict = { "id": self.id, "tenant_id": self.tenant_id, "category": self.category, @@ -76,3 +88,4 @@ class DataSourceApiKeyAuthBinding(TypeBase): "updated_at": self.updated_at.timestamp(), "disabled": self.disabled, } + return result diff --git a/api/models/types.py b/api/models/types.py index c1d9c3845a..4f35c31a27 100644 --- a/api/models/types.py +++ b/api/models/types.py @@ -103,10 +103,14 @@ class AdjustedJSON(TypeDecorator[dict | list | None]): else: return dialect.type_descriptor(sa.JSON()) - def process_bind_param(self, value: dict | list | None, dialect: Dialect) -> dict | list | None: + def process_bind_param( + self, value: dict[str, Any] | list[Any] | None, dialect: Dialect + ) -> dict[str, Any] | list[Any] | None: return value - def process_result_value(self, value: dict | list | None, dialect: Dialect) -> dict | list | None: + def process_result_value( + self, value: dict[str, Any] | list[Any] | None, dialect: Dialect + ) -> dict[str, Any] | list[Any] | None: return value diff --git a/api/models/utils/file_input_compat.py b/api/models/utils/file_input_compat.py index f71583c1cd..a2dc8f6157 100644 --- a/api/models/utils/file_input_compat.py +++ b/api/models/utils/file_input_compat.py @@ -4,9 +4,8 @@ from collections.abc import Callable, Mapping from functools import lru_cache from typing import Any -from graphon.file import File, FileTransferMethod - from core.workflow.file_reference import parse_file_reference +from graphon.file import File, FileTransferMethod @lru_cache(maxsize=1) @@ -66,12 +65,15 @@ def build_file_from_stored_mapping( record_id = resolve_file_record_id(mapping) transfer_method = FileTransferMethod.value_of(mapping["transfer_method"]) - if transfer_method == FileTransferMethod.TOOL_FILE and record_id: - mapping["tool_file_id"] = record_id - elif transfer_method in [FileTransferMethod.LOCAL_FILE, FileTransferMethod.REMOTE_URL] and record_id: - mapping["upload_file_id"] = record_id - elif transfer_method == FileTransferMethod.DATASOURCE_FILE and record_id: - mapping["datasource_file_id"] = record_id + match transfer_method: + case FileTransferMethod.TOOL_FILE if record_id: + mapping["tool_file_id"] = record_id + case FileTransferMethod.LOCAL_FILE | FileTransferMethod.REMOTE_URL if record_id: + mapping["upload_file_id"] = record_id + case FileTransferMethod.DATASOURCE_FILE if record_id: + mapping["datasource_file_id"] = record_id + case _: + pass if transfer_method == FileTransferMethod.REMOTE_URL and record_id is None: remote_url = mapping.get("remote_url") diff --git a/api/models/workflow.py b/api/models/workflow.py index 1063016370..dfda03c2ee 100644 --- a/api/models/workflow.py +++ b/api/models/workflow.py @@ -4,23 +4,10 @@ import logging from collections.abc import Generator, Mapping, Sequence from datetime import datetime from enum import StrEnum -from typing import TYPE_CHECKING, Any, Optional, TypedDict, Union, cast +from typing import TYPE_CHECKING, Any, Optional, TypedDict, cast from uuid import uuid4 import sqlalchemy as sa -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause -from graphon.enums import ( - BuiltinNodeTypes, - NodeType, - WorkflowExecutionStatus, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) -from graphon.file import File -from graphon.file.constants import maybe_file_object -from graphon.variables import utils as variable_utils -from graphon.variables.variables import FloatVariable, IntegerVariable, RAGPipelineVariable, StringVariable from sqlalchemy import ( DateTime, Index, @@ -44,6 +31,19 @@ from core.workflow.variable_prefixes import ( ) from extensions.ext_storage import Storage from factories.variable_factory import TypeMismatchError, build_segment_with_type +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause +from graphon.enums import ( + BuiltinNodeTypes, + NodeType, + WorkflowExecutionStatus, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) +from graphon.file import File +from graphon.file.constants import maybe_file_object +from graphon.variables import utils as variable_utils +from graphon.variables.variables import FloatVariable, IntegerVariable, RAGPipelineVariable, StringVariable from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 @@ -53,15 +53,14 @@ if TYPE_CHECKING: from .model import AppMode, UploadFile -from graphon.variables import SecretVariable, Segment, SegmentType, VariableBase - from constants import DEFAULT_FILE_NUMBER_LIMITS, HIDDEN_VALUE from core.helper import encrypter from factories import variable_factory +from graphon.variables import SecretVariable, Segment, SegmentType, VariableBase from libs import helper from .account import Account -from .base import Base, DefaultFieldsMixin, TypeBase +from .base import Base, DefaultFieldsDCMixin, TypeBase from .engine import db from .enums import CreatorUserRole, DraftVariableType, ExecutionOffLoadType, WorkflowRunTriggeredFrom from .types import EnumText, LongText, StringUUID @@ -121,7 +120,7 @@ class WorkflowType(StrEnum): raise ValueError(f"invalid workflow type value {value}") @classmethod - def from_app_mode(cls, app_mode: Union[str, "AppMode"]) -> "WorkflowType": + def from_app_mode(cls, app_mode: "str | AppMode") -> "WorkflowType": """ Get workflow type from app mode. @@ -490,7 +489,7 @@ class Workflow(Base): # bug :return: hash """ - entity = {"graph": self.graph_dict, "features": self.features_dict} + entity = {"graph": self.graph_dict} return helper.generate_text_hash(json.dumps(entity, sort_keys=True)) @@ -671,6 +670,29 @@ class Workflow(Base): # bug return str(d) +class WorkflowRunDict(TypedDict): + id: str + tenant_id: str + app_id: str + workflow_id: str + type: WorkflowType + triggered_from: WorkflowRunTriggeredFrom + version: str + graph: Mapping[str, Any] + inputs: Mapping[str, Any] + status: WorkflowExecutionStatus + outputs: Mapping[str, Any] + error: str | None + elapsed_time: float + total_tokens: int + total_steps: int + created_by_role: CreatorUserRole + created_by: str + created_at: datetime + finished_at: datetime | None + exceptions_count: int + + class WorkflowRun(Base): """ Workflow Run @@ -742,8 +764,8 @@ class WorkflowRun(Base): exceptions_count: Mapped[int] = mapped_column(sa.Integer, server_default=sa.text("0"), nullable=True) pause: Mapped[Optional["WorkflowPause"]] = orm.relationship( - "WorkflowPause", - primaryjoin="WorkflowRun.id == foreign(WorkflowPause.workflow_run_id)", + lambda: WorkflowPause, + primaryjoin=lambda: WorkflowRun.id == orm.foreign(WorkflowPause.workflow_run_id), uselist=False, # require explicit preloading. lazy="raise", @@ -790,29 +812,29 @@ class WorkflowRun(Base): def workflow(self): return db.session.scalar(select(Workflow).where(Workflow.id == self.workflow_id)) - def to_dict(self): - return { - "id": self.id, - "tenant_id": self.tenant_id, - "app_id": self.app_id, - "workflow_id": self.workflow_id, - "type": self.type, - "triggered_from": self.triggered_from, - "version": self.version, - "graph": self.graph_dict, - "inputs": self.inputs_dict, - "status": self.status, - "outputs": self.outputs_dict, - "error": self.error, - "elapsed_time": self.elapsed_time, - "total_tokens": self.total_tokens, - "total_steps": self.total_steps, - "created_by_role": self.created_by_role, - "created_by": self.created_by, - "created_at": self.created_at, - "finished_at": self.finished_at, - "exceptions_count": self.exceptions_count, - } + def to_dict(self) -> WorkflowRunDict: + return WorkflowRunDict( + id=self.id, + tenant_id=self.tenant_id, + app_id=self.app_id, + workflow_id=self.workflow_id, + type=self.type, + triggered_from=self.triggered_from, + version=self.version, + graph=self.graph_dict, + inputs=self.inputs_dict, + status=self.status, + outputs=self.outputs_dict, + error=self.error, + elapsed_time=self.elapsed_time, + total_tokens=self.total_tokens, + total_steps=self.total_steps, + created_by_role=self.created_by_role, + created_by=self.created_by, + created_at=self.created_at, + finished_at=self.finished_at, + exceptions_count=self.exceptions_count, + ) @classmethod def from_dict(cls, data: dict[str, Any]) -> "WorkflowRun": @@ -1051,7 +1073,7 @@ class WorkflowNodeExecutionModel(Base): # This model is expected to have `offlo ) return extras - def _get_offload_by_type(self, type_: ExecutionOffLoadType) -> Optional["WorkflowNodeExecutionOffload"]: + def _get_offload_by_type(self, type_: ExecutionOffLoadType) -> "WorkflowNodeExecutionOffload | None": return next(iter([i for i in self.offload_data if i.type_ == type_]), None) @property @@ -1196,6 +1218,18 @@ class WorkflowAppLogCreatedFrom(StrEnum): raise ValueError(f"invalid workflow app log created from value {value}") +class WorkflowAppLogDict(TypedDict): + id: str + tenant_id: str + app_id: str + workflow_id: str + workflow_run_id: str + created_from: WorkflowAppLogCreatedFrom + created_by_role: CreatorUserRole + created_by: str + created_at: datetime + + class WorkflowAppLog(TypeBase): """ Workflow App execution log, excluding workflow debugging records. @@ -1273,8 +1307,8 @@ class WorkflowAppLog(TypeBase): created_by_role = CreatorUserRole(self.created_by_role) return db.session.get(EndUser, self.created_by) if created_by_role == CreatorUserRole.END_USER else None - def to_dict(self): - return { + def to_dict(self) -> WorkflowAppLogDict: + result: WorkflowAppLogDict = { "id": self.id, "tenant_id": self.tenant_id, "app_id": self.app_id, @@ -1285,6 +1319,7 @@ class WorkflowAppLog(TypeBase): "created_by": self.created_by, "created_at": self.created_at, } + return result class WorkflowArchiveLog(TypeBase): @@ -1625,21 +1660,22 @@ class WorkflowDraftVariable(Base): # Rebuild them through the file factory so tenant ownership, signed URLs, # and storage-backed metadata come from canonical records instead of the # serialized JSON blob. - if segment_type == SegmentType.FILE: - if isinstance(value, File): - return build_segment_with_type(segment_type, value) - elif isinstance(value, dict): - file = self._rebuild_file_types(value) - return build_segment_with_type(segment_type, file) - else: - raise TypeMismatchError(f"expected dict or File for FileSegment, got {type(value)}") - if segment_type == SegmentType.ARRAY_FILE: - if not isinstance(value, list): - raise TypeMismatchError(f"expected list for ArrayFileSegment, got {type(value)}") - file_list = self._rebuild_file_types(value) - return build_segment_with_type(segment_type=segment_type, value=file_list) - - return build_segment_with_type(segment_type=segment_type, value=value) + match segment_type: + case SegmentType.FILE: + if isinstance(value, File): + return build_segment_with_type(segment_type, value) + elif isinstance(value, dict): + file = self._rebuild_file_types(value) + return build_segment_with_type(segment_type, file) + else: + raise TypeMismatchError(f"expected dict or File for FileSegment, got {type(value)}") + case SegmentType.ARRAY_FILE: + if not isinstance(value, list): + raise TypeMismatchError(f"expected list for ArrayFileSegment, got {type(value)}") + file_list = self._rebuild_file_types(value) + return build_segment_with_type(segment_type=segment_type, value=file_list) + case _: + return build_segment_with_type(segment_type=segment_type, value=value) @staticmethod def rebuild_file_types(value: Any): @@ -1672,21 +1708,22 @@ class WorkflowDraftVariable(Base): # Extends `variable_factory.build_segment_with_type` functionality by # reconstructing `FileSegment`` or `ArrayFileSegment`` objects from # their serialized dictionary or list representations, respectively. - if segment_type == SegmentType.FILE: - if isinstance(value, File): - return build_segment_with_type(segment_type, value) - elif isinstance(value, dict): - file = cls.rebuild_file_types(value) - return build_segment_with_type(segment_type, file) - else: - raise TypeMismatchError(f"expected dict or File for FileSegment, got {type(value)}") - if segment_type == SegmentType.ARRAY_FILE: - if not isinstance(value, list): - raise TypeMismatchError(f"expected list for ArrayFileSegment, got {type(value)}") - file_list = cls.rebuild_file_types(value) - return build_segment_with_type(segment_type=segment_type, value=file_list) - - return build_segment_with_type(segment_type=segment_type, value=value) + match segment_type: + case SegmentType.FILE: + if isinstance(value, File): + return build_segment_with_type(segment_type, value) + elif isinstance(value, dict): + file = cls.rebuild_file_types(value) + return build_segment_with_type(segment_type, file) + else: + raise TypeMismatchError(f"expected dict or File for FileSegment, got {type(value)}") + case SegmentType.ARRAY_FILE: + if not isinstance(value, list): + raise TypeMismatchError(f"expected list for ArrayFileSegment, got {type(value)}") + file_list = cls.rebuild_file_types(value) + return build_segment_with_type(segment_type=segment_type, value=file_list) + case _: + return build_segment_with_type(segment_type=segment_type, value=value) def get_value(self) -> Segment: """Decode the serialized value into its corresponding `Segment` object. @@ -1939,7 +1976,7 @@ def is_system_variable_editable(name: str) -> bool: return name in _EDITABLE_SYSTEM_VARIABLE -class WorkflowPause(DefaultFieldsMixin, Base): +class WorkflowPause(DefaultFieldsDCMixin, TypeBase): """ WorkflowPause records the paused state and related metadata for a specific workflow run. @@ -1978,6 +2015,11 @@ class WorkflowPause(DefaultFieldsMixin, Base): nullable=False, ) + # state_object_key stores the object key referencing the serialized runtime state + # of the `GraphEngine`. This object captures the complete execution context of the + # workflow at the moment it was paused, enabling accurate resumption. + state_object_key: Mapped[str] = mapped_column(String(length=255), nullable=False) + # `resumed_at` records the timestamp when the suspended workflow was resumed. # It is set to `NULL` if the workflow has not been resumed. # @@ -1986,25 +2028,23 @@ class WorkflowPause(DefaultFieldsMixin, Base): resumed_at: Mapped[datetime | None] = mapped_column( sa.DateTime, nullable=True, + default=None, ) - # state_object_key stores the object key referencing the serialized runtime state - # of the `GraphEngine`. This object captures the complete execution context of the - # workflow at the moment it was paused, enabling accurate resumption. - state_object_key: Mapped[str] = mapped_column(String(length=255), nullable=False) - - # Relationship to WorkflowRun + # Relationship to WorkflowRun (uses lambda to resolve across Base/TypeBase registries) workflow_run: Mapped["WorkflowRun"] = orm.relationship( + lambda: WorkflowRun, foreign_keys=[workflow_run_id], # require explicit preloading. lazy="raise", uselist=False, - primaryjoin="WorkflowPause.workflow_run_id == WorkflowRun.id", + primaryjoin=lambda: WorkflowPause.workflow_run_id == WorkflowRun.id, back_populates="pause", + init=False, ) -class WorkflowPauseReason(DefaultFieldsMixin, Base): +class WorkflowPauseReason(DefaultFieldsDCMixin, TypeBase): __tablename__ = "workflow_pause_reasons" # `pause_id` represents the identifier of the pause, @@ -2047,16 +2087,20 @@ class WorkflowPauseReason(DefaultFieldsMixin, Base): lazy="raise", uselist=False, primaryjoin="WorkflowPauseReason.pause_id == WorkflowPause.id", + init=False, ) @classmethod - def from_entity(cls, pause_reason: PauseReason) -> "WorkflowPauseReason": + def from_entity(cls, *, pause_id: str, pause_reason: PauseReason) -> "WorkflowPauseReason": if isinstance(pause_reason, HumanInputRequired): return cls( - type_=PauseReasonType.HUMAN_INPUT_REQUIRED, form_id=pause_reason.form_id, node_id=pause_reason.node_id + pause_id=pause_id, + type_=PauseReasonType.HUMAN_INPUT_REQUIRED, + form_id=pause_reason.form_id, + node_id=pause_reason.node_id, ) elif isinstance(pause_reason, SchedulingPause): - return cls(type_=PauseReasonType.SCHEDULED_PAUSE, message=pause_reason.message, node_id="") + return cls(pause_id=pause_id, type_=PauseReasonType.SCHEDULED_PAUSE, message=pause_reason.message) else: raise AssertionError(f"Unknown pause reason type: {pause_reason}") diff --git a/api/providers/README.md b/api/providers/README.md new file mode 100644 index 0000000000..a00ec8bc52 --- /dev/null +++ b/api/providers/README.md @@ -0,0 +1,12 @@ +# Providers + +This directory holds **optional workspace packages** that plug into Dify’s API core. Providers are responsible for implementing the interfaces and registering themselves to the API core. Provider mechanism allows building the software with selected set of providers so as to enhance the security and flexibility of distributions. + +## Developing Providers + +- [VDB Providers](vdb/README.md) + +## Tests + +Provider tests often live next to the package, e.g. `providers///tests/unit_tests/`. Shared fixtures may live under `providers/` (e.g. `conftest.py`). + diff --git a/api/providers/vdb/README.md b/api/providers/vdb/README.md new file mode 100644 index 0000000000..b5b4197f63 --- /dev/null +++ b/api/providers/vdb/README.md @@ -0,0 +1,58 @@ +# VDB providers + +This directory contains all VDB providers. + +## Architecture +1. **Core** (`api/core/rag/datasource/vdb/`) defines the contracts and loads plugins. +2. **Each provider** (`api/providers/vdb//`) implements those contracts and registers an entry point. +3. At runtime, **`importlib.metadata.entry_points`** resolves the backend name (e.g. `pgvector`) to a factory class. The registry caches loaded classes (see `vector_backend_registry.py`). + +### Interfaces + +| Piece | Role | +|--------|----------| +| `AbstractVectorFactory` | You subclass this. Implement `init_vector(dataset, attributes, embeddings) -> BaseVector`. Optionally use `gen_index_struct_dict()` for new datasets. | +| `BaseVector` | Your store class subclasses this: `create`, `add_texts`, `search_by_vector`, `delete`, etc. | +| `VectorType` | `StrEnum` of supported backend **string ids**. Add a member when you introduce a new backend that should be selectable like existing ones. | +| Discovery | Loads `dify.vector_backends` entry points and caches `get_vector_factory_class(vector_type)`. | + +The high-level caller is `Vector` in `vector_factory.py`: it reads the configured or dataset-specific vector type, calls `get_vector_factory_class`, instantiates the factory, and uses the returned `BaseVector` implementation. + +### Entry point name must match the vector type string + +Entry points are registered under the group **`dify.vector_backends`**. The **entry point name** (left-hand side) must be exactly the string used as `vector_type` everywhere else—typically the **`VectorType` enum value** (e.g. `PGVECTOR = "pgvector"` → entry point name `pgvector`; `TIDB_ON_QDRANT = "tidb_on_qdrant"` → `tidb_on_qdrant`). + +In `pyproject.toml`: + +```toml +[project.entry-points."dify.vector_backends"] +pgvector = "dify_vdb_pgvector.pgvector:PGVectorFactory" +``` + +The value is **`module:attribute`**: a importable module path and the class implementing `AbstractVectorFactory`. + +### How registration works + +1. On first use, `get_vector_factory_class(vector_type)` looks up `vector_type` in a process cache. +2. If missing, it scans **`entry_points().select(group="dify.vector_backends")`** for an entry whose **`name` equals `vector_type`**. +3. It loads that entry (`ep.load()`), which must return the **factory class** (not an instance). +4. There is an optional internal map `_BUILTIN_VECTOR_FACTORY_TARGETS` for non-distribution builtins; **normal VDB plugins use entry points only**. + +After you change a provider’s `pyproject.toml` (entry points or dependencies), run **`uv sync`** in `api/` so the installed environment’s dist-info matches the project metadata. + +### Package layout (VDB) + +Each backend usually follows: + +- `api/providers/vdb//pyproject.toml` — project name `dify-vdb-`, dependencies, entry points. +- `api/providers/vdb//src/dify_vdb_/` — implementation (e.g. `PGVector`, `PGVectorFactory`). + +See `vdb/pgvector/` as a reference implementation. + +### Wiring a new backend into the API workspace + +The API uses a **uv workspace** (`api/pyproject.toml`): + +1. **`[tool.uv.workspace]`** — `members = ["providers/vdb/*"]` already includes every subdirectory under `vdb/`; new folders there are workspace members. +2. **`[tool.uv.sources]`** — add a line for your package: `dify-vdb-mine = { workspace = true }`. +3. **`[project.optional-dependencies]`** — add a group such as `vdb-mine = ["dify-vdb-mine"]`, and list `dify-vdb-mine` under `vdb-all` if it should install with the default bundle. \ No newline at end of file diff --git a/api/providers/vdb/conftest.py b/api/providers/vdb/conftest.py new file mode 100644 index 0000000000..c4b1cdef29 --- /dev/null +++ b/api/providers/vdb/conftest.py @@ -0,0 +1,22 @@ +from unittest.mock import MagicMock + +import pytest + +from extensions import ext_redis + + +@pytest.fixture(autouse=True) +def _init_mock_redis(): + """Ensure redis_client has a backing client so __getattr__ never raises.""" + if ext_redis.redis_client._client is None: + ext_redis.redis_client.initialize(MagicMock()) + + +@pytest.fixture +def setup_mock_redis(monkeypatch: pytest.MonkeyPatch): + monkeypatch.setattr(ext_redis.redis_client, "get", MagicMock(return_value=None)) + monkeypatch.setattr(ext_redis.redis_client, "set", MagicMock(return_value=None)) + mock_redis_lock = MagicMock() + mock_redis_lock.__enter__ = MagicMock() + mock_redis_lock.__exit__ = MagicMock() + monkeypatch.setattr(ext_redis.redis_client, "lock", mock_redis_lock) diff --git a/api/providers/vdb/vdb-alibabacloud-mysql/pyproject.toml b/api/providers/vdb/vdb-alibabacloud-mysql/pyproject.toml new file mode 100644 index 0000000000..bbc0e06ffa --- /dev/null +++ b/api/providers/vdb/vdb-alibabacloud-mysql/pyproject.toml @@ -0,0 +1,13 @@ +[project] +name = "dify-vdb-alibabacloud-mysql" +version = "0.0.1" +dependencies = [ + "mysql-connector-python>=9.3.0", +] +description = "Dify vector store backend (dify-vdb-alibabacloud-mysql)." + +[project.entry-points."dify.vector_backends"] +alibabacloud_mysql = "dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector:AlibabaCloudMySQLVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/alibabacloud_mysql/__init__.py b/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/alibabacloud_mysql/__init__.py rename to api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/__init__.py diff --git a/api/core/rag/datasource/vdb/alibabacloud_mysql/alibabacloud_mysql_vector.py b/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/alibabacloud_mysql/alibabacloud_mysql_vector.py rename to api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py index 6e76827a42..37ffd11063 100644 --- a/api/core/rag/datasource/vdb/alibabacloud_mysql/alibabacloud_mysql_vector.py +++ b/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py @@ -35,7 +35,7 @@ class AlibabaCloudMySQLVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("host"): raise ValueError("config ALIBABACLOUD_MYSQL_HOST is required") if not values.get("port"): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/alibabacloud_mysql/test_alibabacloud_mysql_factory.py b/api/providers/vdb/vdb-alibabacloud-mysql/tests/unit_tests/test_alibabacloud_mysql_factory.py similarity index 94% rename from api/tests/unit_tests/core/rag/datasource/vdb/alibabacloud_mysql/test_alibabacloud_mysql_factory.py rename to api/providers/vdb/vdb-alibabacloud-mysql/tests/unit_tests/test_alibabacloud_mysql_factory.py index e063a49f22..a907f918c3 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/alibabacloud_mysql/test_alibabacloud_mysql_factory.py +++ b/api/providers/vdb/vdb-alibabacloud-mysql/tests/unit_tests/test_alibabacloud_mysql_factory.py @@ -1,10 +1,9 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch +import dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector as alibaba_module import pytest - -import core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector as alibaba_module -from core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector import AlibabaCloudMySQLVectorFactory +from dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector import AlibabaCloudMySQLVectorFactory def test_validate_distance_function_accepts_supported_values(): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/alibabacloud_mysql/test_alibabacloud_mysql_vector.py b/api/providers/vdb/vdb-alibabacloud-mysql/tests/unit_tests/test_alibabacloud_mysql_vector.py similarity index 87% rename from api/tests/unit_tests/core/rag/datasource/vdb/alibabacloud_mysql/test_alibabacloud_mysql_vector.py rename to api/providers/vdb/vdb-alibabacloud-mysql/tests/unit_tests/test_alibabacloud_mysql_vector.py index 8ccd739e64..54eeb78ca9 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/alibabacloud_mysql/test_alibabacloud_mysql_vector.py +++ b/api/providers/vdb/vdb-alibabacloud-mysql/tests/unit_tests/test_alibabacloud_mysql_vector.py @@ -3,11 +3,11 @@ import unittest from unittest.mock import MagicMock, patch import pytest - -from core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector import ( +from dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector import ( AlibabaCloudMySQLVector, AlibabaCloudMySQLVectorConfig, ) + from core.rag.models.document import Document try: @@ -49,9 +49,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): # Sample embeddings self.sample_embeddings = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]] - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_init(self, mock_pool_class): """Test AlibabaCloudMySQLVector initialization.""" # Mock the connection pool @@ -76,10 +74,8 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert alibabacloud_mysql_vector.distance_function == "cosine" assert alibabacloud_mysql_vector.pool is not None - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) - @patch("core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.redis_client") + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.redis_client") def test_create_collection(self, mock_redis, mock_pool_class): """Test collection creation.""" # Mock Redis operations @@ -110,9 +106,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert mock_cursor.execute.call_count >= 3 # CREATE TABLE + 2 indexes mock_redis.set.assert_called_once() - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_vector_support_check_success(self, mock_pool_class): """Test successful vector support check.""" # Mock the connection pool @@ -129,9 +123,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): vector_store = AlibabaCloudMySQLVector(self.collection_name, self.config) assert vector_store is not None - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_vector_support_check_failure(self, mock_pool_class): """Test vector support check failure.""" # Mock the connection pool @@ -149,9 +141,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert "RDS MySQL Vector functions are not available" in str(context.value) - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_vector_support_check_function_error(self, mock_pool_class): """Test vector support check with function not found error.""" # Mock the connection pool @@ -170,10 +160,8 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert "RDS MySQL Vector functions are not available" in str(context.value) - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) - @patch("core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.redis_client") + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.redis_client") def test_create_documents(self, mock_redis, mock_pool_class): """Test creating documents with embeddings.""" # Setup mocks @@ -186,9 +174,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert "doc1" in result assert "doc2" in result - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_add_texts(self, mock_pool_class): """Test adding texts to the vector store.""" # Mock the connection pool @@ -207,9 +193,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert len(result) == 2 mock_cursor.executemany.assert_called_once() - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_text_exists(self, mock_pool_class): """Test checking if text exists.""" # Mock the connection pool @@ -236,9 +220,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert "SELECT id FROM" in last_call[0][0] assert last_call[0][1] == ("doc1",) - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_text_not_exists(self, mock_pool_class): """Test checking if text does not exist.""" # Mock the connection pool @@ -260,9 +242,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert not exists - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_get_by_ids(self, mock_pool_class): """Test getting documents by IDs.""" # Mock the connection pool @@ -288,9 +268,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert docs[0].page_content == "Test document 1" assert docs[1].page_content == "Test document 2" - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_get_by_ids_empty_list(self, mock_pool_class): """Test getting documents with empty ID list.""" # Mock the connection pool @@ -308,9 +286,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert len(docs) == 0 - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_delete_by_ids(self, mock_pool_class): """Test deleting documents by IDs.""" # Mock the connection pool @@ -334,9 +310,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert "DELETE FROM" in delete_call[0][0] assert delete_call[0][1] == ["doc1", "doc2"] - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_delete_by_ids_empty_list(self, mock_pool_class): """Test deleting with empty ID list.""" # Mock the connection pool @@ -357,9 +331,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): delete_calls = [call for call in execute_calls if "DELETE" in str(call)] assert len(delete_calls) == 0 - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_delete_by_ids_table_not_exists(self, mock_pool_class): """Test deleting when table doesn't exist.""" # Mock the connection pool @@ -384,9 +356,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): # Should not raise an exception vector_store.delete_by_ids(["doc1"]) - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_delete_by_metadata_field(self, mock_pool_class): """Test deleting documents by metadata field.""" # Mock the connection pool @@ -410,9 +380,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert "JSON_UNQUOTE(JSON_EXTRACT(meta" in delete_call[0][0] assert delete_call[0][1] == ("$.document_id", "dataset1") - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_vector_cosine(self, mock_pool_class): """Test vector search with cosine distance.""" # Mock the connection pool @@ -437,9 +405,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert abs(docs[0].metadata["score"] - 0.9) < 0.1 # 1 - 0.1 = 0.9 assert docs[0].metadata["distance"] == 0.1 - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_vector_euclidean(self, mock_pool_class): """Test vector search with euclidean distance.""" config = AlibabaCloudMySQLVectorConfig( @@ -472,9 +438,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert len(docs) == 1 assert abs(docs[0].metadata["score"] - 1.0 / 3.0) < 0.01 # 1/(1+2) = 1/3 - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_vector_with_filter(self, mock_pool_class): """Test vector search with document ID filter.""" # Mock the connection pool @@ -499,9 +463,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): search_call = search_calls[0] assert "WHERE JSON_UNQUOTE" in search_call[0][0] - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_vector_with_score_threshold(self, mock_pool_class): """Test vector search with score threshold.""" # Mock the connection pool @@ -536,9 +498,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert len(docs) == 1 assert docs[0].page_content == "High similarity document" - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_vector_invalid_top_k(self, mock_pool_class): """Test vector search with invalid top_k.""" # Mock the connection pool @@ -560,9 +520,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): with pytest.raises(ValueError): vector_store.search_by_vector(query_vector, top_k="invalid") - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_full_text(self, mock_pool_class): """Test full-text search.""" # Mock the connection pool @@ -591,9 +549,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): assert docs[0].page_content == "This document contains machine learning content" assert docs[0].metadata["score"] == 1.5 - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_full_text_with_filter(self, mock_pool_class): """Test full-text search with document ID filter.""" # Mock the connection pool @@ -617,9 +573,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): search_call = search_calls[0] assert "AND JSON_UNQUOTE" in search_call[0][0] - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_search_by_full_text_invalid_top_k(self, mock_pool_class): """Test full-text search with invalid top_k.""" # Mock the connection pool @@ -640,9 +594,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): with pytest.raises(ValueError): vector_store.search_by_full_text("test", top_k="invalid") - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_delete_collection(self, mock_pool_class): """Test deleting the entire collection.""" # Mock the connection pool @@ -665,9 +617,7 @@ class TestAlibabaCloudMySQLVector(unittest.TestCase): drop_call = drop_calls[0] assert f"DROP TABLE IF EXISTS {self.collection_name.lower()}" in drop_call[0][0] - @patch( - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool" - ) + @patch("dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector.mysql.connector.pooling.MySQLConnectionPool") def test_unsupported_distance_function(self, mock_pool_class): """Test that Pydantic validation rejects unsupported distance functions.""" # Test that creating config with unsupported distance function raises ValidationError diff --git a/api/providers/vdb/vdb-analyticdb/pyproject.toml b/api/providers/vdb/vdb-analyticdb/pyproject.toml new file mode 100644 index 0000000000..af5def3061 --- /dev/null +++ b/api/providers/vdb/vdb-analyticdb/pyproject.toml @@ -0,0 +1,15 @@ +[project] +name = "dify-vdb-analyticdb" +version = "0.0.1" +dependencies = [ + "alibabacloud_gpdb20160503~=5.2.0", + "alibabacloud_tea_openapi~=0.4.3", + "clickhouse-connect~=0.15.0", +] +description = "Dify vector store backend (dify-vdb-analyticdb)." + +[project.entry-points."dify.vector_backends"] +analyticdb = "dify_vdb_analyticdb.analyticdb_vector:AnalyticdbVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/analyticdb/__init__.py b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/analyticdb/__init__.py rename to api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/__init__.py diff --git a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector.py similarity index 93% rename from api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py rename to api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector.py index ddb549ba9d..e56bb74ba3 100644 --- a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py +++ b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector.py @@ -2,16 +2,16 @@ import json from typing import Any from configs import dify_config -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_openapi import ( - AnalyticdbVectorOpenAPI, - AnalyticdbVectorOpenAPIConfig, -) -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_sql import AnalyticdbVectorBySql, AnalyticdbVectorBySqlConfig from core.rag.datasource.vdb.vector_base import BaseVector from core.rag.datasource.vdb.vector_factory import AbstractVectorFactory from core.rag.datasource.vdb.vector_type import VectorType from core.rag.embedding.embedding_base import Embeddings from core.rag.models.document import Document +from dify_vdb_analyticdb.analyticdb_vector_openapi import ( + AnalyticdbVectorOpenAPI, + AnalyticdbVectorOpenAPIConfig, +) +from dify_vdb_analyticdb.analyticdb_vector_sql import AnalyticdbVectorBySql, AnalyticdbVectorBySqlConfig from models.dataset import Dataset @@ -37,11 +37,12 @@ class AnalyticdbVector(BaseVector): def create(self, texts: list[Document], embeddings: list[list[float]], **kwargs): dimension = len(embeddings[0]) - self.analyticdb_vector._create_collection_if_not_exists(dimension) + self.analyticdb_vector.create_collection_if_not_exists(dimension) self.analyticdb_vector.add_texts(texts, embeddings) - def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs): + def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs) -> list[str]: self.analyticdb_vector.add_texts(documents, embeddings) + return [] def text_exists(self, id: str) -> bool: return self.analyticdb_vector.text_exists(id) diff --git a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector_openapi.py b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py similarity index 99% rename from api/core/rag/datasource/vdb/analyticdb/analyticdb_vector_openapi.py rename to api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py index fb6eaa370a..f13d9c0817 100644 --- a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector_openapi.py +++ b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py @@ -34,7 +34,7 @@ class AnalyticdbVectorOpenAPIConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["access_key_id"]: raise ValueError("config ANALYTICDB_KEY_ID is required") if not values["access_key_secret"]: @@ -123,7 +123,7 @@ class AnalyticdbVectorOpenAPI: else: raise ValueError(f"failed to create namespace {self.config.namespace}: {e}") - def _create_collection_if_not_exists(self, embedding_dimension: int): + def create_collection_if_not_exists(self, embedding_dimension: int): from alibabacloud_gpdb20160503 import models as gpdb_20160503_models from Tea.exceptions import TeaException diff --git a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector_sql.py b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py similarity index 98% rename from api/core/rag/datasource/vdb/analyticdb/analyticdb_vector_sql.py rename to api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py index 12126f32d6..b2908ebdae 100644 --- a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector_sql.py +++ b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py @@ -1,5 +1,6 @@ import json import uuid +from collections.abc import Iterator from contextlib import contextmanager from typing import Any @@ -23,7 +24,7 @@ class AnalyticdbVectorBySqlConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config ANALYTICDB_HOST is required") if not values["port"]: @@ -74,7 +75,7 @@ class AnalyticdbVectorBySql: ) @contextmanager - def _get_cursor(self): + def _get_cursor(self) -> Iterator[Any]: assert self.pool is not None, "Connection pool is not initialized" conn = self.pool.getconn() cur = conn.cursor() @@ -130,7 +131,7 @@ class AnalyticdbVectorBySql: ) cur.execute(f"CREATE SCHEMA IF NOT EXISTS {self.config.namespace}") - def _create_collection_if_not_exists(self, embedding_dimension: int): + def create_collection_if_not_exists(self, embedding_dimension: int): cache_key = f"vector_indexing_{self._collection_name}" lock_name = f"{cache_key}_lock" with redis_client.lock(lock_name, timeout=20): diff --git a/api/tests/integration_tests/vdb/analyticdb/test_analyticdb.py b/api/providers/vdb/vdb-analyticdb/tests/integration_tests/test_analyticdb.py similarity index 79% rename from api/tests/integration_tests/vdb/analyticdb/test_analyticdb.py rename to api/providers/vdb/vdb-analyticdb/tests/integration_tests/test_analyticdb.py index 0981523809..2bb413dcc1 100644 --- a/api/tests/integration_tests/vdb/analyticdb/test_analyticdb.py +++ b/api/providers/vdb/vdb-analyticdb/tests/integration_tests/test_analyticdb.py @@ -1,9 +1,8 @@ -from core.rag.datasource.vdb.analyticdb.analyticdb_vector import AnalyticdbVector -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_openapi import AnalyticdbVectorOpenAPIConfig -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_sql import AnalyticdbVectorBySqlConfig -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest +from dify_vdb_analyticdb.analyticdb_vector import AnalyticdbVector +from dify_vdb_analyticdb.analyticdb_vector_openapi import AnalyticdbVectorOpenAPIConfig +from dify_vdb_analyticdb.analyticdb_vector_sql import AnalyticdbVectorBySqlConfig -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest class AnalyticdbVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector.py b/api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector.py similarity index 92% rename from api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector.py rename to api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector.py index 545565cdf4..d1d471761d 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector.py +++ b/api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector.py @@ -1,12 +1,12 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch +import dify_vdb_analyticdb.analyticdb_vector as analyticdb_module import pytest +from dify_vdb_analyticdb.analyticdb_vector import AnalyticdbVector, AnalyticdbVectorFactory +from dify_vdb_analyticdb.analyticdb_vector_openapi import AnalyticdbVectorOpenAPIConfig +from dify_vdb_analyticdb.analyticdb_vector_sql import AnalyticdbVectorBySqlConfig -import core.rag.datasource.vdb.analyticdb.analyticdb_vector as analyticdb_module -from core.rag.datasource.vdb.analyticdb.analyticdb_vector import AnalyticdbVector, AnalyticdbVectorFactory -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_openapi import AnalyticdbVectorOpenAPIConfig -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_sql import AnalyticdbVectorBySqlConfig from core.rag.models.document import Document @@ -71,7 +71,7 @@ def test_vector_methods_delegate_to_underlying_implementation(): assert vector.search_by_full_text("hello", top_k=2) == runner.search_by_full_text.return_value vector.delete() - runner._create_collection_if_not_exists.assert_called_once_with(2) + runner.create_collection_if_not_exists.assert_called_once_with(2) runner.add_texts.assert_any_call(texts, [[0.1, 0.2]]) runner.delete_by_ids.assert_called_once_with(["d1"]) runner.delete_by_metadata_field.assert_called_once_with("document_id", "doc-1") diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector_openapi.py b/api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector_openapi.py similarity index 97% rename from api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector_openapi.py rename to api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector_openapi.py index 45777774d0..d2d735ae3e 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector_openapi.py +++ b/api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector_openapi.py @@ -4,13 +4,13 @@ import types from types import SimpleNamespace from unittest.mock import MagicMock +import dify_vdb_analyticdb.analyticdb_vector_openapi as openapi_module import pytest - -import core.rag.datasource.vdb.analyticdb.analyticdb_vector_openapi as openapi_module -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_openapi import ( +from dify_vdb_analyticdb.analyticdb_vector_openapi import ( AnalyticdbVectorOpenAPI, AnalyticdbVectorOpenAPIConfig, ) + from core.rag.models.document import Document @@ -249,7 +249,7 @@ def test_create_collection_if_not_exists_creates_when_missing(monkeypatch): vector._client = MagicMock() vector._client.describe_collection.side_effect = stubs.TeaException(statusCode=404) - vector._create_collection_if_not_exists(embedding_dimension=1024) + vector.create_collection_if_not_exists(embedding_dimension=1024) vector._client.create_collection.assert_called_once() openapi_module.redis_client.set.assert_called_once() @@ -268,7 +268,7 @@ def test_create_collection_if_not_exists_skips_when_cached(monkeypatch): vector.config = _config() vector._client = MagicMock() - vector._create_collection_if_not_exists(embedding_dimension=1024) + vector.create_collection_if_not_exists(embedding_dimension=1024) vector._client.describe_collection.assert_not_called() vector._client.create_collection.assert_not_called() @@ -290,7 +290,7 @@ def test_create_collection_if_not_exists_raises_on_non_404_errors(monkeypatch): vector._client.describe_collection.side_effect = stubs.TeaException(statusCode=500) with pytest.raises(ValueError, match="failed to create collection collection_1"): - vector._create_collection_if_not_exists(embedding_dimension=512) + vector.create_collection_if_not_exists(embedding_dimension=512) def test_openapi_add_delete_and_search_methods(monkeypatch): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector_sql.py b/api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector_sql.py similarity index 98% rename from api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector_sql.py rename to api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector_sql.py index 8f1206696b..49a2ae72d0 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/analyticdb/test_analyticdb_vector_sql.py +++ b/api/providers/vdb/vdb-analyticdb/tests/unit_tests/test_analyticdb_vector_sql.py @@ -2,14 +2,14 @@ from contextlib import contextmanager from types import SimpleNamespace from unittest.mock import MagicMock +import dify_vdb_analyticdb.analyticdb_vector_sql as sql_module import psycopg2.errors import pytest - -import core.rag.datasource.vdb.analyticdb.analyticdb_vector_sql as sql_module -from core.rag.datasource.vdb.analyticdb.analyticdb_vector_sql import ( +from dify_vdb_analyticdb.analyticdb_vector_sql import ( AnalyticdbVectorBySql, AnalyticdbVectorBySqlConfig, ) + from core.rag.models.document import Document @@ -374,7 +374,7 @@ def test_create_collection_if_not_exists_creates_table_indexes_and_cache(monkeyp vector._get_cursor = _cursor_context - vector._create_collection_if_not_exists(embedding_dimension=3) + vector.create_collection_if_not_exists(embedding_dimension=3) assert any("CREATE TABLE IF NOT EXISTS dify.collection" in call.args[0] for call in cursor.execute.call_args_list) assert any("CREATE INDEX collection_embedding_idx" in call.args[0] for call in cursor.execute.call_args_list) @@ -404,7 +404,7 @@ def test_create_collection_if_not_exists_raises_for_non_existing_error(monkeypat vector._get_cursor = _cursor_context with pytest.raises(RuntimeError, match="permission denied"): - vector._create_collection_if_not_exists(embedding_dimension=3) + vector.create_collection_if_not_exists(embedding_dimension=3) def test_delete_methods_raise_when_error_is_not_missing_table(): diff --git a/api/providers/vdb/vdb-baidu/pyproject.toml b/api/providers/vdb/vdb-baidu/pyproject.toml new file mode 100644 index 0000000000..bacff08793 --- /dev/null +++ b/api/providers/vdb/vdb-baidu/pyproject.toml @@ -0,0 +1,13 @@ +[project] +name = "dify-vdb-baidu" +version = "0.0.1" +dependencies = [ + "pymochow==2.4.0", +] +description = "Dify vector store backend (dify-vdb-baidu)." + +[project.entry-points."dify.vector_backends"] +baidu = "dify_vdb_baidu.baidu_vector:BaiduVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/baidu/__init__.py b/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/baidu/__init__.py rename to api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/__init__.py diff --git a/api/core/rag/datasource/vdb/baidu/baidu_vector.py b/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/baidu/baidu_vector.py rename to api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py index 99ab0d82f2..bdd5a42c87 100644 --- a/api/core/rag/datasource/vdb/baidu/baidu_vector.py +++ b/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py @@ -59,7 +59,7 @@ class BaiduConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["endpoint"]: raise ValueError("config BAIDU_VECTOR_DB_ENDPOINT is required") if not values["account"]: diff --git a/api/tests/integration_tests/vdb/__mock/baiduvectordb.py b/api/providers/vdb/vdb-baidu/tests/integration_tests/conftest.py similarity index 100% rename from api/tests/integration_tests/vdb/__mock/baiduvectordb.py rename to api/providers/vdb/vdb-baidu/tests/integration_tests/conftest.py diff --git a/api/tests/integration_tests/vdb/baidu/test_baidu.py b/api/providers/vdb/vdb-baidu/tests/integration_tests/test_baidu.py similarity index 73% rename from api/tests/integration_tests/vdb/baidu/test_baidu.py rename to api/providers/vdb/vdb-baidu/tests/integration_tests/test_baidu.py index 716f88af67..2c1d0e3554 100644 --- a/api/tests/integration_tests/vdb/baidu/test_baidu.py +++ b/api/providers/vdb/vdb-baidu/tests/integration_tests/test_baidu.py @@ -1,10 +1,6 @@ -from core.rag.datasource.vdb.baidu.baidu_vector import BaiduConfig, BaiduVector -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text +from dify_vdb_baidu.baidu_vector import BaiduConfig, BaiduVector -pytest_plugins = ( - "tests.integration_tests.vdb.test_vector_store", - "tests.integration_tests.vdb.__mock.baiduvectordb", -) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text class BaiduVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/baidu/test_baidu_vector.py b/api/providers/vdb/vdb-baidu/tests/unit_tests/test_baidu_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/baidu/test_baidu_vector.py rename to api/providers/vdb/vdb-baidu/tests/unit_tests/test_baidu_vector.py index 487d021697..851c09f47a 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/baidu/test_baidu_vector.py +++ b/api/providers/vdb/vdb-baidu/tests/unit_tests/test_baidu_vector.py @@ -124,7 +124,7 @@ def _build_fake_pymochow_modules(): def baidu_module(monkeypatch): for name, module in _build_fake_pymochow_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.baidu.baidu_vector as module + import dify_vdb_baidu.baidu_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-chroma/pyproject.toml b/api/providers/vdb/vdb-chroma/pyproject.toml new file mode 100644 index 0000000000..b37ee2a588 --- /dev/null +++ b/api/providers/vdb/vdb-chroma/pyproject.toml @@ -0,0 +1,13 @@ +[project] +name = "dify-vdb-chroma" +version = "0.0.1" +dependencies = [ + "chromadb==0.5.20", +] +description = "Dify vector store backend (dify-vdb-chroma)." + +[project.entry-points."dify.vector_backends"] +chroma = "dify_vdb_chroma.chroma_vector:ChromaVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/chroma/__init__.py b/api/providers/vdb/vdb-chroma/src/dify_vdb_chroma/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/chroma/__init__.py rename to api/providers/vdb/vdb-chroma/src/dify_vdb_chroma/__init__.py diff --git a/api/core/rag/datasource/vdb/chroma/chroma_vector.py b/api/providers/vdb/vdb-chroma/src/dify_vdb_chroma/chroma_vector.py similarity index 94% rename from api/core/rag/datasource/vdb/chroma/chroma_vector.py rename to api/providers/vdb/vdb-chroma/src/dify_vdb_chroma/chroma_vector.py index 73787c2f00..5b0cfbea15 100644 --- a/api/core/rag/datasource/vdb/chroma/chroma_vector.py +++ b/api/providers/vdb/vdb-chroma/src/dify_vdb_chroma/chroma_vector.py @@ -2,7 +2,7 @@ import json from typing import Any, TypedDict import chromadb -from chromadb import QueryResult, Settings +from chromadb import QueryResult, Settings # pyright: ignore[reportPrivateImportUsage] from pydantic import BaseModel from configs import dify_config @@ -106,14 +106,15 @@ class ChromaVector(BaseVector): def search_by_vector(self, query_vector: list[float], **kwargs: Any) -> list[Document]: collection = self._client.get_or_create_collection(self._collection_name) document_ids_filter = kwargs.get("document_ids_filter") + results: QueryResult if document_ids_filter: - results: QueryResult = collection.query( + results = collection.query( query_embeddings=query_vector, n_results=kwargs.get("top_k", 4), where={"document_id": {"$in": document_ids_filter}}, # type: ignore ) else: - results: QueryResult = collection.query(query_embeddings=query_vector, n_results=kwargs.get("top_k", 4)) # type: ignore + results = collection.query(query_embeddings=query_vector, n_results=kwargs.get("top_k", 4)) # type: ignore score_threshold = float(kwargs.get("score_threshold") or 0.0) # Check if results contain data @@ -165,8 +166,8 @@ class ChromaVectorFactory(AbstractVectorFactory): config=ChromaConfig( host=dify_config.CHROMA_HOST or "", port=dify_config.CHROMA_PORT, - tenant=dify_config.CHROMA_TENANT or chromadb.DEFAULT_TENANT, - database=dify_config.CHROMA_DATABASE or chromadb.DEFAULT_DATABASE, + tenant=dify_config.CHROMA_TENANT or chromadb.DEFAULT_TENANT, # pyright: ignore[reportPrivateImportUsage] + database=dify_config.CHROMA_DATABASE or chromadb.DEFAULT_DATABASE, # pyright: ignore[reportPrivateImportUsage] auth_provider=dify_config.CHROMA_AUTH_PROVIDER, auth_credentials=dify_config.CHROMA_AUTH_CREDENTIALS, ), diff --git a/api/tests/integration_tests/vdb/chroma/test_chroma.py b/api/providers/vdb/vdb-chroma/tests/integration_tests/test_chroma.py similarity index 80% rename from api/tests/integration_tests/vdb/chroma/test_chroma.py rename to api/providers/vdb/vdb-chroma/tests/integration_tests/test_chroma.py index 52beba9979..87c259f3d0 100644 --- a/api/tests/integration_tests/vdb/chroma/test_chroma.py +++ b/api/providers/vdb/vdb-chroma/tests/integration_tests/test_chroma.py @@ -1,13 +1,11 @@ import chromadb +from dify_vdb_chroma.chroma_vector import ChromaConfig, ChromaVector -from core.rag.datasource.vdb.chroma.chroma_vector import ChromaConfig, ChromaVector -from tests.integration_tests.vdb.test_vector_store import ( +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, get_example_text, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class ChromaVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/chroma/test_chroma_vector.py b/api/providers/vdb/vdb-chroma/tests/unit_tests/test_chroma_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/chroma/test_chroma_vector.py rename to api/providers/vdb/vdb-chroma/tests/unit_tests/test_chroma_vector.py index 44427b7d87..b209c9df96 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/chroma/test_chroma_vector.py +++ b/api/providers/vdb/vdb-chroma/tests/unit_tests/test_chroma_vector.py @@ -47,7 +47,7 @@ def _build_fake_chroma_modules(): def chroma_module(monkeypatch): fake_chroma = _build_fake_chroma_modules() monkeypatch.setitem(sys.modules, "chromadb", fake_chroma) - import core.rag.datasource.vdb.chroma.chroma_vector as module + import dify_vdb_chroma.chroma_vector as module return importlib.reload(module) diff --git a/api/core/rag/datasource/vdb/clickzetta/README.md b/api/providers/vdb/vdb-clickzetta/README.md similarity index 99% rename from api/core/rag/datasource/vdb/clickzetta/README.md rename to api/providers/vdb/vdb-clickzetta/README.md index 969d4e40a0..faa76707ce 100644 --- a/api/core/rag/datasource/vdb/clickzetta/README.md +++ b/api/providers/vdb/vdb-clickzetta/README.md @@ -198,4 +198,4 @@ Clickzetta supports advanced full-text search with multiple analyzers: - [Clickzetta Vector Search Documentation](https://yunqi.tech/documents/vector-search) - [Clickzetta Inverted Index Documentation](https://yunqi.tech/documents/inverted-index) -- [Clickzetta SQL Functions](https://yunqi.tech/documents/sql-reference) +- [Clickzetta SQL Functions](https://yunqi.tech/documents/sql-reference) \ No newline at end of file diff --git a/api/providers/vdb/vdb-clickzetta/pyproject.toml b/api/providers/vdb/vdb-clickzetta/pyproject.toml new file mode 100644 index 0000000000..aea94fdb2a --- /dev/null +++ b/api/providers/vdb/vdb-clickzetta/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-clickzetta" +version = "0.0.1" + +dependencies = [ + "clickzetta-connector-python>=0.8.102", +] +description = "Dify vector store backend (dify-vdb-clickzetta)." + +[project.entry-points."dify.vector_backends"] +clickzetta = "dify_vdb_clickzetta.clickzetta_vector:ClickzettaVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/clickzetta/__init__.py b/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/clickzetta/__init__.py rename to api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/__init__.py diff --git a/api/core/rag/datasource/vdb/clickzetta/clickzetta_vector.py b/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/clickzetta/clickzetta_vector.py rename to api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py index a4dddc68f0..72b8c5e9eb 100644 --- a/api/core/rag/datasource/vdb/clickzetta/clickzetta_vector.py +++ b/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py @@ -51,7 +51,7 @@ class ClickzettaConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): """ Validate the configuration values. """ diff --git a/api/tests/integration_tests/vdb/clickzetta/README.md b/api/providers/vdb/vdb-clickzetta/tests/README.md similarity index 100% rename from api/tests/integration_tests/vdb/clickzetta/README.md rename to api/providers/vdb/vdb-clickzetta/tests/README.md diff --git a/api/tests/integration_tests/vdb/clickzetta/test_clickzetta.py b/api/providers/vdb/vdb-clickzetta/tests/integration_tests/test_clickzetta.py similarity index 92% rename from api/tests/integration_tests/vdb/clickzetta/test_clickzetta.py rename to api/providers/vdb/vdb-clickzetta/tests/integration_tests/test_clickzetta.py index 21de8be6e3..1c6819f9f1 100644 --- a/api/tests/integration_tests/vdb/clickzetta/test_clickzetta.py +++ b/api/providers/vdb/vdb-clickzetta/tests/integration_tests/test_clickzetta.py @@ -2,10 +2,10 @@ import contextlib import os import pytest +from dify_vdb_clickzetta.clickzetta_vector import ClickzettaConfig, ClickzettaVector -from core.rag.datasource.vdb.clickzetta.clickzetta_vector import ClickzettaConfig, ClickzettaVector +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text from core.rag.models.document import Document -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text, setup_mock_redis class TestClickzettaVector(AbstractVectorTest): @@ -14,9 +14,8 @@ class TestClickzettaVector(AbstractVectorTest): """ @pytest.fixture - def vector_store(self): + def vector_store(self, setup_mock_redis): """Create a Clickzetta vector store instance for testing.""" - # Skip test if Clickzetta credentials are not configured if not os.getenv("CLICKZETTA_USERNAME"): pytest.skip("CLICKZETTA_USERNAME is not configured") if not os.getenv("CLICKZETTA_PASSWORD"): @@ -32,21 +31,19 @@ class TestClickzettaVector(AbstractVectorTest): workspace=os.getenv("CLICKZETTA_WORKSPACE", "quick_start"), vcluster=os.getenv("CLICKZETTA_VCLUSTER", "default_ap"), schema=os.getenv("CLICKZETTA_SCHEMA", "dify_test"), - batch_size=10, # Small batch size for testing + batch_size=10, enable_inverted_index=True, analyzer_type="chinese", analyzer_mode="smart", vector_distance_function="cosine_distance", ) - with setup_mock_redis(): - vector = ClickzettaVector(collection_name="test_collection_" + str(os.getpid()), config=config) + vector = ClickzettaVector(collection_name="test_collection_" + str(os.getpid()), config=config) - yield vector + yield vector - # Cleanup: delete the test collection - with contextlib.suppress(Exception): - vector.delete() + with contextlib.suppress(Exception): + vector.delete() def test_clickzetta_vector_basic_operations(self, vector_store): """Test basic CRUD operations on Clickzetta vector store.""" diff --git a/api/tests/integration_tests/vdb/clickzetta/test_docker_integration.py b/api/providers/vdb/vdb-clickzetta/tests/integration_tests/test_docker_integration.py similarity index 55% rename from api/tests/integration_tests/vdb/clickzetta/test_docker_integration.py rename to api/providers/vdb/vdb-clickzetta/tests/integration_tests/test_docker_integration.py index 60e3f30f26..a5d32f5e81 100644 --- a/api/tests/integration_tests/vdb/clickzetta/test_docker_integration.py +++ b/api/providers/vdb/vdb-clickzetta/tests/integration_tests/test_docker_integration.py @@ -3,16 +3,19 @@ Test Clickzetta integration in Docker environment """ +import logging import os import time import httpx from clickzetta import connect +logger = logging.getLogger(__name__) + def test_clickzetta_connection(): """Test direct connection to Clickzetta""" - print("=== Testing direct Clickzetta connection ===") + logger.info("=== Testing direct Clickzetta connection ===") try: conn = connect( username=os.getenv("CLICKZETTA_USERNAME", "test_user"), @@ -25,100 +28,93 @@ def test_clickzetta_connection(): ) with conn.cursor() as cursor: - # Test basic connectivity cursor.execute("SELECT 1 as test") result = cursor.fetchone() - print(f"✓ Connection test: {result}") + logger.info("✓ Connection test: %s", result) - # Check if our test table exists cursor.execute("SHOW TABLES IN dify") tables = cursor.fetchall() - print(f"✓ Existing tables: {[t[1] for t in tables if t[0] == 'dify']}") + logger.info("✓ Existing tables: %s", [t[1] for t in tables if t[0] == "dify"]) - # Check if test collection exists test_collection = "collection_test_dataset" if test_collection in [t[1] for t in tables if t[0] == "dify"]: cursor.execute(f"DESCRIBE dify.{test_collection}") columns = cursor.fetchall() - print(f"✓ Table structure for {test_collection}:") + logger.info("✓ Table structure for %s:", test_collection) for col in columns: - print(f" - {col[0]}: {col[1]}") + logger.info(" - %s: %s", col[0], col[1]) - # Check for indexes cursor.execute(f"SHOW INDEXES IN dify.{test_collection}") indexes = cursor.fetchall() - print(f"✓ Indexes on {test_collection}:") + logger.info("✓ Indexes on %s:", test_collection) for idx in indexes: - print(f" - {idx}") + logger.info(" - %s", idx) return True - except Exception as e: - print(f"✗ Connection test failed: {e}") + except Exception: + logger.exception("✗ Connection test failed") return False def test_dify_api(): """Test Dify API with Clickzetta backend""" - print("\n=== Testing Dify API ===") + logger.info("\n=== Testing Dify API ===") base_url = "http://localhost:5001" - # Wait for API to be ready max_retries = 30 for i in range(max_retries): try: response = httpx.get(f"{base_url}/console/api/health") if response.status_code == 200: - print("✓ Dify API is ready") + logger.info("✓ Dify API is ready") break except: if i == max_retries - 1: - print("✗ Dify API is not responding") + logger.exception("✗ Dify API is not responding") return False time.sleep(2) - # Check vector store configuration try: - # This is a simplified check - in production, you'd use proper auth - print("✓ Dify is configured to use Clickzetta as vector store") + logger.info("✓ Dify is configured to use Clickzetta as vector store") return True - except Exception as e: - print(f"✗ API test failed: {e}") + except Exception: + logger.exception("✗ API test failed") return False def verify_table_structure(): """Verify the table structure meets Dify requirements""" - print("\n=== Verifying Table Structure ===") + logger.info("\n=== Verifying Table Structure ===") expected_columns = { "id": "VARCHAR", "page_content": "VARCHAR", - "metadata": "VARCHAR", # JSON stored as VARCHAR in Clickzetta + "metadata": "VARCHAR", "vector": "ARRAY", } expected_metadata_fields = ["doc_id", "doc_hash", "document_id", "dataset_id"] - print("✓ Expected table structure:") + logger.info("✓ Expected table structure:") for col, dtype in expected_columns.items(): - print(f" - {col}: {dtype}") + logger.info(" - %s: %s", col, dtype) - print("\n✓ Required metadata fields:") + logger.info("\n✓ Required metadata fields:") for field in expected_metadata_fields: - print(f" - {field}") + logger.info(" - %s", field) - print("\n✓ Index requirements:") - print(" - Vector index (HNSW) on 'vector' column") - print(" - Full-text index on 'page_content' (optional)") - print(" - Functional index on metadata->>'$.doc_id' (recommended)") - print(" - Functional index on metadata->>'$.document_id' (recommended)") + logger.info("\n✓ Index requirements:") + logger.info(" - Vector index (HNSW) on 'vector' column") + logger.info(" - Full-text index on 'page_content' (optional)") + logger.info(" - Functional index on metadata->>'$.doc_id' (recommended)") + logger.info(" - Functional index on metadata->>'$.document_id' (recommended)") return True def main(): """Run all tests""" - print("Starting Clickzetta integration tests for Dify Docker\n") + logger.info("Starting Clickzetta integration tests for Dify Docker\n") tests = [ ("Direct Clickzetta Connection", test_clickzetta_connection), @@ -131,33 +127,34 @@ def main(): try: success = test_func() results.append((test_name, success)) - except Exception as e: - print(f"\n✗ {test_name} crashed: {e}") + except Exception: + logger.exception("\n✗ %s crashed", test_name) results.append((test_name, False)) - # Summary - print("\n" + "=" * 50) - print("Test Summary:") - print("=" * 50) + logger.info("\n%s", "=" * 50) + logger.info("Test Summary:") + logger.info("=" * 50) passed = sum(1 for _, success in results if success) total = len(results) for test_name, success in results: status = "✅ PASSED" if success else "❌ FAILED" - print(f"{test_name}: {status}") + logger.info("%s: %s", test_name, status) - print(f"\nTotal: {passed}/{total} tests passed") + logger.info("\nTotal: %s/%s tests passed", passed, total) if passed == total: - print("\n🎉 All tests passed! Clickzetta is ready for Dify Docker deployment.") - print("\nNext steps:") - print("1. Run: cd docker && docker-compose -f docker-compose.yaml -f docker-compose.clickzetta.yaml up -d") - print("2. Access Dify at http://localhost:3000") - print("3. Create a dataset and test vector storage with Clickzetta") + logger.info("\n🎉 All tests passed! Clickzetta is ready for Dify Docker deployment.") + logger.info("\nNext steps:") + logger.info( + "1. Run: cd docker && docker-compose -f docker-compose.yaml -f docker-compose.clickzetta.yaml up -d" + ) + logger.info("2. Access Dify at http://localhost:3000") + logger.info("3. Create a dataset and test vector storage with Clickzetta") return 0 else: - print("\n⚠️ Some tests failed. Please check the errors above.") + logger.error("\n⚠️ Some tests failed. Please check the errors above.") return 1 diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/clickzetta/test_clickzetta_vector.py b/api/providers/vdb/vdb-clickzetta/tests/unit_tests/test_clickzetta_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/clickzetta/test_clickzetta_vector.py rename to api/providers/vdb/vdb-clickzetta/tests/unit_tests/test_clickzetta_vector.py index 0ce5c04dd6..a7473f1b91 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/clickzetta/test_clickzetta_vector.py +++ b/api/providers/vdb/vdb-clickzetta/tests/unit_tests/test_clickzetta_vector.py @@ -47,7 +47,7 @@ def _build_fake_clickzetta_module(): @pytest.fixture def clickzetta_module(monkeypatch): monkeypatch.setitem(sys.modules, "clickzetta", _build_fake_clickzetta_module()) - import core.rag.datasource.vdb.clickzetta.clickzetta_vector as module + import dify_vdb_clickzetta.clickzetta_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-couchbase/pyproject.toml b/api/providers/vdb/vdb-couchbase/pyproject.toml new file mode 100644 index 0000000000..6bc348b2eb --- /dev/null +++ b/api/providers/vdb/vdb-couchbase/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-couchbase" +version = "0.0.1" + +dependencies = [ + "couchbase~=4.6.0", +] +description = "Dify vector store backend (dify-vdb-couchbase)." + +[project.entry-points."dify.vector_backends"] +couchbase = "dify_vdb_couchbase.couchbase_vector:CouchbaseVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/couchbase/__init__.py b/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/couchbase/__init__.py rename to api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/__init__.py diff --git a/api/core/rag/datasource/vdb/couchbase/couchbase_vector.py b/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/couchbase/couchbase_vector.py rename to api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py index 9a4a65cf6f..815ac30c0b 100644 --- a/api/core/rag/datasource/vdb/couchbase/couchbase_vector.py +++ b/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py @@ -36,7 +36,7 @@ class CouchbaseConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("connection_string"): raise ValueError("config COUCHBASE_CONNECTION_STRING is required") if not values.get("user"): diff --git a/api/tests/integration_tests/vdb/couchbase/test_couchbase.py b/api/providers/vdb/vdb-couchbase/tests/integration_tests/test_couchbase.py similarity index 80% rename from api/tests/integration_tests/vdb/couchbase/test_couchbase.py rename to api/providers/vdb/vdb-couchbase/tests/integration_tests/test_couchbase.py index 0371f04233..918dae328f 100644 --- a/api/tests/integration_tests/vdb/couchbase/test_couchbase.py +++ b/api/providers/vdb/vdb-couchbase/tests/integration_tests/test_couchbase.py @@ -1,12 +1,14 @@ +import logging import subprocess import time -from core.rag.datasource.vdb.couchbase.couchbase_vector import CouchbaseConfig, CouchbaseVector -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_couchbase.couchbase_vector import CouchbaseConfig, CouchbaseVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +logger = logging.getLogger(__name__) def wait_for_healthy_container(service_name="couchbase-server", timeout=300): @@ -16,10 +18,10 @@ def wait_for_healthy_container(service_name="couchbase-server", timeout=300): ["docker", "inspect", "--format", "{{.State.Health.Status}}", service_name], capture_output=True, text=True ) if result.stdout.strip() == "healthy": - print(f"{service_name} is healthy!") + logger.info("%s is healthy!", service_name) return True else: - print(f"Waiting for {service_name} to be healthy...") + logger.info("Waiting for %s to be healthy...", service_name) time.sleep(10) raise TimeoutError(f"{service_name} did not become healthy in time") diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/couchbase/test_couchbase_vector.py b/api/providers/vdb/vdb-couchbase/tests/unit_tests/test_couchbase_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/couchbase/test_couchbase_vector.py rename to api/providers/vdb/vdb-couchbase/tests/unit_tests/test_couchbase_vector.py index 9fea187615..7e5c40b8f2 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/couchbase/test_couchbase_vector.py +++ b/api/providers/vdb/vdb-couchbase/tests/unit_tests/test_couchbase_vector.py @@ -154,7 +154,7 @@ def couchbase_module(monkeypatch): for name, module in _build_fake_couchbase_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.couchbase.couchbase_vector as module + import dify_vdb_couchbase.couchbase_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-elasticsearch/pyproject.toml b/api/providers/vdb/vdb-elasticsearch/pyproject.toml new file mode 100644 index 0000000000..d40908f92d --- /dev/null +++ b/api/providers/vdb/vdb-elasticsearch/pyproject.toml @@ -0,0 +1,15 @@ +[project] +name = "dify-vdb-elasticsearch" +version = "0.0.1" + +dependencies = [ + "elasticsearch==8.14.0", +] +description = "Dify vector store backend (dify-vdb-elasticsearch)." + +[project.entry-points."dify.vector_backends"] +elasticsearch = "dify_vdb_elasticsearch.elasticsearch_vector:ElasticSearchVectorFactory" +elasticsearch-ja = "dify_vdb_elasticsearch.elasticsearch_ja_vector:ElasticSearchJaVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/elasticsearch/__init__.py b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/elasticsearch/__init__.py rename to api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/__init__.py diff --git a/api/core/rag/datasource/vdb/elasticsearch/elasticsearch_ja_vector.py b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py similarity index 97% rename from api/core/rag/datasource/vdb/elasticsearch/elasticsearch_ja_vector.py rename to api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py index 1e7fe52666..e2f390402a 100644 --- a/api/core/rag/datasource/vdb/elasticsearch/elasticsearch_ja_vector.py +++ b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py @@ -4,14 +4,14 @@ from typing import Any from flask import current_app -from core.rag.datasource.vdb.elasticsearch.elasticsearch_vector import ( +from core.rag.datasource.vdb.field import Field +from core.rag.datasource.vdb.vector_type import VectorType +from core.rag.embedding.embedding_base import Embeddings +from dify_vdb_elasticsearch.elasticsearch_vector import ( ElasticSearchConfig, ElasticSearchVector, ElasticSearchVectorFactory, ) -from core.rag.datasource.vdb.field import Field -from core.rag.datasource.vdb.vector_type import VectorType -from core.rag.embedding.embedding_base import Embeddings from extensions.ext_redis import redis_client from models.dataset import Dataset @@ -23,7 +23,7 @@ class ElasticSearchJaVector(ElasticSearchVector): self, embeddings: list[list[float]], metadatas: list[dict[Any, Any]] | None = None, - index_params: dict | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/core/rag/datasource/vdb/elasticsearch/elasticsearch_vector.py b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/elasticsearch/elasticsearch_vector.py rename to api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py index 1470713b88..11463b6c58 100644 --- a/api/core/rag/datasource/vdb/elasticsearch/elasticsearch_vector.py +++ b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py @@ -43,7 +43,7 @@ class ElasticSearchConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): use_cloud = values.get("use_cloud", False) cloud_url = values.get("cloud_url") @@ -258,7 +258,7 @@ class ElasticSearchVector(BaseVector): self, embeddings: list[list[float]], metadatas: list[dict[Any, Any]] | None = None, - index_params: dict | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/tests/integration_tests/vdb/elasticsearch/test_elasticsearch.py b/api/providers/vdb/vdb-elasticsearch/tests/integration_tests/test_elasticsearch.py similarity index 71% rename from api/tests/integration_tests/vdb/elasticsearch/test_elasticsearch.py rename to api/providers/vdb/vdb-elasticsearch/tests/integration_tests/test_elasticsearch.py index 970d2cce1a..c8b679e021 100644 --- a/api/tests/integration_tests/vdb/elasticsearch/test_elasticsearch.py +++ b/api/providers/vdb/vdb-elasticsearch/tests/integration_tests/test_elasticsearch.py @@ -1,10 +1,9 @@ -from core.rag.datasource.vdb.elasticsearch.elasticsearch_vector import ElasticSearchConfig, ElasticSearchVector -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_elasticsearch.elasticsearch_vector import ElasticSearchConfig, ElasticSearchVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class ElasticSearchVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/elasticsearch/test_elasticsearch_ja_vector.py b/api/providers/vdb/vdb-elasticsearch/tests/unit_tests/test_elasticsearch_ja_vector.py similarity index 96% rename from api/tests/unit_tests/core/rag/datasource/vdb/elasticsearch/test_elasticsearch_ja_vector.py rename to api/providers/vdb/vdb-elasticsearch/tests/unit_tests/test_elasticsearch_ja_vector.py index edd29a4649..f81ed6beea 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/elasticsearch/test_elasticsearch_ja_vector.py +++ b/api/providers/vdb/vdb-elasticsearch/tests/unit_tests/test_elasticsearch_ja_vector.py @@ -32,8 +32,8 @@ def elasticsearch_ja_module(monkeypatch): for name, module in _build_fake_elasticsearch_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.elasticsearch.elasticsearch_ja_vector as ja_module - import core.rag.datasource.vdb.elasticsearch.elasticsearch_vector as base_module + import dify_vdb_elasticsearch.elasticsearch_ja_vector as ja_module + import dify_vdb_elasticsearch.elasticsearch_vector as base_module importlib.reload(base_module) return importlib.reload(ja_module) diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/elasticsearch/test_elasticsearch_vector.py b/api/providers/vdb/vdb-elasticsearch/tests/unit_tests/test_elasticsearch_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/elasticsearch/test_elasticsearch_vector.py rename to api/providers/vdb/vdb-elasticsearch/tests/unit_tests/test_elasticsearch_vector.py index 9ecf0caa24..48f1f6dc26 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/elasticsearch/test_elasticsearch_vector.py +++ b/api/providers/vdb/vdb-elasticsearch/tests/unit_tests/test_elasticsearch_vector.py @@ -42,7 +42,7 @@ def elasticsearch_module(monkeypatch): for name, module in _build_fake_elasticsearch_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.elasticsearch.elasticsearch_vector as module + import dify_vdb_elasticsearch.elasticsearch_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-hologres/pyproject.toml b/api/providers/vdb/vdb-hologres/pyproject.toml new file mode 100644 index 0000000000..88044bf6d6 --- /dev/null +++ b/api/providers/vdb/vdb-hologres/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-hologres" +version = "0.0.1" + +dependencies = [ + "holo-search-sdk>=0.4.2", +] +description = "Dify vector store backend (dify-vdb-hologres)." + +[project.entry-points."dify.vector_backends"] +hologres = "dify_vdb_hologres.hologres_vector:HologresVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/hologres/__init__.py b/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/hologres/__init__.py rename to api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/__init__.py diff --git a/api/core/rag/datasource/vdb/hologres/hologres_vector.py b/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py similarity index 97% rename from api/core/rag/datasource/vdb/hologres/hologres_vector.py rename to api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py index 13d48b5668..80c0ed582e 100644 --- a/api/core/rag/datasource/vdb/hologres/hologres_vector.py +++ b/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py @@ -1,7 +1,7 @@ import json import logging import time -from typing import Any +from typing import Any, cast import holo_search_sdk as holo # type: ignore from holo_search_sdk.types import BaseQuantizationType, DistanceType, TokenizerType @@ -43,7 +43,7 @@ class HologresVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("host"): raise ValueError("config HOLOGRES_HOST is required") if not values.get("database"): @@ -351,9 +351,9 @@ class HologresVectorFactory(AbstractVectorFactory): access_key_id=dify_config.HOLOGRES_ACCESS_KEY_ID or "", access_key_secret=dify_config.HOLOGRES_ACCESS_KEY_SECRET or "", schema_name=dify_config.HOLOGRES_SCHEMA, - tokenizer=dify_config.HOLOGRES_TOKENIZER, - distance_method=dify_config.HOLOGRES_DISTANCE_METHOD, - base_quantization_type=dify_config.HOLOGRES_BASE_QUANTIZATION_TYPE, + tokenizer=cast(TokenizerType, dify_config.HOLOGRES_TOKENIZER), + distance_method=cast(DistanceType, dify_config.HOLOGRES_DISTANCE_METHOD), + base_quantization_type=cast(BaseQuantizationType, dify_config.HOLOGRES_BASE_QUANTIZATION_TYPE), max_degree=dify_config.HOLOGRES_MAX_DEGREE, ef_construction=dify_config.HOLOGRES_EF_CONSTRUCTION, ), diff --git a/api/tests/integration_tests/vdb/__mock/hologres.py b/api/providers/vdb/vdb-hologres/tests/integration_tests/conftest.py similarity index 82% rename from api/tests/integration_tests/vdb/__mock/hologres.py rename to api/providers/vdb/vdb-hologres/tests/integration_tests/conftest.py index b60cf358c0..d28ded0187 100644 --- a/api/tests/integration_tests/vdb/__mock/hologres.py +++ b/api/providers/vdb/vdb-hologres/tests/integration_tests/conftest.py @@ -7,13 +7,10 @@ import pytest from _pytest.monkeypatch import MonkeyPatch from psycopg import sql as psql -# Shared in-memory storage: {table_name: {doc_id: {"id", "text", "meta", "embedding"}}} _mock_tables: dict[str, dict[str, dict[str, Any]]] = {} class MockSearchQuery: - """Mock query builder for search_vector and search_text results.""" - def __init__(self, table_name: str, search_type: str): self._table_name = table_name self._search_type = search_type @@ -32,17 +29,13 @@ class MockSearchQuery: return self def _apply_filter(self, row: dict[str, Any]) -> bool: - """Apply the filter SQL to check if a row matches.""" if self._filter_sql is None: return True - # Extract literals (the document IDs) from the filter SQL - # Filter format: meta->>'document_id' IN ('doc1', 'doc2') literals = [v for t, v in _extract_identifiers_and_literals(self._filter_sql) if t == "literal"] if not literals: return True - # Get the document_id from the row's meta field meta = row.get("meta", "{}") if isinstance(meta, str): meta = json.loads(meta) @@ -54,22 +47,17 @@ class MockSearchQuery: data = _mock_tables.get(self._table_name, {}) results = [] for row in list(data.values())[: self._limit_val]: - # Apply filter if present if not self._apply_filter(row): continue if self._search_type == "vector": - # row format expected by _process_vector_results: (distance, id, text, meta) results.append((0.1, row["id"], row["text"], row["meta"])) else: - # row format expected by _process_full_text_results: (id, text, meta, embedding, score) results.append((row["id"], row["text"], row["meta"], row.get("embedding", []), 0.9)) return results class MockTable: - """Mock table object returned by client.open_table().""" - def __init__(self, table_name: str): self._table_name = table_name @@ -97,7 +85,6 @@ class MockTable: def _extract_sql_template(query) -> str: - """Extract the SQL template string from a psycopg Composed object.""" if isinstance(query, psql.Composed): for part in query: if isinstance(part, psql.SQL): @@ -108,7 +95,6 @@ def _extract_sql_template(query) -> str: def _extract_identifiers_and_literals(query) -> list[Any]: - """Extract Identifier and Literal values from a psycopg Composed object.""" values: list[Any] = [] if isinstance(query, psql.Composed): for part in query: @@ -117,7 +103,6 @@ def _extract_identifiers_and_literals(query) -> list[Any]: elif isinstance(part, psql.Literal): values.append(("literal", part._obj)) elif isinstance(part, psql.Composed): - # Handles SQL(...).join(...) for IN clauses for sub in part: if isinstance(sub, psql.Literal): values.append(("literal", sub._obj)) @@ -125,8 +110,6 @@ def _extract_identifiers_and_literals(query) -> list[Any]: class MockHologresClient: - """Mock holo_search_sdk client that stores data in memory.""" - def connect(self): pass @@ -141,21 +124,18 @@ class MockHologresClient: params = _extract_identifiers_and_literals(query) if "CREATE TABLE" in template.upper(): - # Extract table name from first identifier table_name = next((v for t, v in params if t == "ident"), "unknown") if table_name not in _mock_tables: _mock_tables[table_name] = {} return None if "SELECT 1" in template: - # text_exists: SELECT 1 FROM {table} WHERE id = {id} LIMIT 1 table_name = next((v for t, v in params if t == "ident"), "") doc_id = next((v for t, v in params if t == "literal"), "") data = _mock_tables.get(table_name, {}) return [(1,)] if doc_id in data else [] if "SELECT id" in template: - # get_ids_by_metadata_field: SELECT id FROM {table} WHERE meta->>{key} = {value} table_name = next((v for t, v in params if t == "ident"), "") literals = [v for t, v in params if t == "literal"] key = literals[0] if len(literals) > 0 else "" @@ -166,12 +146,10 @@ class MockHologresClient: if "DELETE" in template.upper(): table_name = next((v for t, v in params if t == "ident"), "") if "id IN" in template: - # delete_by_ids ids_to_delete = [v for t, v in params if t == "literal"] for did in ids_to_delete: _mock_tables.get(table_name, {}).pop(did, None) elif "meta->>" in template: - # delete_by_metadata_field literals = [v for t, v in params if t == "literal"] key = literals[0] if len(literals) > 0 else "" value = literals[1] if len(literals) > 1 else "" @@ -190,7 +168,6 @@ class MockHologresClient: def mock_connect(**kwargs): - """Replacement for holo_search_sdk.connect() that returns a mock client.""" return MockHologresClient() diff --git a/api/tests/integration_tests/vdb/hologres/test_hologres.py b/api/providers/vdb/vdb-hologres/tests/integration_tests/test_hologres.py similarity index 94% rename from api/tests/integration_tests/vdb/hologres/test_hologres.py rename to api/providers/vdb/vdb-hologres/tests/integration_tests/test_hologres.py index d81e18841e..04024be4ae 100644 --- a/api/tests/integration_tests/vdb/hologres/test_hologres.py +++ b/api/providers/vdb/vdb-hologres/tests/integration_tests/test_hologres.py @@ -2,16 +2,11 @@ import os import uuid from typing import cast +from dify_vdb_hologres.hologres_vector import HologresVector, HologresVectorConfig from holo_search_sdk.types import BaseQuantizationType, DistanceType, TokenizerType -from core.rag.datasource.vdb.hologres.hologres_vector import HologresVector, HologresVectorConfig +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text from core.rag.models.document import Document -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text - -pytest_plugins = ( - "tests.integration_tests.vdb.test_vector_store", - "tests.integration_tests.vdb.__mock.hologres", -) MOCK = os.getenv("MOCK_SWITCH", "false").lower() == "true" diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/hologres/test_hologres_vector.py b/api/providers/vdb/vdb-hologres/tests/unit_tests/test_hologres_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/hologres/test_hologres_vector.py rename to api/providers/vdb/vdb-hologres/tests/unit_tests/test_hologres_vector.py index 5d9e744ded..f9a557ecce 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/hologres/test_hologres_vector.py +++ b/api/providers/vdb/vdb-hologres/tests/unit_tests/test_hologres_vector.py @@ -42,7 +42,7 @@ def hologres_module(monkeypatch): for name, module in _build_fake_hologres_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.hologres.hologres_vector as module + import dify_vdb_hologres.hologres_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-huawei-cloud/pyproject.toml b/api/providers/vdb/vdb-huawei-cloud/pyproject.toml new file mode 100644 index 0000000000..71af56786c --- /dev/null +++ b/api/providers/vdb/vdb-huawei-cloud/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-huawei-cloud" +version = "0.0.1" + +dependencies = [ + "elasticsearch==8.14.0", +] +description = "Dify vector store backend (dify-vdb-huawei-cloud)." + +[project.entry-points."dify.vector_backends"] +huawei_cloud = "dify_vdb_huawei_cloud.huawei_cloud_vector:HuaweiCloudVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/huawei/__init__.py b/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/huawei/__init__.py rename to api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/__init__.py diff --git a/api/core/rag/datasource/vdb/huawei/huawei_cloud_vector.py b/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py similarity index 91% rename from api/core/rag/datasource/vdb/huawei/huawei_cloud_vector.py rename to api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py index df02c584ed..d51075d2e8 100644 --- a/api/core/rag/datasource/vdb/huawei/huawei_cloud_vector.py +++ b/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py @@ -5,6 +5,7 @@ from typing import Any from elasticsearch import Elasticsearch from pydantic import BaseModel, model_validator +from typing_extensions import TypedDict from configs import dify_config from core.rag.datasource.vdb.field import Field @@ -19,6 +20,16 @@ from models.dataset import Dataset logger = logging.getLogger(__name__) +class HuaweiElasticsearchParamsDict(TypedDict, total=False): + hosts: list[str] + verify_certs: bool + ssl_show_warn: bool + request_timeout: int + retry_on_timeout: bool + max_retries: int + basic_auth: tuple[str, str] + + def create_ssl_context() -> ssl.SSLContext: ssl_context = ssl.create_default_context() ssl_context.check_hostname = False @@ -33,20 +44,20 @@ class HuaweiCloudVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["hosts"]: raise ValueError("config HOSTS is required") return values - def to_elasticsearch_params(self) -> dict[str, Any]: - params = { - "hosts": self.hosts.split(","), - "verify_certs": False, - "ssl_show_warn": False, - "request_timeout": 30000, - "retry_on_timeout": True, - "max_retries": 10, - } + def to_elasticsearch_params(self) -> HuaweiElasticsearchParamsDict: + params = HuaweiElasticsearchParamsDict( + hosts=self.hosts.split(","), + verify_certs=False, + ssl_show_warn=False, + request_timeout=30000, + retry_on_timeout=True, + max_retries=10, + ) if self.username and self.password: params["basic_auth"] = (self.username, self.password) return params @@ -158,7 +169,7 @@ class HuaweiCloudVector(BaseVector): self, embeddings: list[list[float]], metadatas: list[dict[Any, Any]] | None = None, - index_params: dict | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/tests/integration_tests/vdb/__mock/huaweicloudvectordb.py b/api/providers/vdb/vdb-huawei-cloud/tests/integration_tests/conftest.py similarity index 100% rename from api/tests/integration_tests/vdb/__mock/huaweicloudvectordb.py rename to api/providers/vdb/vdb-huawei-cloud/tests/integration_tests/conftest.py diff --git a/api/tests/integration_tests/vdb/huawei/test_huawei_cloud.py b/api/providers/vdb/vdb-huawei-cloud/tests/integration_tests/test_huawei_cloud.py similarity index 69% rename from api/tests/integration_tests/vdb/huawei/test_huawei_cloud.py rename to api/providers/vdb/vdb-huawei-cloud/tests/integration_tests/test_huawei_cloud.py index 01f511358a..bb5f5b72ef 100644 --- a/api/tests/integration_tests/vdb/huawei/test_huawei_cloud.py +++ b/api/providers/vdb/vdb-huawei-cloud/tests/integration_tests/test_huawei_cloud.py @@ -1,10 +1,6 @@ -from core.rag.datasource.vdb.huawei.huawei_cloud_vector import HuaweiCloudVector, HuaweiCloudVectorConfig -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text +from dify_vdb_huawei_cloud.huawei_cloud_vector import HuaweiCloudVector, HuaweiCloudVectorConfig -pytest_plugins = ( - "tests.integration_tests.vdb.test_vector_store", - "tests.integration_tests.vdb.__mock.huaweicloudvectordb", -) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text class HuaweiCloudVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/huawei/test_huawei_cloud_vector.py b/api/providers/vdb/vdb-huawei-cloud/tests/unit_tests/test_huawei_cloud_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/huawei/test_huawei_cloud_vector.py rename to api/providers/vdb/vdb-huawei-cloud/tests/unit_tests/test_huawei_cloud_vector.py index 9d23dfcf63..ba3f14912b 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/huawei/test_huawei_cloud_vector.py +++ b/api/providers/vdb/vdb-huawei-cloud/tests/unit_tests/test_huawei_cloud_vector.py @@ -33,7 +33,7 @@ def huawei_module(monkeypatch): for name, module in _build_fake_elasticsearch_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.huawei.huawei_cloud_vector as module + import dify_vdb_huawei_cloud.huawei_cloud_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-iris/pyproject.toml b/api/providers/vdb/vdb-iris/pyproject.toml new file mode 100644 index 0000000000..6dd7a8e073 --- /dev/null +++ b/api/providers/vdb/vdb-iris/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-iris" +version = "0.0.1" + +dependencies = [ + "intersystems-irispython>=5.1.0", +] +description = "Dify vector store backend (dify-vdb-iris)." + +[project.entry-points."dify.vector_backends"] +iris = "dify_vdb_iris.iris_vector:IrisVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/iris/__init__.py b/api/providers/vdb/vdb-iris/src/dify_vdb_iris/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/iris/__init__.py rename to api/providers/vdb/vdb-iris/src/dify_vdb_iris/__init__.py diff --git a/api/core/rag/datasource/vdb/iris/iris_vector.py b/api/providers/vdb/vdb-iris/src/dify_vdb_iris/iris_vector.py similarity index 100% rename from api/core/rag/datasource/vdb/iris/iris_vector.py rename to api/providers/vdb/vdb-iris/src/dify_vdb_iris/iris_vector.py diff --git a/api/tests/integration_tests/vdb/iris/test_iris.py b/api/providers/vdb/vdb-iris/tests/integration_tests/test_iris.py similarity index 85% rename from api/tests/integration_tests/vdb/iris/test_iris.py rename to api/providers/vdb/vdb-iris/tests/integration_tests/test_iris.py index 4b2da8387b..8281e89c8a 100644 --- a/api/tests/integration_tests/vdb/iris/test_iris.py +++ b/api/providers/vdb/vdb-iris/tests/integration_tests/test_iris.py @@ -1,12 +1,11 @@ """Integration tests for IRIS vector database.""" -from core.rag.datasource.vdb.iris.iris_vector import IrisVector, IrisVectorConfig -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_iris.iris_vector import IrisVector, IrisVectorConfig + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class IrisVectorTest(AbstractVectorTest): """Test suite for IRIS vector store implementation.""" diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/iris/test_iris_vector.py b/api/providers/vdb/vdb-iris/tests/unit_tests/test_iris_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/iris/test_iris_vector.py rename to api/providers/vdb/vdb-iris/tests/unit_tests/test_iris_vector.py index 63338ca809..8c038e82b9 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/iris/test_iris_vector.py +++ b/api/providers/vdb/vdb-iris/tests/unit_tests/test_iris_vector.py @@ -26,7 +26,7 @@ def _build_fake_iris_module(): def iris_module(monkeypatch): monkeypatch.setitem(sys.modules, "iris", _build_fake_iris_module()) - import core.rag.datasource.vdb.iris.iris_vector as module + import dify_vdb_iris.iris_vector as module reloaded = importlib.reload(module) reloaded._pool_instance = None diff --git a/api/providers/vdb/vdb-lindorm/pyproject.toml b/api/providers/vdb/vdb-lindorm/pyproject.toml new file mode 100644 index 0000000000..0cffc67491 --- /dev/null +++ b/api/providers/vdb/vdb-lindorm/pyproject.toml @@ -0,0 +1,15 @@ +[project] +name = "dify-vdb-lindorm" +version = "0.0.1" + +dependencies = [ + "opensearch-py==3.1.0", + "tenacity>=8.0.0", +] +description = "Dify vector store backend (dify-vdb-lindorm)." + +[project.entry-points."dify.vector_backends"] +lindorm = "dify_vdb_lindorm.lindorm_vector:LindormVectorStoreFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/lindorm/__init__.py b/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/lindorm/__init__.py rename to api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/__init__.py diff --git a/api/core/rag/datasource/vdb/lindorm/lindorm_vector.py b/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py similarity index 96% rename from api/core/rag/datasource/vdb/lindorm/lindorm_vector.py rename to api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py index bfcb620618..9187ca943d 100644 --- a/api/core/rag/datasource/vdb/lindorm/lindorm_vector.py +++ b/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py @@ -7,6 +7,7 @@ from opensearchpy import OpenSearch, helpers from opensearchpy.helpers import BulkIndexError from pydantic import BaseModel, model_validator from tenacity import retry, stop_after_attempt, wait_exponential +from typing_extensions import TypedDict from configs import dify_config from core.rag.datasource.vdb.field import Field @@ -26,6 +27,14 @@ ROUTING_FIELD = "routing_field" UGC_INDEX_PREFIX = "ugc_index" +class LindormOpenSearchParamsDict(TypedDict, total=False): + hosts: str | None + use_ssl: bool + pool_maxsize: int + timeout: int + http_auth: tuple[str, str] + + class LindormVectorStoreConfig(BaseModel): hosts: str | None username: str | None = None @@ -35,7 +44,7 @@ class LindormVectorStoreConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["hosts"]: raise ValueError("config URL is required") if not values["username"]: @@ -44,13 +53,13 @@ class LindormVectorStoreConfig(BaseModel): raise ValueError("config PASSWORD is required") return values - def to_opensearch_params(self) -> dict[str, Any]: - params: dict[str, Any] = { - "hosts": self.hosts, - "use_ssl": False, - "pool_maxsize": 128, - "timeout": 30, - } + def to_opensearch_params(self) -> LindormOpenSearchParamsDict: + params = LindormOpenSearchParamsDict( + hosts=self.hosts, + use_ssl=False, + pool_maxsize=128, + timeout=30, + ) if self.username and self.password: params["http_auth"] = (self.username, self.password) return params @@ -327,7 +336,10 @@ class LindormVectorStore(BaseVector): return docs def create_collection( - self, embeddings: list, metadatas: list[dict] | None = None, index_params: dict | None = None + self, + embeddings: list[list[float]], + metadatas: list[dict[str, Any]] | None = None, + index_params: dict[str, Any] | None = None, ): if not embeddings: raise ValueError(f"Embeddings list cannot be empty for collection create '{self._collection_name}'") diff --git a/api/tests/integration_tests/vdb/lindorm/test_lindorm.py b/api/providers/vdb/vdb-lindorm/tests/integration_tests/test_lindorm.py similarity index 88% rename from api/tests/integration_tests/vdb/lindorm/test_lindorm.py rename to api/providers/vdb/vdb-lindorm/tests/integration_tests/test_lindorm.py index b24498fdfd..0a0c2d2d59 100644 --- a/api/tests/integration_tests/vdb/lindorm/test_lindorm.py +++ b/api/providers/vdb/vdb-lindorm/tests/integration_tests/test_lindorm.py @@ -1,9 +1,8 @@ import os -from core.rag.datasource.vdb.lindorm.lindorm_vector import LindormVectorStore, LindormVectorStoreConfig -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest +from dify_vdb_lindorm.lindorm_vector import LindormVectorStore, LindormVectorStoreConfig -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest class Config: diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/lindorm/test_lindorm_vector.py b/api/providers/vdb/vdb-lindorm/tests/unit_tests/test_lindorm_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/lindorm/test_lindorm_vector.py rename to api/providers/vdb/vdb-lindorm/tests/unit_tests/test_lindorm_vector.py index 34357d5907..238145c1d6 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/lindorm/test_lindorm_vector.py +++ b/api/providers/vdb/vdb-lindorm/tests/unit_tests/test_lindorm_vector.py @@ -51,7 +51,7 @@ def lindorm_module(monkeypatch): for name, module in _build_fake_opensearch_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.lindorm.lindorm_vector as module + import dify_vdb_lindorm.lindorm_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-matrixone/pyproject.toml b/api/providers/vdb/vdb-matrixone/pyproject.toml new file mode 100644 index 0000000000..53363ed7d9 --- /dev/null +++ b/api/providers/vdb/vdb-matrixone/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-matrixone" +version = "0.0.1" + +dependencies = [ + "mo-vector~=0.1.13", +] +description = "Dify vector store backend (dify-vdb-matrixone)." + +[project.entry-points."dify.vector_backends"] +matrixone = "dify_vdb_matrixone.matrixone_vector:MatrixoneVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/matrixone/__init__.py b/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/matrixone/__init__.py rename to api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/__init__.py diff --git a/api/core/rag/datasource/vdb/matrixone/matrixone_vector.py b/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/matrixone/matrixone_vector.py rename to api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py index c6ebccd204..75fb54e6f4 100644 --- a/api/core/rag/datasource/vdb/matrixone/matrixone_vector.py +++ b/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py @@ -43,7 +43,7 @@ class MatrixoneConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config host is required") if not values["port"]: diff --git a/api/tests/integration_tests/vdb/matrixone/test_matrixone.py b/api/providers/vdb/vdb-matrixone/tests/integration_tests/test_matrixone.py similarity index 74% rename from api/tests/integration_tests/vdb/matrixone/test_matrixone.py rename to api/providers/vdb/vdb-matrixone/tests/integration_tests/test_matrixone.py index fe592f6699..d6f4781e65 100644 --- a/api/tests/integration_tests/vdb/matrixone/test_matrixone.py +++ b/api/providers/vdb/vdb-matrixone/tests/integration_tests/test_matrixone.py @@ -1,10 +1,9 @@ -from core.rag.datasource.vdb.matrixone.matrixone_vector import MatrixoneConfig, MatrixoneVector -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_matrixone.matrixone_vector import MatrixoneConfig, MatrixoneVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class MatrixoneVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/matrixone/test_matrixone_vector.py b/api/providers/vdb/vdb-matrixone/tests/unit_tests/test_matrixone_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/matrixone/test_matrixone_vector.py rename to api/providers/vdb/vdb-matrixone/tests/unit_tests/test_matrixone_vector.py index 55e7b9112e..c22f4304e5 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/matrixone/test_matrixone_vector.py +++ b/api/providers/vdb/vdb-matrixone/tests/unit_tests/test_matrixone_vector.py @@ -36,7 +36,7 @@ def matrixone_module(monkeypatch): for name, module in _build_fake_mo_vector_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.matrixone.matrixone_vector as module + import dify_vdb_matrixone.matrixone_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-milvus/pyproject.toml b/api/providers/vdb/vdb-milvus/pyproject.toml new file mode 100644 index 0000000000..57385a4431 --- /dev/null +++ b/api/providers/vdb/vdb-milvus/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-milvus" +version = "0.0.1" + +dependencies = [ + "pymilvus~=2.6.12", +] +description = "Dify vector store backend (dify-vdb-milvus)." + +[project.entry-points."dify.vector_backends"] +milvus = "dify_vdb_milvus.milvus_vector:MilvusVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/milvus/__init__.py b/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/milvus/__init__.py rename to api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/__init__.py diff --git a/api/core/rag/datasource/vdb/milvus/milvus_vector.py b/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py similarity index 98% rename from api/core/rag/datasource/vdb/milvus/milvus_vector.py rename to api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py index 7cdb2d3a99..46f3224a95 100644 --- a/api/core/rag/datasource/vdb/milvus/milvus_vector.py +++ b/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py @@ -45,7 +45,7 @@ class MilvusConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): """ Validate the configuration values. Raises ValueError if required fields are missing. @@ -302,7 +302,10 @@ class MilvusVector(BaseVector): ) def create_collection( - self, embeddings: list, metadatas: list[dict] | None = None, index_params: dict | None = None + self, + embeddings: list[list[float]], + metadatas: list[dict[str, Any]] | None = None, + index_params: dict[str, Any] | None = None, ): """ Create a new collection in Milvus with the specified schema and index parameters. diff --git a/api/tests/integration_tests/vdb/milvus/test_milvus.py b/api/providers/vdb/vdb-milvus/tests/integration_tests/test_milvus.py similarity index 80% rename from api/tests/integration_tests/vdb/milvus/test_milvus.py rename to api/providers/vdb/vdb-milvus/tests/integration_tests/test_milvus.py index b5fc4b4d10..084d808bed 100644 --- a/api/tests/integration_tests/vdb/milvus/test_milvus.py +++ b/api/providers/vdb/vdb-milvus/tests/integration_tests/test_milvus.py @@ -1,11 +1,10 @@ -from core.rag.datasource.vdb.milvus.milvus_vector import MilvusConfig, MilvusVector -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_milvus.milvus_vector import MilvusConfig, MilvusVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, get_example_text, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class MilvusVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/milvus/test_milvus.py b/api/providers/vdb/vdb-milvus/tests/unit_tests/test_milvus.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/milvus/test_milvus.py rename to api/providers/vdb/vdb-milvus/tests/unit_tests/test_milvus.py index 2ac2c40d38..36c0ed8f6f 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/milvus/test_milvus.py +++ b/api/providers/vdb/vdb-milvus/tests/unit_tests/test_milvus.py @@ -103,7 +103,7 @@ def milvus_module(monkeypatch): for name, module in _build_fake_pymilvus_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.milvus.milvus_vector as module + import dify_vdb_milvus.milvus_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-myscale/pyproject.toml b/api/providers/vdb/vdb-myscale/pyproject.toml new file mode 100644 index 0000000000..13e0f35d23 --- /dev/null +++ b/api/providers/vdb/vdb-myscale/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-myscale" +version = "0.0.1" + +dependencies = [ + "clickhouse-connect~=0.15.0", +] +description = "Dify vector store backend (dify-vdb-myscale)." + +[project.entry-points."dify.vector_backends"] +myscale = "dify_vdb_myscale.myscale_vector:MyScaleVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/myscale/__init__.py b/api/providers/vdb/vdb-myscale/src/dify_vdb_myscale/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/myscale/__init__.py rename to api/providers/vdb/vdb-myscale/src/dify_vdb_myscale/__init__.py diff --git a/api/core/rag/datasource/vdb/myscale/myscale_vector.py b/api/providers/vdb/vdb-myscale/src/dify_vdb_myscale/myscale_vector.py similarity index 100% rename from api/core/rag/datasource/vdb/myscale/myscale_vector.py rename to api/providers/vdb/vdb-myscale/src/dify_vdb_myscale/myscale_vector.py diff --git a/api/tests/integration_tests/vdb/myscale/test_myscale.py b/api/providers/vdb/vdb-myscale/tests/integration_tests/test_myscale.py similarity index 76% rename from api/tests/integration_tests/vdb/myscale/test_myscale.py rename to api/providers/vdb/vdb-myscale/tests/integration_tests/test_myscale.py index 74cefad2af..8ea42d5f45 100644 --- a/api/tests/integration_tests/vdb/myscale/test_myscale.py +++ b/api/providers/vdb/vdb-myscale/tests/integration_tests/test_myscale.py @@ -1,10 +1,9 @@ -from core.rag.datasource.vdb.myscale.myscale_vector import MyScaleConfig, MyScaleVector -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_myscale.myscale_vector import MyScaleConfig, MyScaleVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class MyScaleVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/myscale/test_myscale_vector.py b/api/providers/vdb/vdb-myscale/tests/unit_tests/test_myscale_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/myscale/test_myscale_vector.py rename to api/providers/vdb/vdb-myscale/tests/unit_tests/test_myscale_vector.py index a75ba82238..228ea92639 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/myscale/test_myscale_vector.py +++ b/api/providers/vdb/vdb-myscale/tests/unit_tests/test_myscale_vector.py @@ -42,7 +42,7 @@ def myscale_module(monkeypatch): fake_module = _build_fake_clickhouse_connect_module() monkeypatch.setitem(sys.modules, "clickhouse_connect", fake_module) - import core.rag.datasource.vdb.myscale.myscale_vector as module + import dify_vdb_myscale.myscale_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-oceanbase/pyproject.toml b/api/providers/vdb/vdb-oceanbase/pyproject.toml new file mode 100644 index 0000000000..887869a41c --- /dev/null +++ b/api/providers/vdb/vdb-oceanbase/pyproject.toml @@ -0,0 +1,16 @@ +[project] +name = "dify-vdb-oceanbase" +version = "0.0.1" + +dependencies = [ + "pyobvector~=0.2.17", + "mysql-connector-python>=9.3.0", +] +description = "Dify vector store backend (dify-vdb-oceanbase)." + +[project.entry-points."dify.vector_backends"] +oceanbase = "dify_vdb_oceanbase.oceanbase_vector:OceanBaseVectorFactory" +seekdb = "dify_vdb_oceanbase.oceanbase_vector:OceanBaseVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/oceanbase/__init__.py b/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/oceanbase/__init__.py rename to api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/__init__.py diff --git a/api/core/rag/datasource/vdb/oceanbase/oceanbase_vector.py b/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/oceanbase/oceanbase_vector.py rename to api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py index 82f419871c..69dc42169a 100644 --- a/api/core/rag/datasource/vdb/oceanbase/oceanbase_vector.py +++ b/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py @@ -49,7 +49,7 @@ class OceanBaseVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config OCEANBASE_VECTOR_HOST is required") if not values["port"]: diff --git a/api/tests/integration_tests/vdb/oceanbase/bench_oceanbase.py b/api/providers/vdb/vdb-oceanbase/tests/integration_tests/bench_oceanbase.py similarity index 87% rename from api/tests/integration_tests/vdb/oceanbase/bench_oceanbase.py rename to api/providers/vdb/vdb-oceanbase/tests/integration_tests/bench_oceanbase.py index 8b57be08c5..50f6736942 100644 --- a/api/tests/integration_tests/vdb/oceanbase/bench_oceanbase.py +++ b/api/providers/vdb/vdb-oceanbase/tests/integration_tests/bench_oceanbase.py @@ -2,11 +2,12 @@ Benchmark: OceanBase vector store — old (single-row) vs new (batch) insertion, metadata query with/without functional index, and vector search across metrics. -Usage: - uv run --project api python -m tests.integration_tests.vdb.oceanbase.bench_oceanbase +Usage (from repo root): + uv run --project api python api/packages/dify-vdb-oceanbase/tests/bench_oceanbase.py """ import json +import logging import random import statistics import time @@ -16,6 +17,8 @@ from pyobvector import VECTOR, ObVecClient, cosine_distance, inner_product, l2_d from sqlalchemy import JSON, Column, String, text from sqlalchemy.dialects.mysql import LONGTEXT +logger = logging.getLogger(__name__) + # --------------------------------------------------------------------------- # Config # --------------------------------------------------------------------------- @@ -114,7 +117,7 @@ def bench_metadata_query(client, table, doc_id, with_index=False): try: client.perform_raw_text_sql(f"CREATE INDEX idx_metadata_doc_id ON `{table}` ((metadata->>'$.document_id'))") except Exception: - pass # already exists + logger.debug("Index idx_metadata_doc_id already exists, skipping creation") sql = text(f"SELECT id FROM `{table}` WHERE metadata->>'$.document_id' = :val") times = [] @@ -164,11 +167,11 @@ def main(): client = _make_client() client_pooled = _make_client(pool_size=5, max_overflow=10, pool_recycle=3600, pool_pre_ping=True) - print("=" * 70) - print("OceanBase Vector Store — Performance Benchmark") - print(f" Endpoint : {HOST}:{PORT}") - print(f" Vec dim : {VEC_DIM}") - print("=" * 70) + logger.info("=" * 70) + logger.info("OceanBase Vector Store — Performance Benchmark") + logger.info(" Endpoint : %s:%s", HOST, PORT) + logger.info(" Vec dim : %s", VEC_DIM) + logger.info("=" * 70) # ------------------------------------------------------------------ # 1. Insertion benchmark @@ -187,10 +190,10 @@ def main(): t_batch = bench_insert_batch(client_pooled, tbl_batch, rows, batch_size=100) speedup = t_single / t_batch if t_batch > 0 else float("inf") - print(f"\n[Insert {n_docs} docs]") - print(f" Single-row : {t_single:.2f}s") - print(f" Batch(100) : {t_batch:.2f}s") - print(f" Speedup : {speedup:.1f}x") + logger.info("\n[Insert %s docs]", n_docs) + logger.info(" Single-row : %.2fs", t_single) + logger.info(" Batch(100) : %.2fs", t_batch) + logger.info(" Speedup : %.1fx", speedup) # ------------------------------------------------------------------ # 2. Metadata query benchmark (use the 1000-doc batch table) @@ -203,16 +206,16 @@ def main(): res = conn.execute(text(f"SELECT metadata->>'$.document_id' FROM `{tbl_meta}` LIMIT 1")) doc_id_1000 = res.fetchone()[0] - print("\n[Metadata filter query — 1000 rows, by document_id]") + logger.info("\n[Metadata filter query — 1000 rows, by document_id]") times_no_idx = bench_metadata_query(client, tbl_meta, doc_id_1000, with_index=False) - print(f" Without index : {_fmt(times_no_idx)}") + logger.info(" Without index : %s", _fmt(times_no_idx)) times_with_idx = bench_metadata_query(client, tbl_meta, doc_id_1000, with_index=True) - print(f" With index : {_fmt(times_with_idx)}") + logger.info(" With index : %s", _fmt(times_with_idx)) # ------------------------------------------------------------------ # 3. Vector search benchmark — across metrics # ------------------------------------------------------------------ - print("\n[Vector search — top-10, 20 queries each, on 1000 rows]") + logger.info("\n[Vector search — top-10, 20 queries each, on 1000 rows]") for metric in ["l2", "cosine", "inner_product"]: tbl_vs = f"bench_vs_{metric}" @@ -222,7 +225,7 @@ def main(): rows_vs, _ = _gen_rows(1000) bench_insert_batch(client_pooled, tbl_vs, rows_vs, batch_size=100) times = bench_vector_search(client_pooled, tbl_vs, metric, topk=10, n_queries=20) - print(f" {metric:15s}: {_fmt(times)}") + logger.info(" %-15s: %s", metric, _fmt(times)) _drop(client_pooled, tbl_vs) # ------------------------------------------------------------------ @@ -232,9 +235,9 @@ def main(): _drop(client, f"bench_single_{n}") _drop(client, f"bench_batch_{n}") - print("\n" + "=" * 70) - print("Benchmark complete.") - print("=" * 70) + logger.info("\n%s", "=" * 70) + logger.info("Benchmark complete.") + logger.info("=" * 70) if __name__ == "__main__": diff --git a/api/tests/integration_tests/vdb/oceanbase/test_oceanbase.py b/api/providers/vdb/vdb-oceanbase/tests/integration_tests/test_oceanbase.py similarity index 82% rename from api/tests/integration_tests/vdb/oceanbase/test_oceanbase.py rename to api/providers/vdb/vdb-oceanbase/tests/integration_tests/test_oceanbase.py index 410de2c5ad..28f22d3cbc 100644 --- a/api/tests/integration_tests/vdb/oceanbase/test_oceanbase.py +++ b/api/providers/vdb/vdb-oceanbase/tests/integration_tests/test_oceanbase.py @@ -1,15 +1,13 @@ import pytest - -from core.rag.datasource.vdb.oceanbase.oceanbase_vector import ( +from dify_vdb_oceanbase.oceanbase_vector import ( OceanBaseVector, OceanBaseVectorConfig, ) -from tests.integration_tests.vdb.test_vector_store import ( + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - @pytest.fixture def oceanbase_vector(): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/oceanbase/test_oceanbase_vector.py b/api/providers/vdb/vdb-oceanbase/tests/unit_tests/test_oceanbase_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/oceanbase/test_oceanbase_vector.py rename to api/providers/vdb/vdb-oceanbase/tests/unit_tests/test_oceanbase_vector.py index 27d8198ec0..31f9ff3e56 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/oceanbase/test_oceanbase_vector.py +++ b/api/providers/vdb/vdb-oceanbase/tests/unit_tests/test_oceanbase_vector.py @@ -56,7 +56,7 @@ def _build_fake_pyobvector_module(): def oceanbase_module(monkeypatch): monkeypatch.setitem(sys.modules, "pyobvector", _build_fake_pyobvector_module()) - import core.rag.datasource.vdb.oceanbase.oceanbase_vector as module + import dify_vdb_oceanbase.oceanbase_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-opengauss/pyproject.toml b/api/providers/vdb/vdb-opengauss/pyproject.toml new file mode 100644 index 0000000000..79be94b9e3 --- /dev/null +++ b/api/providers/vdb/vdb-opengauss/pyproject.toml @@ -0,0 +1,12 @@ +[project] +name = "dify-vdb-opengauss" +version = "0.0.1" + +dependencies = [] +description = "Dify vector store backend (dify-vdb-opengauss)." + +[project.entry-points."dify.vector_backends"] +opengauss = "dify_vdb_opengauss.opengauss:OpenGaussFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/opengauss/__init__.py b/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/opengauss/__init__.py rename to api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/__init__.py diff --git a/api/core/rag/datasource/vdb/opengauss/opengauss.py b/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py similarity index 99% rename from api/core/rag/datasource/vdb/opengauss/opengauss.py rename to api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py index f9dbfbeeaf..acd2471cf6 100644 --- a/api/core/rag/datasource/vdb/opengauss/opengauss.py +++ b/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py @@ -29,7 +29,7 @@ class OpenGaussConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config OPENGAUSS_HOST is required") if not values["port"]: diff --git a/api/tests/integration_tests/vdb/opengauss/test_opengauss.py b/api/providers/vdb/vdb-opengauss/tests/integration_tests/test_opengauss.py similarity index 82% rename from api/tests/integration_tests/vdb/opengauss/test_opengauss.py rename to api/providers/vdb/vdb-opengauss/tests/integration_tests/test_opengauss.py index 78436a19ee..8b444527d7 100644 --- a/api/tests/integration_tests/vdb/opengauss/test_opengauss.py +++ b/api/providers/vdb/vdb-opengauss/tests/integration_tests/test_opengauss.py @@ -1,14 +1,12 @@ import time import psycopg2 +from dify_vdb_opengauss.opengauss import OpenGauss, OpenGaussConfig -from core.rag.datasource.vdb.opengauss.opengauss import OpenGauss, OpenGaussConfig -from tests.integration_tests.vdb.test_vector_store import ( +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class OpenGaussTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/opengauss/test_opengauss.py b/api/providers/vdb/vdb-opengauss/tests/unit_tests/test_opengauss.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/opengauss/test_opengauss.py rename to api/providers/vdb/vdb-opengauss/tests/unit_tests/test_opengauss.py index 6641dbe4a0..09abd625fc 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/opengauss/test_opengauss.py +++ b/api/providers/vdb/vdb-opengauss/tests/unit_tests/test_opengauss.py @@ -41,7 +41,7 @@ def opengauss_module(monkeypatch): for name, module in _build_fake_psycopg2_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.opengauss.opengauss as module + import dify_vdb_opengauss.opengauss as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-opensearch/pyproject.toml b/api/providers/vdb/vdb-opensearch/pyproject.toml new file mode 100644 index 0000000000..56f303fdf5 --- /dev/null +++ b/api/providers/vdb/vdb-opensearch/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-opensearch" +version = "0.0.1" + +dependencies = [ + "opensearch-py==3.1.0", +] +description = "Dify vector store backend (dify-vdb-opensearch)." + +[project.entry-points."dify.vector_backends"] +opensearch = "dify_vdb_opensearch.opensearch_vector:OpenSearchVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/opensearch/__init__.py b/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/opensearch/__init__.py rename to api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/__init__.py diff --git a/api/core/rag/datasource/vdb/opensearch/opensearch_vector.py b/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py similarity index 93% rename from api/core/rag/datasource/vdb/opensearch/opensearch_vector.py rename to api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py index 2f77776807..843c495d82 100644 --- a/api/core/rag/datasource/vdb/opensearch/opensearch_vector.py +++ b/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py @@ -6,6 +6,7 @@ from uuid import uuid4 from opensearchpy import OpenSearch, Urllib3AWSV4SignerAuth, Urllib3HttpConnection, helpers from opensearchpy.helpers import BulkIndexError from pydantic import BaseModel, model_validator +from typing_extensions import TypedDict from configs import dify_config from configs.middleware.vdb.opensearch_config import AuthMethod @@ -21,6 +22,20 @@ from models.dataset import Dataset logger = logging.getLogger(__name__) +class _OpenSearchHostDict(TypedDict): + host: str + port: int + + +class OpenSearchParamsDict(TypedDict, total=False): + hosts: list[_OpenSearchHostDict] + use_ssl: bool + verify_certs: bool + connection_class: type + pool_maxsize: int + http_auth: tuple[str | None, str | None] | Urllib3AWSV4SignerAuth + + class OpenSearchConfig(BaseModel): host: str port: int @@ -34,7 +49,7 @@ class OpenSearchConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("host"): raise ValueError("config OPENSEARCH_HOST is required") if not values.get("port"): @@ -57,14 +72,14 @@ class OpenSearchConfig(BaseModel): service=self.aws_service, # type: ignore[arg-type] ) - def to_opensearch_params(self) -> dict[str, Any]: - params = { - "hosts": [{"host": self.host, "port": self.port}], - "use_ssl": self.secure, - "verify_certs": self.verify_certs, - "connection_class": Urllib3HttpConnection, - "pool_maxsize": 20, - } + def to_opensearch_params(self) -> OpenSearchParamsDict: + params = OpenSearchParamsDict( + hosts=[{"host": self.host, "port": self.port}], + use_ssl=self.secure, + verify_certs=self.verify_certs, + connection_class=Urllib3HttpConnection, + pool_maxsize=20, + ) if self.auth_method == "basic": logger.info("Using basic authentication for OpenSearch Vector DB") @@ -237,7 +252,10 @@ class OpenSearchVector(BaseVector): return docs def create_collection( - self, embeddings: list, metadatas: list[dict] | None = None, index_params: dict | None = None + self, + embeddings: list[list[float]], + metadatas: list[dict[str, Any]] | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name.lower()}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/providers/vdb/vdb-opensearch/tests/unit_tests/test_opensearch.py b/api/providers/vdb/vdb-opensearch/tests/unit_tests/test_opensearch.py new file mode 100644 index 0000000000..f2ed7cb6fb --- /dev/null +++ b/api/providers/vdb/vdb-opensearch/tests/unit_tests/test_opensearch.py @@ -0,0 +1,332 @@ +import importlib +import sys +import types +from types import SimpleNamespace +from unittest.mock import MagicMock + +import pytest + +from core.rag.datasource.vdb.field import Field +from core.rag.models.document import Document +from extensions import ext_redis + + +def _build_fake_opensearch_modules(): + """Build fake opensearchpy modules to avoid the ``from events import Events`` + namespace collision (opensearch-py #756).""" + opensearchpy = types.ModuleType("opensearchpy") + opensearchpy_helpers = types.ModuleType("opensearchpy.helpers") + + class BulkIndexError(Exception): + def __init__(self, errors): + super().__init__("bulk error") + self.errors = errors + + class Urllib3AWSV4SignerAuth: + def __init__(self, credentials, region, service): + self.credentials = credentials + self.region = region + self.service = service + + class Urllib3HttpConnection: + pass + + class _IndicesClient: + def __init__(self): + self.exists = MagicMock(return_value=False) + self.create = MagicMock() + self.delete = MagicMock() + + class OpenSearch: + def __init__(self, **kwargs): + self.kwargs = kwargs + self.indices = _IndicesClient() + self.search = MagicMock(return_value={"hits": {"hits": []}}) + self.get = MagicMock() + + helpers = SimpleNamespace(bulk=MagicMock()) + + opensearchpy.OpenSearch = OpenSearch + opensearchpy.Urllib3AWSV4SignerAuth = Urllib3AWSV4SignerAuth + opensearchpy.Urllib3HttpConnection = Urllib3HttpConnection + opensearchpy.helpers = helpers + opensearchpy_helpers.BulkIndexError = BulkIndexError + + return { + "opensearchpy": opensearchpy, + "opensearchpy.helpers": opensearchpy_helpers, + } + + +@pytest.fixture +def opensearch_module(monkeypatch): + for name, module in _build_fake_opensearch_modules().items(): + monkeypatch.setitem(sys.modules, name, module) + + import dify_vdb_opensearch.opensearch_vector as module + + return importlib.reload(module) + + +def _config(module, **overrides): + values = { + "host": "localhost", + "port": 9200, + "secure": False, + "user": "admin", + "password": "password", + } + values.update(overrides) + return module.OpenSearchConfig.model_validate(values) + + +def get_example_text() -> str: + return "This is a sample text for testing purposes." + + +class TestOpenSearchConfig: + def test_to_opensearch_params(self, opensearch_module): + config = _config(opensearch_module, secure=True) + params = config.to_opensearch_params() + + assert params["hosts"] == [{"host": "localhost", "port": 9200}] + assert params["use_ssl"] is True + assert params["verify_certs"] is True + assert params["connection_class"].__name__ == "Urllib3HttpConnection" + assert params["http_auth"] == ("admin", "password") + + def test_to_opensearch_params_with_aws_managed_iam(self, opensearch_module, monkeypatch): + class _Session: + def get_credentials(self): + return "creds" + + boto3 = types.ModuleType("boto3") + boto3.Session = _Session + monkeypatch.setitem(sys.modules, "boto3", boto3) + + config = _config( + opensearch_module, + secure=True, + auth_method="aws_managed_iam", + aws_region="ap-southeast-2", + aws_service="aoss", + host="aoss-endpoint.ap-southeast-2.aoss.amazonaws.com", + port=9201, + ) + params = config.to_opensearch_params() + + assert params["hosts"] == [{"host": "aoss-endpoint.ap-southeast-2.aoss.amazonaws.com", "port": 9201}] + assert params["use_ssl"] is True + assert params["verify_certs"] is True + assert params["connection_class"].__name__ == "Urllib3HttpConnection" + assert params["http_auth"].credentials == "creds" + assert params["http_auth"].region == "ap-southeast-2" + assert params["http_auth"].service == "aoss" + + +class TestOpenSearchVector: + COLLECTION_NAME = "test_collection" + EXAMPLE_DOC_ID = "example_doc_id" + + def _make_vector(self, module): + vector = module.OpenSearchVector(self.COLLECTION_NAME, _config(module)) + vector._client = MagicMock() + return vector + + @pytest.mark.parametrize( + ("search_response", "expected_length", "expected_doc_id"), + [ + ( + { + "hits": { + "total": {"value": 1}, + "hits": [ + { + "_source": { + "page_content": get_example_text(), + "metadata": {"document_id": "example_doc_id"}, + } + } + ], + } + }, + 1, + "example_doc_id", + ), + ({"hits": {"total": {"value": 0}, "hits": []}}, 0, None), + ], + ) + def test_search_by_full_text(self, opensearch_module, search_response, expected_length, expected_doc_id): + vector = self._make_vector(opensearch_module) + vector._client.search.return_value = search_response + + hits = vector.search_by_full_text(query=get_example_text()) + assert len(hits) == expected_length + if expected_length > 0: + assert hits[0].metadata["document_id"] == expected_doc_id + + def test_search_by_vector(self, opensearch_module): + vector = self._make_vector(opensearch_module) + query_vector = [0.1] * 128 + mock_response = { + "hits": { + "total": {"value": 1}, + "hits": [ + { + "_source": { + Field.CONTENT_KEY: get_example_text(), + Field.METADATA_KEY: {"document_id": self.EXAMPLE_DOC_ID}, + }, + "_score": 1.0, + } + ], + } + } + vector._client.search.return_value = mock_response + + hits = vector.search_by_vector(query_vector=query_vector) + + assert len(hits) > 0 + assert hits[0].metadata["document_id"] == self.EXAMPLE_DOC_ID + + def test_get_ids_by_metadata_field(self, opensearch_module): + vector = self._make_vector(opensearch_module) + mock_response = {"hits": {"total": {"value": 1}, "hits": [{"_id": "mock_id"}]}} + vector._client.search.return_value = mock_response + + doc = Document(page_content="Test content", metadata={"document_id": self.EXAMPLE_DOC_ID}) + embedding = [0.1] * 128 + + opensearch_module.helpers.bulk.reset_mock() + vector.add_texts([doc], [embedding]) + + ids = vector.get_ids_by_metadata_field(key="document_id", value=self.EXAMPLE_DOC_ID) + assert len(ids) == 1 + assert ids[0] == "mock_id" + + def test_add_texts(self, opensearch_module): + vector = self._make_vector(opensearch_module) + vector._client.index.return_value = {"result": "created"} + + doc = Document(page_content="Test content", metadata={"document_id": self.EXAMPLE_DOC_ID}) + embedding = [0.1] * 128 + + opensearch_module.helpers.bulk.reset_mock() + vector.add_texts([doc], [embedding]) + + mock_response = {"hits": {"total": {"value": 1}, "hits": [{"_id": "mock_id"}]}} + vector._client.search.return_value = mock_response + + ids = vector.get_ids_by_metadata_field(key="document_id", value=self.EXAMPLE_DOC_ID) + assert len(ids) == 1 + assert ids[0] == "mock_id" + + def test_delete_nonexistent_index(self, opensearch_module): + """ignore_unavailable=True handles non-existent indices gracefully.""" + vector = self._make_vector(opensearch_module) + vector.delete() + + vector._client.indices.delete.assert_called_once_with( + index=self.COLLECTION_NAME.lower(), ignore_unavailable=True + ) + + def test_delete_existing_index(self, opensearch_module): + vector = self._make_vector(opensearch_module) + vector.delete() + + vector._client.indices.delete.assert_called_once_with( + index=self.COLLECTION_NAME.lower(), ignore_unavailable=True + ) + + +@pytest.fixture(scope="module") +def setup_mock_redis(): + ext_redis.redis_client.get = MagicMock(return_value=None) + ext_redis.redis_client.set = MagicMock(return_value=None) + + mock_redis_lock = MagicMock() + mock_redis_lock.__enter__ = MagicMock() + mock_redis_lock.__exit__ = MagicMock() + ext_redis.redis_client.lock = MagicMock(return_value=mock_redis_lock) + + +@pytest.mark.usefixtures("setup_mock_redis") +class TestOpenSearchVectorWithRedis: + COLLECTION_NAME = "test_collection" + EXAMPLE_DOC_ID = "example_doc_id" + + def _make_vector(self, module): + vector = module.OpenSearchVector(self.COLLECTION_NAME, _config(module)) + vector._client = MagicMock() + return vector + + def test_search_by_full_text(self, opensearch_module): + vector = self._make_vector(opensearch_module) + search_response = { + "hits": { + "total": {"value": 1}, + "hits": [ + {"_source": {"page_content": get_example_text(), "metadata": {"document_id": "example_doc_id"}}} + ], + } + } + vector._client.search.return_value = search_response + + hits = vector.search_by_full_text(query=get_example_text()) + assert len(hits) == 1 + assert hits[0].metadata["document_id"] == "example_doc_id" + + def test_get_ids_by_metadata_field(self, opensearch_module): + vector = self._make_vector(opensearch_module) + mock_response = {"hits": {"total": {"value": 1}, "hits": [{"_id": "mock_id"}]}} + vector._client.search.return_value = mock_response + + doc = Document(page_content="Test content", metadata={"document_id": self.EXAMPLE_DOC_ID}) + embedding = [0.1] * 128 + + opensearch_module.helpers.bulk.reset_mock() + vector.add_texts([doc], [embedding]) + + ids = vector.get_ids_by_metadata_field(key="document_id", value=self.EXAMPLE_DOC_ID) + assert len(ids) == 1 + assert ids[0] == "mock_id" + + def test_add_texts(self, opensearch_module): + vector = self._make_vector(opensearch_module) + vector._client.index.return_value = {"result": "created"} + + doc = Document(page_content="Test content", metadata={"document_id": self.EXAMPLE_DOC_ID}) + embedding = [0.1] * 128 + + opensearch_module.helpers.bulk.reset_mock() + vector.add_texts([doc], [embedding]) + + mock_response = {"hits": {"total": {"value": 1}, "hits": [{"_id": "mock_id"}]}} + vector._client.search.return_value = mock_response + + ids = vector.get_ids_by_metadata_field(key="document_id", value=self.EXAMPLE_DOC_ID) + assert len(ids) == 1 + assert ids[0] == "mock_id" + + def test_search_by_vector(self, opensearch_module): + vector = self._make_vector(opensearch_module) + query_vector = [0.1] * 128 + mock_response = { + "hits": { + "total": {"value": 1}, + "hits": [ + { + "_source": { + Field.CONTENT_KEY: get_example_text(), + Field.METADATA_KEY: {"document_id": self.EXAMPLE_DOC_ID}, + }, + "_score": 1.0, + } + ], + } + } + vector._client.search.return_value = mock_response + + hits = vector.search_by_vector(query_vector=query_vector) + assert len(hits) > 0 + assert hits[0].metadata["document_id"] == self.EXAMPLE_DOC_ID diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/opensearch/test_opensearch_vector.py b/api/providers/vdb/vdb-opensearch/tests/unit_tests/test_opensearch_vector.py similarity index 98% rename from api/tests/unit_tests/core/rag/datasource/vdb/opensearch/test_opensearch_vector.py rename to api/providers/vdb/vdb-opensearch/tests/unit_tests/test_opensearch_vector.py index 1030158dd1..1c2921f85b 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/opensearch/test_opensearch_vector.py +++ b/api/providers/vdb/vdb-opensearch/tests/unit_tests/test_opensearch_vector.py @@ -10,6 +10,8 @@ from pydantic import ValidationError from core.rag.models.document import Document +# TODO(wylswz): There's a known issue with namespace collision +# https://github.com/langgenius/dify/issues/34732 def _build_fake_opensearch_modules(): opensearchpy = types.ModuleType("opensearchpy") opensearchpy_helpers = types.ModuleType("opensearchpy.helpers") @@ -60,7 +62,7 @@ def opensearch_module(monkeypatch): for name, module in _build_fake_opensearch_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.opensearch.opensearch_vector as module + import dify_vdb_opensearch.opensearch_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-oracle/pyproject.toml b/api/providers/vdb/vdb-oracle/pyproject.toml new file mode 100644 index 0000000000..6747485041 --- /dev/null +++ b/api/providers/vdb/vdb-oracle/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-oracle" +version = "0.0.1" + +dependencies = [ + "oracledb==3.4.2", +] +description = "Dify vector store backend (dify-vdb-oracle)." + +[project.entry-points."dify.vector_backends"] +oracle = "dify_vdb_oracle.oraclevector:OracleVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/oracle/__init__.py b/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/oracle/__init__.py rename to api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/__init__.py diff --git a/api/core/rag/datasource/vdb/oracle/oraclevector.py b/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py similarity index 99% rename from api/core/rag/datasource/vdb/oracle/oraclevector.py rename to api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py index cb05c22b55..70377c82c8 100644 --- a/api/core/rag/datasource/vdb/oracle/oraclevector.py +++ b/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py @@ -36,7 +36,7 @@ class OracleVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["user"]: raise ValueError("config ORACLE_USER is required") if not values["password"]: diff --git a/api/tests/integration_tests/vdb/oracle/test_oraclevector.py b/api/providers/vdb/vdb-oracle/tests/integration_tests/test_oraclevector.py similarity index 76% rename from api/tests/integration_tests/vdb/oracle/test_oraclevector.py rename to api/providers/vdb/vdb-oracle/tests/integration_tests/test_oraclevector.py index 8920dc97eb..aceb41289c 100644 --- a/api/tests/integration_tests/vdb/oracle/test_oraclevector.py +++ b/api/providers/vdb/vdb-oracle/tests/integration_tests/test_oraclevector.py @@ -1,11 +1,10 @@ -from core.rag.datasource.vdb.oracle.oraclevector import OracleVector, OracleVectorConfig -from core.rag.models.document import Document -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_oracle.oraclevector import OracleVector, OracleVectorConfig + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, get_example_text, ) - -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +from core.rag.models.document import Document class OracleVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/oracle/test_oraclevector.py b/api/providers/vdb/vdb-oracle/tests/unit_tests/test_oraclevector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/oracle/test_oraclevector.py rename to api/providers/vdb/vdb-oracle/tests/unit_tests/test_oraclevector.py index 817a7d342b..678cf876b0 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/oracle/test_oraclevector.py +++ b/api/providers/vdb/vdb-oracle/tests/unit_tests/test_oraclevector.py @@ -55,7 +55,7 @@ def oracle_module(monkeypatch): for name, module in _build_fake_oracle_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.oracle.oraclevector as module + import dify_vdb_oracle.oraclevector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-pgvecto-rs/pyproject.toml b/api/providers/vdb/vdb-pgvecto-rs/pyproject.toml new file mode 100644 index 0000000000..9a25442e9e --- /dev/null +++ b/api/providers/vdb/vdb-pgvecto-rs/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-pgvecto-rs" +version = "0.0.1" + +dependencies = [ + "pgvecto-rs[sqlalchemy]~=0.2.2", +] +description = "Dify vector store backend (dify-vdb-pgvecto-rs)." + +[project.entry-points."dify.vector_backends"] +pgvecto-rs = "dify_vdb_pgvecto_rs.pgvecto_rs:PGVectoRSFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/pgvecto_rs/__init__.py b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/pgvecto_rs/__init__.py rename to api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/__init__.py diff --git a/api/core/rag/datasource/vdb/pgvecto_rs/collection.py b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py similarity index 80% rename from api/core/rag/datasource/vdb/pgvecto_rs/collection.py rename to api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py index c335bc610d..e087ec30a5 100644 --- a/api/core/rag/datasource/vdb/pgvecto_rs/collection.py +++ b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py @@ -1,3 +1,4 @@ +from typing import Any from uuid import UUID from numpy import ndarray @@ -8,5 +9,5 @@ class CollectionORM(DeclarativeBase): __tablename__: str id: Mapped[UUID] text: Mapped[str] - meta: Mapped[dict] + meta: Mapped[dict[str, Any]] vector: Mapped[ndarray] diff --git a/api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py similarity index 92% rename from api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py rename to api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py index 90d9173409..9c721c8bde 100644 --- a/api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py +++ b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py @@ -9,15 +9,15 @@ from pydantic import BaseModel, model_validator from sqlalchemy import Float, create_engine, insert, select, text from sqlalchemy import text as sql_text from sqlalchemy.dialects import postgresql -from sqlalchemy.orm import Mapped, Session, mapped_column +from sqlalchemy.orm import Mapped, Session, mapped_column, sessionmaker from configs import dify_config -from core.rag.datasource.vdb.pgvecto_rs.collection import CollectionORM from core.rag.datasource.vdb.vector_base import BaseVector from core.rag.datasource.vdb.vector_factory import AbstractVectorFactory from core.rag.datasource.vdb.vector_type import VectorType from core.rag.embedding.embedding_base import Embeddings from core.rag.models.document import Document +from dify_vdb_pgvecto_rs.collection import CollectionORM from extensions.ext_redis import redis_client from models.dataset import Dataset @@ -33,7 +33,7 @@ class PgvectoRSConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config PGVECTO_RS_HOST is required") if not values["port"]: @@ -55,9 +55,8 @@ class PGVectoRS(BaseVector): f"postgresql+psycopg2://{config.user}:{config.password}@{config.host}:{config.port}/{config.database}" ) self._client = create_engine(self._url) - with Session(self._client) as session: + with sessionmaker(bind=self._client).begin() as session: session.execute(text("CREATE EXTENSION IF NOT EXISTS vectors")) - session.commit() self._fields: list[str] = [] class _Table(CollectionORM): @@ -68,7 +67,7 @@ class PGVectoRS(BaseVector): primary_key=True, ) text: Mapped[str] - meta: Mapped[dict] = mapped_column(postgresql.JSONB) + meta: Mapped[dict[str, Any]] = mapped_column(postgresql.JSONB) vector: Mapped[ndarray] = mapped_column(VECTOR(dim)) self._table = _Table @@ -88,7 +87,7 @@ class PGVectoRS(BaseVector): if redis_client.get(collection_exist_cache_key): return index_name = f"{self._collection_name}_embedding_index" - with Session(self._client) as session: + with sessionmaker(bind=self._client).begin() as session: create_statement = sql_text(f""" CREATE TABLE IF NOT EXISTS {self._collection_name} ( id UUID PRIMARY KEY, @@ -111,12 +110,11 @@ class PGVectoRS(BaseVector): $$); """) session.execute(index_statement) - session.commit() redis_client.set(collection_exist_cache_key, 1, ex=3600) def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs): pks = [] - with Session(self._client) as session: + with sessionmaker(bind=self._client).begin() as session: for document, embedding in zip(documents, embeddings): pk = uuid4() session.execute( @@ -128,7 +126,6 @@ class PGVectoRS(BaseVector): ), ) pks.append(pk) - session.commit() return pks @@ -145,10 +142,9 @@ class PGVectoRS(BaseVector): def delete_by_metadata_field(self, key: str, value: str): ids = self.get_ids_by_metadata_field(key, value) if ids: - with Session(self._client) as session: + with sessionmaker(bind=self._client).begin() as session: select_statement = sql_text(f"DELETE FROM {self._collection_name} WHERE id = ANY(:ids)") session.execute(select_statement, {"ids": ids}) - session.commit() def delete_by_ids(self, ids: list[str]): with Session(self._client) as session: @@ -159,15 +155,13 @@ class PGVectoRS(BaseVector): if result: ids = [item[0] for item in result] if ids: - with Session(self._client) as session: + with sessionmaker(bind=self._client).begin() as session: select_statement = sql_text(f"DELETE FROM {self._collection_name} WHERE id = ANY(:ids)") session.execute(select_statement, {"ids": ids}) - session.commit() def delete(self): - with Session(self._client) as session: + with sessionmaker(bind=self._client).begin() as session: session.execute(sql_text(f"DROP TABLE IF EXISTS {self._collection_name}")) - session.commit() def text_exists(self, id: str) -> bool: with Session(self._client) as session: diff --git a/api/tests/integration_tests/vdb/pgvecto_rs/test_pgvecto_rs.py b/api/providers/vdb/vdb-pgvecto-rs/tests/integration_tests/test_pgvecto_rs.py similarity index 82% rename from api/tests/integration_tests/vdb/pgvecto_rs/test_pgvecto_rs.py rename to api/providers/vdb/vdb-pgvecto-rs/tests/integration_tests/test_pgvecto_rs.py index 6210613d42..9fc8627851 100644 --- a/api/tests/integration_tests/vdb/pgvecto_rs/test_pgvecto_rs.py +++ b/api/providers/vdb/vdb-pgvecto-rs/tests/integration_tests/test_pgvecto_rs.py @@ -1,11 +1,10 @@ -from core.rag.datasource.vdb.pgvecto_rs.pgvecto_rs import PGVectoRS, PgvectoRSConfig -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_pgvecto_rs.pgvecto_rs import PGVectoRS, PgvectoRSConfig + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, get_example_text, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class PGVectoRSVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/pgvecto_rs/test_pgvecto_rs.py b/api/providers/vdb/vdb-pgvecto-rs/tests/unit_tests/test_pgvecto_rs.py similarity index 87% rename from api/tests/unit_tests/core/rag/datasource/vdb/pgvecto_rs/test_pgvecto_rs.py rename to api/providers/vdb/vdb-pgvecto-rs/tests/unit_tests/test_pgvecto_rs.py index 1aec81b8ac..c3291f7f12 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/pgvecto_rs/test_pgvecto_rs.py +++ b/api/providers/vdb/vdb-pgvecto-rs/tests/unit_tests/test_pgvecto_rs.py @@ -53,13 +53,38 @@ def _session_factory(calls, execute_results=None): return _session +class _FakeBeginContext: + def __init__(self, session): + self._session = session + + def __enter__(self): + return self._session + + def __exit__(self, exc_type, exc, tb): + return None + + +def _sessionmaker_factory(calls, execute_results=None): + def _sessionmaker(*args, **kwargs): + session = _FakeSessionContext(calls=calls, execute_results=execute_results) + return MagicMock(begin=MagicMock(return_value=_FakeBeginContext(session))) + + return _sessionmaker + + +def _patch_both(monkeypatch, module, calls, execute_results=None): + """Patch both Session and sessionmaker on the module with the same call tracker.""" + monkeypatch.setattr(module, "Session", _session_factory(calls, execute_results)) + monkeypatch.setattr(module, "sessionmaker", _sessionmaker_factory(calls, execute_results)) + + @pytest.fixture def pgvecto_module(monkeypatch): for name, module in _build_fake_pgvecto_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.pgvecto_rs.collection as collection_module - import core.rag.datasource.vdb.pgvecto_rs.pgvecto_rs as module + import dify_vdb_pgvecto_rs.collection as collection_module + import dify_vdb_pgvecto_rs.pgvecto_rs as module return importlib.reload(module), importlib.reload(collection_module) @@ -105,7 +130,7 @@ def test_init_get_type_and_create_delegate(pgvecto_module, monkeypatch): module, _ = pgvecto_module session_calls = [] monkeypatch.setattr(module, "create_engine", MagicMock(return_value="engine")) - monkeypatch.setattr(module, "Session", _session_factory(session_calls)) + _patch_both(monkeypatch, module, session_calls) vector = module.PGVectoRS("collection_1", _config(module), dim=3) vector.create_collection = MagicMock() @@ -124,7 +149,7 @@ def test_create_collection_cache_and_sql_execution(pgvecto_module, monkeypatch): module, _ = pgvecto_module session_calls = [] monkeypatch.setattr(module, "create_engine", MagicMock(return_value="engine")) - monkeypatch.setattr(module, "Session", _session_factory(session_calls)) + _patch_both(monkeypatch, module, session_calls) lock = MagicMock() lock.__enter__.return_value = None @@ -151,10 +176,10 @@ def test_add_texts_get_ids_and_delete_methods(pgvecto_module, monkeypatch): execute_results = [SimpleNamespace(fetchall=lambda: [("id-1",), ("id-2",)]), SimpleNamespace(fetchall=lambda: [])] monkeypatch.setattr(module, "create_engine", MagicMock(return_value="engine")) - monkeypatch.setattr(module, "Session", _session_factory(init_calls)) + _patch_both(monkeypatch, module, init_calls) vector = module.PGVectoRS("collection_1", _config(module), dim=3) - monkeypatch.setattr(module, "Session", _session_factory(runtime_calls, execute_results=list(execute_results))) + _patch_both(monkeypatch, module, runtime_calls, execute_results=list(execute_results)) class _InsertBuilder: def __init__(self, table): @@ -179,6 +204,7 @@ def test_add_texts_get_ids_and_delete_methods(pgvecto_module, monkeypatch): "Session", _session_factory(runtime_calls, execute_results=[SimpleNamespace(fetchall=lambda: [("id-1",), ("id-2",)])]), ) + monkeypatch.setattr(module, "sessionmaker", _sessionmaker_factory(runtime_calls)) assert vector.get_ids_by_metadata_field("document_id", "doc-1") == ["id-1", "id-2"] monkeypatch.setattr( @@ -204,12 +230,13 @@ def test_add_texts_get_ids_and_delete_methods(pgvecto_module, monkeypatch): ], ), ) + monkeypatch.setattr(module, "sessionmaker", _sessionmaker_factory(runtime_calls)) vector.delete_by_ids(["doc-1"]) assert any("meta->>'doc_id' = ANY (:doc_ids)" in str(args[0]) for args, _ in runtime_calls) assert any("DELETE FROM collection_1 WHERE id = ANY(:ids)" in str(args[0]) for args, _ in runtime_calls) runtime_calls.clear() - monkeypatch.setattr(module, "Session", _session_factory(runtime_calls, execute_results=[MagicMock()])) + _patch_both(monkeypatch, module, runtime_calls, execute_results=[MagicMock()]) vector.delete() assert any("DROP TABLE IF EXISTS collection_1" in str(args[0]) for args, _ in runtime_calls) @@ -218,7 +245,7 @@ def test_text_exists_search_and_full_text(pgvecto_module, monkeypatch): module, _ = pgvecto_module init_calls = [] monkeypatch.setattr(module, "create_engine", MagicMock(return_value="engine")) - monkeypatch.setattr(module, "Session", _session_factory(init_calls)) + _patch_both(monkeypatch, module, init_calls) vector = module.PGVectoRS("collection_1", _config(module), dim=3) runtime_calls = [] @@ -277,7 +304,7 @@ def test_text_exists_search_and_full_text(pgvecto_module, monkeypatch): (SimpleNamespace(meta={"doc_id": "1"}, text="text-1"), 0.1), (SimpleNamespace(meta={"doc_id": "2"}, text="text-2"), 0.8), ] - monkeypatch.setattr(module, "Session", _session_factory(runtime_calls, execute_results=[rows])) + _patch_both(monkeypatch, module, runtime_calls, execute_results=[rows]) docs = vector.search_by_vector([0.1, 0.2], top_k=2, score_threshold=0.5, document_ids_filter=["d-1"]) assert len(docs) == 1 diff --git a/api/providers/vdb/vdb-pgvector/pyproject.toml b/api/providers/vdb/vdb-pgvector/pyproject.toml new file mode 100644 index 0000000000..2a972aa277 --- /dev/null +++ b/api/providers/vdb/vdb-pgvector/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-pgvector" +version = "0.0.1" + +dependencies = [ + "pgvector==0.4.2", +] +description = "Dify vector store backend (dify-vdb-pgvector)." + +[project.entry-points."dify.vector_backends"] +pgvector = "dify_vdb_pgvector.pgvector:PGVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/pgvector/__init__.py b/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/pgvector/__init__.py rename to api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/__init__.py diff --git a/api/core/rag/datasource/vdb/pgvector/pgvector.py b/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py similarity index 99% rename from api/core/rag/datasource/vdb/pgvector/pgvector.py rename to api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py index 0615b8312c..b1bdce0ad4 100644 --- a/api/core/rag/datasource/vdb/pgvector/pgvector.py +++ b/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py @@ -34,7 +34,7 @@ class PGVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config PGVECTOR_HOST is required") if not values["port"]: diff --git a/api/tests/integration_tests/vdb/pgvector/test_pgvector.py b/api/providers/vdb/vdb-pgvector/tests/integration_tests/test_pgvector.py similarity index 73% rename from api/tests/integration_tests/vdb/pgvector/test_pgvector.py rename to api/providers/vdb/vdb-pgvector/tests/integration_tests/test_pgvector.py index 4fdeca5a3a..974657510e 100644 --- a/api/tests/integration_tests/vdb/pgvector/test_pgvector.py +++ b/api/providers/vdb/vdb-pgvector/tests/integration_tests/test_pgvector.py @@ -1,10 +1,9 @@ -from core.rag.datasource.vdb.pgvector.pgvector import PGVector, PGVectorConfig -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_pgvector.pgvector import PGVector, PGVectorConfig + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class PGVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/pgvector/test_pgvector.py b/api/providers/vdb/vdb-pgvector/tests/unit_tests/test_pgvector.py similarity index 92% rename from api/tests/unit_tests/core/rag/datasource/vdb/pgvector/test_pgvector.py rename to api/providers/vdb/vdb-pgvector/tests/unit_tests/test_pgvector.py index 7505262eb7..99a6e00c16 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/pgvector/test_pgvector.py +++ b/api/providers/vdb/vdb-pgvector/tests/unit_tests/test_pgvector.py @@ -2,13 +2,10 @@ from contextlib import contextmanager from types import SimpleNamespace from unittest.mock import MagicMock, patch +import dify_vdb_pgvector.pgvector as pgvector_module import pytest +from dify_vdb_pgvector.pgvector import PGVector, PGVectorConfig -import core.rag.datasource.vdb.pgvector.pgvector as pgvector_module -from core.rag.datasource.vdb.pgvector.pgvector import ( - PGVector, - PGVectorConfig, -) from core.rag.models.document import Document @@ -26,7 +23,7 @@ class TestPGVector: ) self.collection_name = "test_collection" - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") def test_init(self, mock_pool_class): """Test PGVector initialization.""" mock_pool = MagicMock() @@ -41,7 +38,7 @@ class TestPGVector: assert pgvector.pg_bigm is False assert pgvector.index_hash is not None - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") def test_init_with_pg_bigm(self, mock_pool_class): """Test PGVector initialization with pg_bigm enabled.""" config = PGVectorConfig( @@ -61,8 +58,8 @@ class TestPGVector: assert pgvector.pg_bigm is True - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") - @patch("core.rag.datasource.vdb.pgvector.pgvector.redis_client") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.redis_client") def test_create_collection_basic(self, mock_redis, mock_pool_class): """Test basic collection creation.""" # Mock Redis operations @@ -104,8 +101,8 @@ class TestPGVector: # Verify Redis cache was set mock_redis.set.assert_called_once() - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") - @patch("core.rag.datasource.vdb.pgvector.pgvector.redis_client") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.redis_client") def test_create_collection_with_large_dimension(self, mock_redis, mock_pool_class): """Test collection creation with dimension > 2000 (no HNSW index).""" # Mock Redis operations @@ -139,8 +136,8 @@ class TestPGVector: hnsw_index_calls = [call for call in mock_cursor.execute.call_args_list if "hnsw" in str(call)] assert len(hnsw_index_calls) == 0 - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") - @patch("core.rag.datasource.vdb.pgvector.pgvector.redis_client") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.redis_client") def test_create_collection_with_pg_bigm(self, mock_redis, mock_pool_class): """Test collection creation with pg_bigm enabled.""" config = PGVectorConfig( @@ -180,8 +177,8 @@ class TestPGVector: bigm_index_calls = [call for call in mock_cursor.execute.call_args_list if "gin_bigm_ops" in str(call)] assert len(bigm_index_calls) == 1 - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") - @patch("core.rag.datasource.vdb.pgvector.pgvector.redis_client") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.redis_client") def test_create_collection_creates_vector_extension(self, mock_redis, mock_pool_class): """Test that vector extension is created if it doesn't exist.""" # Mock Redis operations @@ -213,8 +210,8 @@ class TestPGVector: ] assert len(create_extension_calls) == 1 - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") - @patch("core.rag.datasource.vdb.pgvector.pgvector.redis_client") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.redis_client") def test_create_collection_with_cache_hit(self, mock_redis, mock_pool_class): """Test that collection creation is skipped when cache exists.""" # Mock Redis operations - cache exists @@ -240,8 +237,8 @@ class TestPGVector: # Check that no SQL was executed (early return due to cache) assert mock_cursor.execute.call_count == 0 - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") - @patch("core.rag.datasource.vdb.pgvector.pgvector.redis_client") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.redis_client") def test_create_collection_with_redis_lock(self, mock_redis, mock_pool_class): """Test that Redis lock is used during collection creation.""" # Mock Redis operations @@ -273,7 +270,7 @@ class TestPGVector: mock_lock.__enter__.assert_called_once() mock_lock.__exit__.assert_called_once() - @patch("core.rag.datasource.vdb.pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") + @patch("dify_vdb_pgvector.pgvector.psycopg2.pool.SimpleConnectionPool") def test_get_cursor_context_manager(self, mock_pool_class): """Test that _get_cursor properly manages connection lifecycle.""" mock_pool = MagicMock() diff --git a/api/providers/vdb/vdb-qdrant/pyproject.toml b/api/providers/vdb/vdb-qdrant/pyproject.toml new file mode 100644 index 0000000000..6dd0b9560b --- /dev/null +++ b/api/providers/vdb/vdb-qdrant/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-qdrant" +version = "0.0.1" + +dependencies = [ + "qdrant-client==1.9.0", +] +description = "Dify vector store backend (dify-vdb-qdrant)." + +[project.entry-points."dify.vector_backends"] +qdrant = "dify_vdb_qdrant.qdrant_vector:QdrantVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/pyvastbase/__init__.py b/api/providers/vdb/vdb-qdrant/src/dify_vdb_qdrant/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/pyvastbase/__init__.py rename to api/providers/vdb/vdb-qdrant/src/dify_vdb_qdrant/__init__.py diff --git a/api/core/rag/datasource/vdb/qdrant/qdrant_vector.py b/api/providers/vdb/vdb-qdrant/src/dify_vdb_qdrant/qdrant_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/qdrant/qdrant_vector.py rename to api/providers/vdb/vdb-qdrant/src/dify_vdb_qdrant/qdrant_vector.py index f4fcb975c3..b5ff87fc5d 100644 --- a/api/core/rag/datasource/vdb/qdrant/qdrant_vector.py +++ b/api/providers/vdb/vdb-qdrant/src/dify_vdb_qdrant/qdrant_vector.py @@ -3,7 +3,7 @@ import os import uuid from collections.abc import Generator, Iterable, Sequence from itertools import islice -from typing import TYPE_CHECKING, Any +from typing import TYPE_CHECKING, Any, cast import qdrant_client from flask import current_app @@ -32,7 +32,6 @@ from extensions.ext_redis import redis_client from models.dataset import Dataset, DatasetCollectionBinding if TYPE_CHECKING: - from qdrant_client import grpc # noqa from qdrant_client.conversions import common_types from qdrant_client.http import models as rest @@ -180,7 +179,7 @@ class QdrantVector(BaseVector): for batch_ids, points in self._generate_rest_batches( texts, embeddings, filtered_metadatas, uuids, 64, self._group_id ): - self._client.upsert(collection_name=self._collection_name, points=points) + self._client.upsert(collection_name=self._collection_name, points=cast("common_types.Points", points)) added_ids.extend(batch_ids) return added_ids @@ -472,7 +471,7 @@ class QdrantVector(BaseVector): def _reload_if_needed(self): if isinstance(self._client, QdrantLocal): - self._client._load() + self._client._load() # pyright: ignore[reportPrivateUsage] @classmethod def _document_from_scored_point( diff --git a/api/tests/integration_tests/vdb/qdrant/test_qdrant.py b/api/providers/vdb/vdb-qdrant/tests/integration_tests/test_qdrant.py similarity index 95% rename from api/tests/integration_tests/vdb/qdrant/test_qdrant.py rename to api/providers/vdb/vdb-qdrant/tests/integration_tests/test_qdrant.py index 709cc2e14e..e0badeb5de 100644 --- a/api/tests/integration_tests/vdb/qdrant/test_qdrant.py +++ b/api/providers/vdb/vdb-qdrant/tests/integration_tests/test_qdrant.py @@ -1,12 +1,11 @@ import uuid -from core.rag.datasource.vdb.qdrant.qdrant_vector import QdrantConfig, QdrantVector -from core.rag.models.document import Document -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_qdrant.qdrant_vector import QdrantConfig, QdrantVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) - -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +from core.rag.models.document import Document class QdrantVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/qdrant/test_qdrant_vector.py b/api/providers/vdb/vdb-qdrant/tests/unit_tests/test_qdrant_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/qdrant/test_qdrant_vector.py rename to api/providers/vdb/vdb-qdrant/tests/unit_tests/test_qdrant_vector.py index 0408506563..0ed5491fbe 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/qdrant/test_qdrant_vector.py +++ b/api/providers/vdb/vdb-qdrant/tests/unit_tests/test_qdrant_vector.py @@ -125,7 +125,7 @@ def qdrant_module(monkeypatch): for name, module in _build_fake_qdrant_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.qdrant.qdrant_vector as module + import dify_vdb_qdrant.qdrant_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-relyt/pyproject.toml b/api/providers/vdb/vdb-relyt/pyproject.toml new file mode 100644 index 0000000000..2a7c7fac87 --- /dev/null +++ b/api/providers/vdb/vdb-relyt/pyproject.toml @@ -0,0 +1,12 @@ +[project] +name = "dify-vdb-relyt" +version = "0.0.1" + +dependencies = [] +description = "Dify vector store backend (dify-vdb-relyt)." + +[project.entry-points."dify.vector_backends"] +relyt = "dify_vdb_relyt.relyt_vector:RelytVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/qdrant/__init__.py b/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/qdrant/__init__.py rename to api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/__init__.py diff --git a/api/core/rag/datasource/vdb/relyt/relyt_vector.py b/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py similarity index 97% rename from api/core/rag/datasource/vdb/relyt/relyt_vector.py rename to api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py index e486375ec2..336c2d3c8a 100644 --- a/api/core/rag/datasource/vdb/relyt/relyt_vector.py +++ b/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py @@ -7,7 +7,7 @@ from pydantic import BaseModel, model_validator from sqlalchemy import Column, String, Table, create_engine, insert from sqlalchemy import text as sql_text from sqlalchemy.dialects.postgresql import JSON, TEXT -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from core.rag.datasource.vdb.vector_factory import AbstractVectorFactory from core.rag.datasource.vdb.vector_type import VectorType @@ -26,7 +26,7 @@ from extensions.ext_redis import redis_client logger = logging.getLogger(__name__) -Base = declarative_base() # type: Any +Base: Any = declarative_base() class RelytConfig(BaseModel): @@ -38,7 +38,7 @@ class RelytConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config RELYT_HOST is required") if not values["port"]: @@ -79,7 +79,7 @@ class RelytVector(BaseVector): if redis_client.get(collection_exist_cache_key): return index_name = f"{self._collection_name}_embedding_index" - with Session(self.client) as session: + with sessionmaker(bind=self.client).begin() as session: drop_statement = sql_text(f"""DROP TABLE IF EXISTS "{self._collection_name}"; """) session.execute(drop_statement) create_statement = sql_text(f""" @@ -104,7 +104,6 @@ class RelytVector(BaseVector): $$); """) session.execute(index_statement) - session.commit() redis_client.set(collection_exist_cache_key, 1, ex=3600) def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs): @@ -208,9 +207,8 @@ class RelytVector(BaseVector): self.delete_by_uuids(ids) def delete(self): - with Session(self.client) as session: + with sessionmaker(bind=self.client).begin() as session: session.execute(sql_text(f"""DROP TABLE IF EXISTS "{self._collection_name}";""")) - session.commit() def text_exists(self, id: str) -> bool: with Session(self.client) as session: @@ -241,7 +239,7 @@ class RelytVector(BaseVector): self, embedding: list[float], k: int = 4, - filter: dict | None = None, + filter: dict[str, Any] | None = None, ) -> list[tuple[Document, float]]: # Add the filter if provided diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/relyt/test_relyt_vector.py b/api/providers/vdb/vdb-relyt/tests/unit_tests/test_relyt_vector.py similarity index 93% rename from api/tests/unit_tests/core/rag/datasource/vdb/relyt/test_relyt_vector.py rename to api/providers/vdb/vdb-relyt/tests/unit_tests/test_relyt_vector.py index ca8cd5e514..f97ad1400a 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/relyt/test_relyt_vector.py +++ b/api/providers/vdb/vdb-relyt/tests/unit_tests/test_relyt_vector.py @@ -39,12 +39,31 @@ class _FakeSession: return None +class _FakeBeginContext: + def __init__(self, session): + self._session = session + + def __enter__(self): + return self._session + + def __exit__(self, exc_type, exc, tb): + return None + + +def _patch_both(monkeypatch, module, session): + """Patch both Session and sessionmaker on the module.""" + monkeypatch.setattr(module, "Session", lambda _client: session) + monkeypatch.setattr( + module, "sessionmaker", lambda **kwargs: MagicMock(begin=MagicMock(return_value=_FakeBeginContext(session))) + ) + + @pytest.fixture def relyt_module(monkeypatch): for name, module in _build_fake_relyt_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.relyt.relyt_vector as module + import dify_vdb_relyt.relyt_vector as module return importlib.reload(module) @@ -108,13 +127,13 @@ def test_create_collection_cache_and_sql_execution(relyt_module, monkeypatch): monkeypatch.setattr(relyt_module.redis_client, "get", MagicMock(return_value=1)) session = _FakeSession() - monkeypatch.setattr(relyt_module, "Session", lambda _client: session) + _patch_both(monkeypatch, relyt_module, session) vector.create_collection(3) session.execute.assert_not_called() monkeypatch.setattr(relyt_module.redis_client, "get", MagicMock(return_value=None)) session = _FakeSession() - monkeypatch.setattr(relyt_module, "Session", lambda _client: session) + _patch_both(monkeypatch, relyt_module, session) vector.create_collection(3) executed_sql = [str(call.args[0]) for call in session.execute.call_args_list] assert any("DROP TABLE IF EXISTS" in sql for sql in executed_sql) @@ -265,15 +284,15 @@ def test_search_by_vector_filters_by_score_and_ids(relyt_module): # 8. delete commits session -def test_delete_commits_session(relyt_module, monkeypatch): +def test_delete_drops_table(relyt_module, monkeypatch): vector = relyt_module.RelytVector.__new__(relyt_module.RelytVector) vector._collection_name = "collection_1" vector.client = MagicMock() vector.embedding_dimension = 3 session = _FakeSession() - monkeypatch.setattr(relyt_module, "Session", lambda _client: session) + _patch_both(monkeypatch, relyt_module, session) vector.delete() - session.commit.assert_called_once() + session.execute.assert_called_once() def test_relyt_factory_existing_and_generated_collection(relyt_module, monkeypatch): diff --git a/api/providers/vdb/vdb-tablestore/pyproject.toml b/api/providers/vdb/vdb-tablestore/pyproject.toml new file mode 100644 index 0000000000..fd1a2d54e0 --- /dev/null +++ b/api/providers/vdb/vdb-tablestore/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-tablestore" +version = "0.0.1" + +dependencies = [ + "tablestore==6.4.4", +] +description = "Dify vector store backend (dify-vdb-tablestore)." + +[project.entry-points."dify.vector_backends"] +tablestore = "dify_vdb_tablestore.tablestore_vector:TableStoreVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/relyt/__init__.py b/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/relyt/__init__.py rename to api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/__init__.py diff --git a/api/core/rag/datasource/vdb/tablestore/tablestore_vector.py b/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/tablestore/tablestore_vector.py rename to api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py index 4a734232ec..f9deac11e5 100644 --- a/api/core/rag/datasource/vdb/tablestore/tablestore_vector.py +++ b/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py @@ -30,7 +30,7 @@ class TableStoreConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["access_key_id"]: raise ValueError("config ACCESS_KEY_ID is required") if not values["access_key_secret"]: diff --git a/api/tests/integration_tests/vdb/tablestore/test_tablestore.py b/api/providers/vdb/vdb-tablestore/tests/integration_tests/test_tablestore.py similarity index 93% rename from api/tests/integration_tests/vdb/tablestore/test_tablestore.py rename to api/providers/vdb/vdb-tablestore/tests/integration_tests/test_tablestore.py index b60e26a881..97c9626ee1 100644 --- a/api/tests/integration_tests/vdb/tablestore/test_tablestore.py +++ b/api/providers/vdb/vdb-tablestore/tests/integration_tests/test_tablestore.py @@ -1,20 +1,21 @@ +import logging import os import uuid import tablestore from _pytest.python_api import approx - -from core.rag.datasource.vdb.tablestore.tablestore_vector import ( +from dify_vdb_tablestore.tablestore_vector import ( TableStoreConfig, TableStoreVector, ) -from tests.integration_tests.vdb.test_vector_store import ( + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, get_example_document, get_example_text, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +logger = logging.getLogger(__name__) class TableStoreVectorTest(AbstractVectorTest): @@ -90,7 +91,7 @@ class TableStoreVectorTest(AbstractVectorTest): try: self.vector.delete() except Exception: - pass + logger.debug("Failed to delete vector store during test setup, it may not exist yet") return super().run_all_tests() diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/tablestore/test_tablestore_vector.py b/api/providers/vdb/vdb-tablestore/tests/unit_tests/test_tablestore_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/tablestore/test_tablestore_vector.py rename to api/providers/vdb/vdb-tablestore/tests/unit_tests/test_tablestore_vector.py index e3b6676d9b..62a11e0445 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/tablestore/test_tablestore_vector.py +++ b/api/providers/vdb/vdb-tablestore/tests/unit_tests/test_tablestore_vector.py @@ -81,7 +81,7 @@ def tablestore_module(monkeypatch): fake_module = _build_fake_tablestore_module() monkeypatch.setitem(sys.modules, "tablestore", fake_module) - import core.rag.datasource.vdb.tablestore.tablestore_vector as module + import dify_vdb_tablestore.tablestore_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-tencent/pyproject.toml b/api/providers/vdb/vdb-tencent/pyproject.toml new file mode 100644 index 0000000000..7bb761b169 --- /dev/null +++ b/api/providers/vdb/vdb-tencent/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-tencent" +version = "0.0.1" + +dependencies = [ + "tcvectordb~=2.1.0", +] +description = "Dify vector store backend (dify-vdb-tencent)." + +[project.entry-points."dify.vector_backends"] +tencent = "dify_vdb_tencent.tencent_vector:TencentVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/tablestore/__init__.py b/api/providers/vdb/vdb-tencent/src/dify_vdb_tencent/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/tablestore/__init__.py rename to api/providers/vdb/vdb-tencent/src/dify_vdb_tencent/__init__.py diff --git a/api/core/rag/datasource/vdb/tencent/tencent_vector.py b/api/providers/vdb/vdb-tencent/src/dify_vdb_tencent/tencent_vector.py similarity index 100% rename from api/core/rag/datasource/vdb/tencent/tencent_vector.py rename to api/providers/vdb/vdb-tencent/src/dify_vdb_tencent/tencent_vector.py diff --git a/api/tests/integration_tests/vdb/__mock/tcvectordb.py b/api/providers/vdb/vdb-tencent/tests/integration_tests/conftest.py similarity index 100% rename from api/tests/integration_tests/vdb/__mock/tcvectordb.py rename to api/providers/vdb/vdb-tencent/tests/integration_tests/conftest.py diff --git a/api/tests/integration_tests/vdb/tcvectordb/test_tencent.py b/api/providers/vdb/vdb-tencent/tests/integration_tests/test_tencent.py similarity index 76% rename from api/tests/integration_tests/vdb/tcvectordb/test_tencent.py rename to api/providers/vdb/vdb-tencent/tests/integration_tests/test_tencent.py index 3d6deff2a0..a53ec87f92 100644 --- a/api/tests/integration_tests/vdb/tcvectordb/test_tencent.py +++ b/api/providers/vdb/vdb-tencent/tests/integration_tests/test_tencent.py @@ -1,12 +1,8 @@ from unittest.mock import MagicMock -from core.rag.datasource.vdb.tencent.tencent_vector import TencentConfig, TencentVector -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text +from dify_vdb_tencent.tencent_vector import TencentConfig, TencentVector -pytest_plugins = ( - "tests.integration_tests.vdb.test_vector_store", - "tests.integration_tests.vdb.__mock.tcvectordb", -) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text mock_client = MagicMock() mock_client.list_databases.return_value = [{"name": "test"}] diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/tencent/test_tencent_vector.py b/api/providers/vdb/vdb-tencent/tests/unit_tests/test_tencent_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/tencent/test_tencent_vector.py rename to api/providers/vdb/vdb-tencent/tests/unit_tests/test_tencent_vector.py index d8f35a6019..299e40ee1e 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/tencent/test_tencent_vector.py +++ b/api/providers/vdb/vdb-tencent/tests/unit_tests/test_tencent_vector.py @@ -140,7 +140,7 @@ def tencent_module(monkeypatch): for name, module in _build_fake_tencent_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.tencent.tencent_vector as module + import dify_vdb_tencent.tencent_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-tidb-on-qdrant/pyproject.toml b/api/providers/vdb/vdb-tidb-on-qdrant/pyproject.toml new file mode 100644 index 0000000000..5040fb38ba --- /dev/null +++ b/api/providers/vdb/vdb-tidb-on-qdrant/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-tidb-on-qdrant" +version = "0.0.1" + +dependencies = [ + "qdrant-client==1.9.0", +] +description = "Dify vector store backend (dify-vdb-tidb-on-qdrant)." + +[project.entry-points."dify.vector_backends"] +tidb_on_qdrant = "dify_vdb_tidb_on_qdrant.tidb_on_qdrant_vector:TidbOnQdrantVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/tencent/__init__.py b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/tencent/__init__.py rename to api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/__init__.py diff --git a/api/core/rag/datasource/vdb/tidb_on_qdrant/tidb_on_qdrant_vector.py b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py similarity index 88% rename from api/core/rag/datasource/vdb/tidb_on_qdrant/tidb_on_qdrant_vector.py rename to api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py index 605cc5a08f..abca55f540 100644 --- a/api/core/rag/datasource/vdb/tidb_on_qdrant/tidb_on_qdrant_vector.py +++ b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py @@ -1,4 +1,5 @@ import json +import logging import os import uuid from collections.abc import Generator, Iterable, Sequence @@ -7,6 +8,8 @@ from typing import TYPE_CHECKING, Any import httpx import qdrant_client + +logger = logging.getLogger(__name__) from flask import current_app from httpx import DigestAuth from pydantic import BaseModel @@ -24,12 +27,12 @@ from sqlalchemy import select from configs import dify_config from core.rag.datasource.vdb.field import Field -from core.rag.datasource.vdb.tidb_on_qdrant.tidb_service import TidbService from core.rag.datasource.vdb.vector_base import BaseVector, VectorIndexStructDict from core.rag.datasource.vdb.vector_factory import AbstractVectorFactory from core.rag.datasource.vdb.vector_type import VectorType from core.rag.embedding.embedding_base import Embeddings from core.rag.models.document import Document +from dify_vdb_tidb_on_qdrant.tidb_service import TidbService from extensions.ext_database import db from extensions.ext_redis import redis_client from models.dataset import Dataset, TidbAuthBinding @@ -292,26 +295,27 @@ class TidbOnQdrantVector(BaseVector): if not ids: return - try: - filter = models.Filter( - must=[ - models.FieldCondition( - key="metadata.doc_id", - match=models.MatchAny(any=ids), - ), - ], - ) - self._client.delete( - collection_name=self._collection_name, - points_selector=FilterSelector(filter=filter), - ) - except UnexpectedResponse as e: - # Collection does not exist, so return - if e.status_code == 404: - return - # Some other error occurred, so re-raise the exception - else: - raise e + batch_size = 1000 + for i in range(0, len(ids), batch_size): + batch = ids[i : i + batch_size] + + try: + filter = models.Filter( + must=[ + models.FieldCondition( + key="metadata.doc_id", + match=models.MatchAny(any=batch), + ), + ], + ) + self._client.delete( + collection_name=self._collection_name, + points_selector=FilterSelector(filter=filter), + ) + except UnexpectedResponse as e: + # Collection does not exist, so return + if e.status_code != 404: + raise e def text_exists(self, id: str) -> bool: all_collection_name = [] @@ -420,13 +424,16 @@ class TidbOnQdrantVector(BaseVector): class TidbOnQdrantVectorFactory(AbstractVectorFactory): def init_vector(self, dataset: Dataset, attributes: list, embeddings: Embeddings) -> TidbOnQdrantVector: + logger.info("init_vector: tenant_id=%s, dataset_id=%s", dataset.tenant_id, dataset.id) stmt = select(TidbAuthBinding).where(TidbAuthBinding.tenant_id == dataset.tenant_id) tidb_auth_binding = db.session.scalars(stmt).one_or_none() if not tidb_auth_binding: + logger.info("No existing TidbAuthBinding for tenant %s, acquiring lock", dataset.tenant_id) with redis_client.lock("create_tidb_serverless_cluster_lock", timeout=900): stmt = select(TidbAuthBinding).where(TidbAuthBinding.tenant_id == dataset.tenant_id) tidb_auth_binding = db.session.scalars(stmt).one_or_none() if tidb_auth_binding: + logger.info("Found binding after lock: cluster_id=%s", tidb_auth_binding.cluster_id) TIDB_ON_QDRANT_API_KEY = f"{tidb_auth_binding.account}:{tidb_auth_binding.password}" else: @@ -436,11 +443,18 @@ class TidbOnQdrantVectorFactory(AbstractVectorFactory): .limit(1) ) if idle_tidb_auth_binding: + logger.info( + "Assigning idle cluster %s to tenant %s", + idle_tidb_auth_binding.cluster_id, + dataset.tenant_id, + ) idle_tidb_auth_binding.active = True idle_tidb_auth_binding.tenant_id = dataset.tenant_id db.session.commit() + tidb_auth_binding = idle_tidb_auth_binding TIDB_ON_QDRANT_API_KEY = f"{idle_tidb_auth_binding.account}:{idle_tidb_auth_binding.password}" else: + logger.info("No idle clusters available, creating new cluster for tenant %s", dataset.tenant_id) new_cluster = TidbService.create_tidb_serverless_cluster( dify_config.TIDB_PROJECT_ID or "", dify_config.TIDB_API_URL or "", @@ -449,21 +463,39 @@ class TidbOnQdrantVectorFactory(AbstractVectorFactory): dify_config.TIDB_PRIVATE_KEY or "", dify_config.TIDB_REGION or "", ) + logger.info( + "New cluster created: cluster_id=%s, qdrant_endpoint=%s", + new_cluster["cluster_id"], + new_cluster.get("qdrant_endpoint"), + ) new_tidb_auth_binding = TidbAuthBinding( cluster_id=new_cluster["cluster_id"], cluster_name=new_cluster["cluster_name"], account=new_cluster["account"], password=new_cluster["password"], + qdrant_endpoint=new_cluster.get("qdrant_endpoint"), tenant_id=dataset.tenant_id, active=True, status=TidbAuthBindingStatus.ACTIVE, ) db.session.add(new_tidb_auth_binding) db.session.commit() + tidb_auth_binding = new_tidb_auth_binding TIDB_ON_QDRANT_API_KEY = f"{new_tidb_auth_binding.account}:{new_tidb_auth_binding.password}" else: + logger.info("Existing binding found: cluster_id=%s", tidb_auth_binding.cluster_id) TIDB_ON_QDRANT_API_KEY = f"{tidb_auth_binding.account}:{tidb_auth_binding.password}" + qdrant_url = ( + (tidb_auth_binding.qdrant_endpoint if tidb_auth_binding else None) or dify_config.TIDB_ON_QDRANT_URL or "" + ) + logger.info( + "Using qdrant endpoint: %s (from_binding=%s, fallback_global=%s)", + qdrant_url, + tidb_auth_binding.qdrant_endpoint if tidb_auth_binding else None, + dify_config.TIDB_ON_QDRANT_URL, + ) + if dataset.index_struct_dict: class_prefix: str = dataset.index_struct_dict["vector_store"]["class_prefix"] collection_name = class_prefix @@ -478,7 +510,7 @@ class TidbOnQdrantVectorFactory(AbstractVectorFactory): collection_name=collection_name, group_id=dataset.id, config=TidbOnQdrantConfig( - endpoint=dify_config.TIDB_ON_QDRANT_URL or "", + endpoint=qdrant_url, api_key=TIDB_ON_QDRANT_API_KEY, root_path=str(config.root_path), timeout=dify_config.TIDB_ON_QDRANT_CLIENT_TIMEOUT, diff --git a/api/core/rag/datasource/vdb/tidb_on_qdrant/tidb_service.py b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py similarity index 73% rename from api/core/rag/datasource/vdb/tidb_on_qdrant/tidb_service.py rename to api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py index 37114be6e7..ece061db67 100644 --- a/api/core/rag/datasource/vdb/tidb_on_qdrant/tidb_service.py +++ b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py @@ -1,3 +1,4 @@ +import logging import time import uuid from collections.abc import Sequence @@ -12,6 +13,8 @@ from extensions.ext_redis import redis_client from models.dataset import TidbAuthBinding from models.enums import TidbAuthBindingStatus +logger = logging.getLogger(__name__) + # Reuse a pooled HTTP client for all TiDB Cloud requests to minimize connection churn _tidb_http_client: httpx.Client = get_pooled_http_client( "tidb:cloud", @@ -20,6 +23,46 @@ _tidb_http_client: httpx.Client = get_pooled_http_client( class TidbService: + @staticmethod + def extract_qdrant_endpoint(cluster_response: dict) -> str | None: + """Extract the qdrant endpoint URL from a Get Cluster API response. + + Reads ``endpoints.public.host`` (e.g. ``gateway01.xx.tidbcloud.com``), + prepends ``qdrant-`` and wraps it as an ``https://`` URL. + """ + endpoints = cluster_response.get("endpoints") or {} + public = endpoints.get("public") or {} + host = public.get("host") + if host: + return f"https://qdrant-{host}" + return None + + @staticmethod + def fetch_qdrant_endpoint(api_url: str, public_key: str, private_key: str, cluster_id: str) -> str | None: + """Call Get Cluster API and extract the qdrant endpoint. + + Use ``extract_qdrant_endpoint`` instead when you already have + the cluster response to avoid a redundant API call. + """ + try: + logger.info("Fetching qdrant endpoint for cluster %s", cluster_id) + cluster_response = TidbService.get_tidb_serverless_cluster(api_url, public_key, private_key, cluster_id) + if not cluster_response: + logger.warning("Empty response from Get Cluster API for cluster %s", cluster_id) + return None + qdrant_url = TidbService.extract_qdrant_endpoint(cluster_response) + if qdrant_url: + logger.info("Resolved qdrant endpoint for cluster %s: %s", cluster_id, qdrant_url) + return qdrant_url + logger.warning( + "No endpoints.public.host found for cluster %s, response keys: %s", + cluster_id, + list(cluster_response.keys()), + ) + except Exception: + logger.exception("Failed to fetch qdrant endpoint for cluster %s", cluster_id) + return None + @staticmethod def create_tidb_serverless_cluster( project_id: str, api_url: str, iam_url: str, public_key: str, private_key: str, region: str @@ -57,6 +100,7 @@ class TidbService: "rootPassword": password, } + logger.info("Creating TiDB serverless cluster: display_name=%s, region=%s", display_name, region) response = _tidb_http_client.post( f"{api_url}/clusters", json=cluster_data, auth=DigestAuth(public_key, private_key) ) @@ -64,21 +108,39 @@ class TidbService: if response.status_code == 200: response_data = response.json() cluster_id = response_data["clusterId"] + logger.info("Cluster created, cluster_id=%s, waiting for ACTIVE state", cluster_id) retry_count = 0 max_retries = 30 while retry_count < max_retries: cluster_response = TidbService.get_tidb_serverless_cluster(api_url, public_key, private_key, cluster_id) if cluster_response["state"] == "ACTIVE": user_prefix = cluster_response["userPrefix"] + qdrant_endpoint = TidbService.extract_qdrant_endpoint(cluster_response) + logger.info( + "Cluster %s is ACTIVE, user_prefix=%s, qdrant_endpoint=%s", + cluster_id, + user_prefix, + qdrant_endpoint, + ) return { "cluster_id": cluster_id, "cluster_name": display_name, "account": f"{user_prefix}.root", "password": password, + "qdrant_endpoint": qdrant_endpoint, } - time.sleep(30) # wait 30 seconds before retrying + logger.info( + "Cluster %s state=%s, retry %d/%d", + cluster_id, + cluster_response["state"], + retry_count + 1, + max_retries, + ) + time.sleep(30) retry_count += 1 + logger.error("Cluster %s did not become ACTIVE after %d retries", cluster_id, max_retries) else: + logger.error("Failed to create cluster: status=%d, body=%s", response.status_code, response.text) response.raise_for_status() @staticmethod @@ -243,19 +305,29 @@ class TidbService: if response.status_code == 200: response_data = response.json() cluster_infos = [] + logger.info("Batch created %d clusters", len(response_data.get("clusters", []))) for item in response_data["clusters"]: cache_key = f"tidb_serverless_cluster_password:{item['displayName']}" cached_password = redis_client.get(cache_key) if not cached_password: + logger.warning("No cached password for cluster %s, skipping", item["displayName"]) continue + qdrant_endpoint = TidbService.fetch_qdrant_endpoint(api_url, public_key, private_key, item["clusterId"]) + logger.info( + "Batch cluster %s: qdrant_endpoint=%s", + item["clusterId"], + qdrant_endpoint, + ) cluster_info = { "cluster_id": item["clusterId"], "cluster_name": item["displayName"], "account": "root", "password": cached_password.decode("utf-8"), + "qdrant_endpoint": qdrant_endpoint, } cluster_infos.append(cluster_info) return cluster_infos else: + logger.error("Batch create failed: status=%d, body=%s", response.status_code, response.text) response.raise_for_status() return [] diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/tidb_on_qdrant/test_tidb_on_qdrant_vector.py b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py similarity index 64% rename from api/tests/unit_tests/core/rag/datasource/vdb/tidb_on_qdrant/test_tidb_on_qdrant_vector.py rename to api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py index c25af79ae4..76802de62e 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/tidb_on_qdrant/test_tidb_on_qdrant_vector.py +++ b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py @@ -2,13 +2,12 @@ from unittest.mock import patch import httpx import pytest -from qdrant_client.http import models as rest -from qdrant_client.http.exceptions import UnexpectedResponse - -from core.rag.datasource.vdb.tidb_on_qdrant.tidb_on_qdrant_vector import ( +from dify_vdb_tidb_on_qdrant.tidb_on_qdrant_vector import ( TidbOnQdrantConfig, TidbOnQdrantVector, ) +from qdrant_client.http import models as rest +from qdrant_client.http.exceptions import UnexpectedResponse class TestTidbOnQdrantVectorDeleteByIds: @@ -22,7 +21,7 @@ class TestTidbOnQdrantVectorDeleteByIds: api_key="test_api_key", ) - with patch("core.rag.datasource.vdb.tidb_on_qdrant.tidb_on_qdrant_vector.qdrant_client.QdrantClient"): + with patch("dify_vdb_tidb_on_qdrant.tidb_on_qdrant_vector.qdrant_client.QdrantClient"): vector = TidbOnQdrantVector( collection_name="test_collection", group_id="test_group", @@ -115,14 +114,12 @@ class TestTidbOnQdrantVectorDeleteByIds: assert exc_info.value.status_code == 500 - def test_delete_by_ids_with_large_batch(self, vector_instance): - """Test deletion with a large batch of IDs.""" - # Create 1000 IDs + def test_delete_by_ids_with_exactly_1000(self, vector_instance): + """Test deletion with exactly 1000 IDs triggers a single batch.""" ids = [f"doc_{i}" for i in range(1000)] vector_instance.delete_by_ids(ids) - # Verify single delete call with all IDs vector_instance._client.delete.assert_called_once() call_args = vector_instance._client.delete.call_args @@ -130,11 +127,28 @@ class TestTidbOnQdrantVectorDeleteByIds: filter_obj = filter_selector.filter field_condition = filter_obj.must[0] - # Verify all 1000 IDs are in the batch assert len(field_condition.match.any) == 1000 assert "doc_0" in field_condition.match.any assert "doc_999" in field_condition.match.any + def test_delete_by_ids_splits_into_batches(self, vector_instance): + """Test deletion with >1000 IDs triggers multiple batched calls.""" + ids = [f"doc_{i}" for i in range(2500)] + + vector_instance.delete_by_ids(ids) + + assert vector_instance._client.delete.call_count == 3 + + batches = [] + for call in vector_instance._client.delete.call_args_list: + filter_selector = call[1]["points_selector"] + field_condition = filter_selector.filter.must[0] + batches.append(field_condition.match.any) + + assert len(batches[0]) == 1000 + assert len(batches[1]) == 1000 + assert len(batches[2]) == 500 + def test_delete_by_ids_filter_structure(self, vector_instance): """Test that the filter structure is correctly constructed.""" ids = ["doc1", "doc2"] @@ -158,3 +172,57 @@ class TestTidbOnQdrantVectorDeleteByIds: # Verify MatchAny structure assert isinstance(field_condition.match, rest.MatchAny) assert field_condition.match.any == ids + + +class TestInitVectorEndpointSelection: + """Test that init_vector selects the correct qdrant endpoint. + + We avoid importing the full module (which triggers Flask app context) + by testing the endpoint selection logic directly on TidbOnQdrantConfig. + """ + + def test_uses_binding_endpoint_when_present(self): + binding_endpoint = "https://qdrant-custom.tidb.com" + global_url = "https://qdrant-global.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-custom.tidb.com" + config = TidbOnQdrantConfig(endpoint=qdrant_url) + assert config.endpoint == "https://qdrant-custom.tidb.com" + + def test_falls_back_to_global_when_binding_endpoint_is_none(self): + binding_endpoint = None + global_url = "https://qdrant-global.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-global.tidb.com" + config = TidbOnQdrantConfig(endpoint=qdrant_url) + assert config.endpoint == "https://qdrant-global.tidb.com" + + def test_falls_back_to_empty_when_both_none(self): + binding_endpoint = None + global_url = None + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "" + config = TidbOnQdrantConfig(endpoint=qdrant_url) + assert config.endpoint == "" + + def test_binding_endpoint_takes_precedence_over_global(self): + binding_endpoint = "https://qdrant-ap-southeast.tidb.com" + global_url = "https://qdrant-us-east.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-ap-southeast.tidb.com" + + def test_empty_string_binding_endpoint_falls_back_to_global(self): + binding_endpoint = "" + global_url = "https://qdrant-global.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-global.tidb.com" diff --git a/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_service.py b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_service.py new file mode 100644 index 0000000000..c1ffbacbbc --- /dev/null +++ b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_service.py @@ -0,0 +1,218 @@ +from unittest.mock import MagicMock, patch + +import pytest +from dify_vdb_tidb_on_qdrant.tidb_service import TidbService + + +class TestExtractQdrantEndpoint: + """Unit tests for TidbService.extract_qdrant_endpoint.""" + + def test_returns_endpoint_when_host_present(self): + response = {"endpoints": {"public": {"host": "gateway01.us-east-1.tidbcloud.com", "port": 4000}}} + result = TidbService.extract_qdrant_endpoint(response) + assert result == "https://qdrant-gateway01.us-east-1.tidbcloud.com" + + def test_returns_none_when_host_missing(self): + response = {"endpoints": {"public": {}}} + assert TidbService.extract_qdrant_endpoint(response) is None + + def test_returns_none_when_public_missing(self): + response = {"endpoints": {}} + assert TidbService.extract_qdrant_endpoint(response) is None + + def test_returns_none_when_endpoints_missing(self): + assert TidbService.extract_qdrant_endpoint({}) is None + + +class TestFetchQdrantEndpoint: + """Unit tests for TidbService.fetch_qdrant_endpoint.""" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_endpoint_when_host_present(self, mock_get_cluster): + mock_get_cluster.return_value = { + "endpoints": {"public": {"host": "gateway01.us-east-1.tidbcloud.com", "port": 4000}} + } + result = TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") + assert result == "https://qdrant-gateway01.us-east-1.tidbcloud.com" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_when_cluster_response_is_none(self, mock_get_cluster): + mock_get_cluster.return_value = None + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_when_host_missing(self, mock_get_cluster): + mock_get_cluster.return_value = {"endpoints": {"public": {}}} + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_when_endpoints_missing(self, mock_get_cluster): + mock_get_cluster.return_value = {} + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_on_exception(self, mock_get_cluster): + mock_get_cluster.side_effect = RuntimeError("network error") + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + +class TestCreateTidbServerlessClusterQdrantEndpoint: + """Verify that create_tidb_serverless_cluster includes qdrant_endpoint in its result.""" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_result_contains_qdrant_endpoint(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.return_value = { + "state": "ACTIVE", + "userPrefix": "pfx", + "endpoints": {"public": {"host": "gw.tidbcloud.com", "port": 4000}}, + } + + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is not None + assert result["qdrant_endpoint"] == "https://qdrant-gw.tidbcloud.com" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_result_qdrant_endpoint_none_when_no_endpoints(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.return_value = {"state": "ACTIVE", "userPrefix": "pfx"} + + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is not None + assert result["qdrant_endpoint"] is None + + +class TestBatchCreateTidbServerlessClusterQdrantEndpoint: + """Verify that batch_create includes qdrant_endpoint per cluster.""" + + @patch.object(TidbService, "fetch_qdrant_endpoint", return_value="https://qdrant-gw.tidbcloud.com") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.redis_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_batch_result_contains_qdrant_endpoint(self, mock_config, mock_http, mock_redis, mock_fetch_ep): + mock_config.TIDB_SPEND_LIMIT = 10 + cluster_name = "abc123" + mock_http.post.return_value = MagicMock( + status_code=200, + json=lambda: {"clusters": [{"clusterId": "c-1", "displayName": cluster_name}]}, + ) + mock_redis.setex = MagicMock() + mock_redis.get.return_value = b"password123" + + result = TidbService.batch_create_tidb_serverless_cluster( + batch_size=1, + project_id="proj", + api_url="url", + iam_url="iam", + public_key="pub", + private_key="priv", + region="us-east-1", + ) + + assert len(result) == 1 + assert result[0]["qdrant_endpoint"] == "https://qdrant-gw.tidbcloud.com" + + +class TestCreateTidbServerlessClusterRetry: + """Cover retry/logging paths in create_tidb_serverless_cluster.""" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_polls_until_active(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.side_effect = [ + {"state": "CREATING", "userPrefix": ""}, + {"state": "ACTIVE", "userPrefix": "pfx", "endpoints": {"public": {"host": "gw.tidb.com"}}}, + ] + + with patch("dify_vdb_tidb_on_qdrant.tidb_service.time.sleep"): + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is not None + assert result["qdrant_endpoint"] == "https://qdrant-gw.tidb.com" + assert mock_get_cluster.call_count == 2 + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_returns_none_after_max_retries(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.return_value = {"state": "CREATING", "userPrefix": ""} + + with patch("dify_vdb_tidb_on_qdrant.tidb_service.time.sleep"): + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is None + + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_raises_on_post_failure(self, mock_config, mock_http): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_response = MagicMock(status_code=400, text="Bad Request") + mock_response.raise_for_status.side_effect = Exception("HTTP 400") + mock_http.post.return_value = mock_response + + with pytest.raises(Exception, match="HTTP 400"): + TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + +class TestBatchCreateEdgeCases: + """Cover logging/edge-case branches in batch_create.""" + + @patch.object(TidbService, "fetch_qdrant_endpoint", return_value=None) + @patch("dify_vdb_tidb_on_qdrant.tidb_service.redis_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_skips_cluster_when_no_cached_password(self, mock_config, mock_http, mock_redis, mock_fetch_ep): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock( + status_code=200, + json=lambda: {"clusters": [{"clusterId": "c-1", "displayName": "name1"}]}, + ) + mock_redis.setex = MagicMock() + mock_redis.get.return_value = None + + result = TidbService.batch_create_tidb_serverless_cluster( + batch_size=1, + project_id="proj", + api_url="url", + iam_url="iam", + public_key="pub", + private_key="priv", + region="us-east-1", + ) + + assert len(result) == 0 + mock_fetch_ep.assert_not_called() + + @patch("dify_vdb_tidb_on_qdrant.tidb_service.redis_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_raises_on_post_failure(self, mock_config, mock_http, mock_redis): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_response = MagicMock(status_code=500, text="Server Error") + mock_response.raise_for_status.side_effect = Exception("HTTP 500") + mock_http.post.return_value = mock_response + mock_redis.setex = MagicMock() + + with pytest.raises(Exception, match="HTTP 500"): + TidbService.batch_create_tidb_serverless_cluster( + batch_size=1, + project_id="proj", + api_url="url", + iam_url="iam", + public_key="pub", + private_key="priv", + region="us-east-1", + ) diff --git a/api/providers/vdb/vdb-tidb-vector/pyproject.toml b/api/providers/vdb/vdb-tidb-vector/pyproject.toml new file mode 100644 index 0000000000..0e2f0ad88f --- /dev/null +++ b/api/providers/vdb/vdb-tidb-vector/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-tidb-vector" +version = "0.0.1" + +dependencies = [ + "tidb-vector==0.0.15", +] +description = "Dify vector store backend (dify-vdb-tidb-vector)." + +[project.entry-points."dify.vector_backends"] +tidb_vector = "dify_vdb_tidb_vector.tidb_vector:TiDBVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/tidb_on_qdrant/__init__.py b/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/tidb_on_qdrant/__init__.py rename to api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/__init__.py diff --git a/api/core/rag/datasource/vdb/tidb_vector/tidb_vector.py b/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py similarity index 97% rename from api/core/rag/datasource/vdb/tidb_vector/tidb_vector.py rename to api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py index c948917374..c696a685dd 100644 --- a/api/core/rag/datasource/vdb/tidb_vector/tidb_vector.py +++ b/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py @@ -6,7 +6,7 @@ import sqlalchemy from pydantic import BaseModel, model_validator from sqlalchemy import JSON, TEXT, Column, DateTime, String, Table, create_engine, insert from sqlalchemy import text as sql_text -from sqlalchemy.orm import Session, declarative_base +from sqlalchemy.orm import Session, declarative_base, sessionmaker from configs import dify_config from core.rag.datasource.vdb.field import Field, parse_metadata_json @@ -31,7 +31,7 @@ class TiDBVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config TIDB_VECTOR_HOST is required") if not values["port"]: @@ -97,8 +97,7 @@ class TiDBVector(BaseVector): if redis_client.get(collection_exist_cache_key): return tidb_dist_func = self._get_distance_func() - with Session(self._engine) as session: - session.begin() + with sessionmaker(bind=self._engine).begin() as session: create_statement = sql_text(f""" CREATE TABLE IF NOT EXISTS {self._collection_name} ( id CHAR(36) PRIMARY KEY, @@ -115,7 +114,6 @@ class TiDBVector(BaseVector): ); """) session.execute(create_statement) - session.commit() redis_client.set(collection_exist_cache_key, 1, ex=3600) def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs): @@ -238,9 +236,8 @@ class TiDBVector(BaseVector): return [] def delete(self): - with Session(self._engine) as session: + with sessionmaker(bind=self._engine).begin() as session: session.execute(sql_text(f"""DROP TABLE IF EXISTS {self._collection_name};""")) - session.commit() def _get_distance_func(self) -> str: match self._distance_func: diff --git a/api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py b/api/providers/vdb/vdb-tidb-vector/tests/integration_tests/check_tiflash_ready.py similarity index 72% rename from api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py rename to api/providers/vdb/vdb-tidb-vector/tests/integration_tests/check_tiflash_ready.py index f76700aa0e..97f8406e42 100644 --- a/api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py +++ b/api/providers/vdb/vdb-tidb-vector/tests/integration_tests/check_tiflash_ready.py @@ -1,9 +1,13 @@ +import logging import time import pymysql +logger = logging.getLogger(__name__) + def check_tiflash_ready() -> bool: + connection = None try: connection = pymysql.connect( host="localhost", @@ -23,8 +27,8 @@ def check_tiflash_ready() -> bool: cursor.execute(select_tiflash_query) result = cursor.fetchall() return result is not None and len(result) > 0 - except Exception as e: - print(f"TiFlash is not ready. Exception: {e}") + except Exception: + logger.exception("TiFlash is not ready.") return False finally: if connection: @@ -38,20 +42,20 @@ def main(): for attempt in range(max_attempts): try: is_tiflash_ready = check_tiflash_ready() - except Exception as e: - print(f"TiFlash is not ready. Exception: {e}") + except Exception: + logger.exception("TiFlash is not ready.") is_tiflash_ready = False if is_tiflash_ready: break else: - print(f"Attempt {attempt + 1} failed, retry in {retry_interval_seconds} seconds...") + logger.error("Attempt %s failed, retry in %s seconds...", attempt + 1, retry_interval_seconds) time.sleep(retry_interval_seconds) if is_tiflash_ready: - print("TiFlash is ready in TiDB.") + logger.info("TiFlash is ready in TiDB.") else: - print(f"TiFlash is not ready in TiDB after {max_attempts} attempting checks.") + logger.error("TiFlash is not ready in TiDB after %s attempting checks.", max_attempts) exit(1) diff --git a/api/tests/integration_tests/vdb/tidb_vector/test_tidb_vector.py b/api/providers/vdb/vdb-tidb-vector/tests/integration_tests/test_tidb_vector.py similarity index 77% rename from api/tests/integration_tests/vdb/tidb_vector/test_tidb_vector.py rename to api/providers/vdb/vdb-tidb-vector/tests/integration_tests/test_tidb_vector.py index 14c6d1c67c..ac854acbf9 100644 --- a/api/tests/integration_tests/vdb/tidb_vector/test_tidb_vector.py +++ b/api/providers/vdb/vdb-tidb-vector/tests/integration_tests/test_tidb_vector.py @@ -1,10 +1,8 @@ import pytest +from dify_vdb_tidb_vector.tidb_vector import TiDBVector, TiDBVectorConfig -from core.rag.datasource.vdb.tidb_vector.tidb_vector import TiDBVector, TiDBVectorConfig -from models.dataset import Document -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text - -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text +from core.rag.models.document import Document @pytest.fixture diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/tidb_vector/test_tidb_vector.py b/api/providers/vdb/vdb-tidb-vector/tests/unit_tests/test_tidb_vector.py similarity index 97% rename from api/tests/unit_tests/core/rag/datasource/vdb/tidb_vector/test_tidb_vector.py rename to api/providers/vdb/vdb-tidb-vector/tests/unit_tests/test_tidb_vector.py index 951a920f3b..bdbed2f740 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/tidb_vector/test_tidb_vector.py +++ b/api/providers/vdb/vdb-tidb-vector/tests/unit_tests/test_tidb_vector.py @@ -12,7 +12,7 @@ from core.rag.models.document import Document @pytest.fixture def tidb_module(): - import core.rag.datasource.vdb.tidb_vector.tidb_vector as module + import dify_vdb_tidb_vector.tidb_vector as module return importlib.reload(module) @@ -137,14 +137,15 @@ def test_create_collection_executes_create_sql_and_sets_cache(tidb_module, monke session = MagicMock() - class _SessionCtx: + class _BeginCtx: def __enter__(self): return session def __exit__(self, exc_type, exc, tb): return False - monkeypatch.setattr(tidb_module, "Session", lambda _engine: _SessionCtx()) + mock_sm = MagicMock(begin=MagicMock(return_value=_BeginCtx())) + monkeypatch.setattr(tidb_module, "sessionmaker", lambda **kwargs: mock_sm) vector = tidb_module.TiDBVector.__new__(tidb_module.TiDBVector) vector._collection_name = "collection_1" @@ -153,11 +154,9 @@ def test_create_collection_executes_create_sql_and_sets_cache(tidb_module, monke vector._create_collection(3) - session.begin.assert_called_once() sql = str(session.execute.call_args.args[0]) assert "VECTOR(3)" in sql assert "VEC_L2_DISTANCE" in sql - session.commit.assert_called_once() tidb_module.redis_client.set.assert_called_once() @@ -396,23 +395,22 @@ def test_search_by_vector_filters_and_scores(tidb_module, monkeypatch): def test_delete_drops_table(tidb_module, monkeypatch): session = MagicMock() session.execute.return_value = None - session.commit = MagicMock() - class _SessionCtx: + class _BeginCtx: def __enter__(self): return session def __exit__(self, exc_type, exc, tb): return False - monkeypatch.setattr(tidb_module, "Session", lambda _engine: _SessionCtx()) + mock_sm = MagicMock(begin=MagicMock(return_value=_BeginCtx())) + monkeypatch.setattr(tidb_module, "sessionmaker", lambda **kwargs: mock_sm) vector = tidb_module.TiDBVector.__new__(tidb_module.TiDBVector) vector._collection_name = "collection_1" vector._engine = MagicMock() vector.delete() drop_sql = str(session.execute.call_args.args[0]) assert "DROP TABLE IF EXISTS collection_1" in drop_sql - session.commit.assert_called_once() def test_tidb_factory_uses_existing_or_generated_collection(tidb_module, monkeypatch): diff --git a/api/providers/vdb/vdb-upstash/pyproject.toml b/api/providers/vdb/vdb-upstash/pyproject.toml new file mode 100644 index 0000000000..f71773cdbb --- /dev/null +++ b/api/providers/vdb/vdb-upstash/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-upstash" +version = "0.0.1" + +dependencies = [ + "upstash-vector==0.8.0", +] +description = "Dify vector store backend (dify-vdb-upstash)." + +[project.entry-points."dify.vector_backends"] +upstash = "dify_vdb_upstash.upstash_vector:UpstashVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/tidb_vector/__init__.py b/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/tidb_vector/__init__.py rename to api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/__init__.py diff --git a/api/core/rag/datasource/vdb/upstash/upstash_vector.py b/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py similarity index 98% rename from api/core/rag/datasource/vdb/upstash/upstash_vector.py rename to api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py index 289d971853..75d70a1964 100644 --- a/api/core/rag/datasource/vdb/upstash/upstash_vector.py +++ b/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py @@ -20,7 +20,7 @@ class UpstashVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["url"]: raise ValueError("Upstash URL is required") if not values["token"]: diff --git a/api/tests/integration_tests/vdb/__mock/upstashvectordb.py b/api/providers/vdb/vdb-upstash/tests/integration_tests/conftest.py similarity index 94% rename from api/tests/integration_tests/vdb/__mock/upstashvectordb.py rename to api/providers/vdb/vdb-upstash/tests/integration_tests/conftest.py index 70c85d4c98..adba0c150c 100644 --- a/api/tests/integration_tests/vdb/__mock/upstashvectordb.py +++ b/api/providers/vdb/vdb-upstash/tests/integration_tests/conftest.py @@ -6,7 +6,6 @@ from _pytest.monkeypatch import MonkeyPatch from upstash_vector import Index -# Mocking the Index class from upstash_vector class MockIndex: def __init__(self, url="", token=""): self.url = url @@ -37,7 +36,6 @@ class MockIndex: namespace: str = "", include_data: bool = False, ): - # Simple mock query, in real scenario you would calculate similarity mock_result = [] for vector_data in self.vectors: mock_result.append(vector_data) diff --git a/api/tests/integration_tests/vdb/upstash/test_upstash_vector.py b/api/providers/vdb/vdb-upstash/tests/integration_tests/test_upstash_vector.py similarity index 75% rename from api/tests/integration_tests/vdb/upstash/test_upstash_vector.py rename to api/providers/vdb/vdb-upstash/tests/integration_tests/test_upstash_vector.py index 8cea0a05eb..f4a65030b6 100644 --- a/api/tests/integration_tests/vdb/upstash/test_upstash_vector.py +++ b/api/providers/vdb/vdb-upstash/tests/integration_tests/test_upstash_vector.py @@ -1,8 +1,7 @@ -from core.rag.datasource.vdb.upstash.upstash_vector import UpstashVector, UpstashVectorConfig -from core.rag.models.document import Document -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text +from dify_vdb_upstash.upstash_vector import UpstashVector, UpstashVectorConfig -pytest_plugins = ("tests.integration_tests.vdb.__mock.upstashvectordb",) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text +from core.rag.models.document import Document class UpstashVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/upstash/test_upstash_vector.py b/api/providers/vdb/vdb-upstash/tests/unit_tests/test_upstash_vector.py similarity index 97% rename from api/tests/unit_tests/core/rag/datasource/vdb/upstash/test_upstash_vector.py rename to api/providers/vdb/vdb-upstash/tests/unit_tests/test_upstash_vector.py index ac8a63a44b..a884275c89 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/upstash/test_upstash_vector.py +++ b/api/providers/vdb/vdb-upstash/tests/unit_tests/test_upstash_vector.py @@ -38,11 +38,11 @@ def _build_fake_upstash_module(): @pytest.fixture def upstash_module(monkeypatch): # Remove patched modules if present - for modname in ["upstash_vector", "core.rag.datasource.vdb.upstash.upstash_vector"]: + for modname in ["upstash_vector", "dify_vdb_upstash.upstash_vector"]: if modname in sys.modules: monkeypatch.delitem(sys.modules, modname, raising=False) monkeypatch.setitem(sys.modules, "upstash_vector", _build_fake_upstash_module()) - module = importlib.import_module("core.rag.datasource.vdb.upstash.upstash_vector") + module = importlib.import_module("dify_vdb_upstash.upstash_vector") return module diff --git a/api/providers/vdb/vdb-vastbase/pyproject.toml b/api/providers/vdb/vdb-vastbase/pyproject.toml new file mode 100644 index 0000000000..287eb147dc --- /dev/null +++ b/api/providers/vdb/vdb-vastbase/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-vastbase" +version = "0.0.1" + +dependencies = [ + "pyobvector~=0.2.17", +] +description = "Dify vector store backend (dify-vdb-vastbase)." + +[project.entry-points."dify.vector_backends"] +vastbase = "dify_vdb_vastbase.vastbase_vector:VastbaseVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/upstash/__init__.py b/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/upstash/__init__.py rename to api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/__init__.py diff --git a/api/core/rag/datasource/vdb/pyvastbase/vastbase_vector.py b/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py similarity index 99% rename from api/core/rag/datasource/vdb/pyvastbase/vastbase_vector.py rename to api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py index d080e8da58..ab00f9db28 100644 --- a/api/core/rag/datasource/vdb/pyvastbase/vastbase_vector.py +++ b/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py @@ -28,7 +28,7 @@ class VastbaseVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config VASTBASE_HOST is required") if not values["port"]: diff --git a/api/tests/integration_tests/vdb/pyvastbase/test_vastbase_vector.py b/api/providers/vdb/vdb-vastbase/tests/integration_tests/test_vastbase_vector.py similarity index 72% rename from api/tests/integration_tests/vdb/pyvastbase/test_vastbase_vector.py rename to api/providers/vdb/vdb-vastbase/tests/integration_tests/test_vastbase_vector.py index a47f13625c..0467dec37a 100644 --- a/api/tests/integration_tests/vdb/pyvastbase/test_vastbase_vector.py +++ b/api/providers/vdb/vdb-vastbase/tests/integration_tests/test_vastbase_vector.py @@ -1,10 +1,9 @@ -from core.rag.datasource.vdb.pyvastbase.vastbase_vector import VastbaseVector, VastbaseVectorConfig -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_vastbase.vastbase_vector import VastbaseVector, VastbaseVectorConfig + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class VastbaseVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/pyvastbase/test_vastbase_vector.py b/api/providers/vdb/vdb-vastbase/tests/unit_tests/test_vastbase_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/pyvastbase/test_vastbase_vector.py rename to api/providers/vdb/vdb-vastbase/tests/unit_tests/test_vastbase_vector.py index bd8df520ba..4dfb956c00 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/pyvastbase/test_vastbase_vector.py +++ b/api/providers/vdb/vdb-vastbase/tests/unit_tests/test_vastbase_vector.py @@ -41,7 +41,7 @@ def vastbase_module(monkeypatch): for name, module in _build_fake_psycopg2_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.pyvastbase.vastbase_vector as module + import dify_vdb_vastbase.vastbase_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-vikingdb/pyproject.toml b/api/providers/vdb/vdb-vikingdb/pyproject.toml new file mode 100644 index 0000000000..fdf59f76a4 --- /dev/null +++ b/api/providers/vdb/vdb-vikingdb/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-vikingdb" +version = "0.0.1" + +dependencies = [ + "volcengine-compat~=1.0.0", +] +description = "Dify vector store backend (dify-vdb-vikingdb)." + +[project.entry-points."dify.vector_backends"] +vikingdb = "dify_vdb_vikingdb.vikingdb_vector:VikingDBVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/vikingdb/__init__.py b/api/providers/vdb/vdb-vikingdb/src/dify_vdb_vikingdb/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/vikingdb/__init__.py rename to api/providers/vdb/vdb-vikingdb/src/dify_vdb_vikingdb/__init__.py diff --git a/api/core/rag/datasource/vdb/vikingdb/vikingdb_vector.py b/api/providers/vdb/vdb-vikingdb/src/dify_vdb_vikingdb/vikingdb_vector.py similarity index 100% rename from api/core/rag/datasource/vdb/vikingdb/vikingdb_vector.py rename to api/providers/vdb/vdb-vikingdb/src/dify_vdb_vikingdb/vikingdb_vector.py diff --git a/api/tests/integration_tests/vdb/__mock/vikingdb.py b/api/providers/vdb/vdb-vikingdb/tests/integration_tests/conftest.py similarity index 100% rename from api/tests/integration_tests/vdb/__mock/vikingdb.py rename to api/providers/vdb/vdb-vikingdb/tests/integration_tests/conftest.py diff --git a/api/tests/integration_tests/vdb/vikingdb/test_vikingdb.py b/api/providers/vdb/vdb-vikingdb/tests/integration_tests/test_vikingdb.py similarity index 78% rename from api/tests/integration_tests/vdb/vikingdb/test_vikingdb.py rename to api/providers/vdb/vdb-vikingdb/tests/integration_tests/test_vikingdb.py index 56311acd25..5a3908d14b 100644 --- a/api/tests/integration_tests/vdb/vikingdb/test_vikingdb.py +++ b/api/providers/vdb/vdb-vikingdb/tests/integration_tests/test_vikingdb.py @@ -1,10 +1,6 @@ -from core.rag.datasource.vdb.vikingdb.vikingdb_vector import VikingDBConfig, VikingDBVector -from tests.integration_tests.vdb.test_vector_store import AbstractVectorTest, get_example_text +from dify_vdb_vikingdb.vikingdb_vector import VikingDBConfig, VikingDBVector -pytest_plugins = ( - "tests.integration_tests.vdb.test_vector_store", - "tests.integration_tests.vdb.__mock.vikingdb", -) +from core.rag.datasource.vdb.vector_integration_test_support import AbstractVectorTest, get_example_text class VikingDBVectorTest(AbstractVectorTest): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/vikingdb/test_vikingdb_vector.py b/api/providers/vdb/vdb-vikingdb/tests/unit_tests/test_vikingdb_vector.py similarity index 99% rename from api/tests/unit_tests/core/rag/datasource/vdb/vikingdb/test_vikingdb_vector.py rename to api/providers/vdb/vdb-vikingdb/tests/unit_tests/test_vikingdb_vector.py index 9da92af2d0..544b8163be 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/vikingdb/test_vikingdb_vector.py +++ b/api/providers/vdb/vdb-vikingdb/tests/unit_tests/test_vikingdb_vector.py @@ -83,7 +83,7 @@ def vikingdb_module(monkeypatch): for name, module in _build_fake_vikingdb_modules().items(): monkeypatch.setitem(sys.modules, name, module) - import core.rag.datasource.vdb.vikingdb.vikingdb_vector as module + import dify_vdb_vikingdb.vikingdb_vector as module return importlib.reload(module) diff --git a/api/providers/vdb/vdb-weaviate/pyproject.toml b/api/providers/vdb/vdb-weaviate/pyproject.toml new file mode 100644 index 0000000000..035fbd396d --- /dev/null +++ b/api/providers/vdb/vdb-weaviate/pyproject.toml @@ -0,0 +1,14 @@ +[project] +name = "dify-vdb-weaviate" +version = "0.0.1" + +dependencies = [ + "weaviate-client==4.20.5", +] +description = "Dify vector store backend (dify-vdb-weaviate)." + +[project.entry-points."dify.vector_backends"] +weaviate = "dify_vdb_weaviate.weaviate_vector:WeaviateVectorFactory" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/api/core/rag/datasource/vdb/weaviate/__init__.py b/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/__init__.py similarity index 100% rename from api/core/rag/datasource/vdb/weaviate/__init__.py rename to api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/__init__.py diff --git a/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py b/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py similarity index 90% rename from api/core/rag/datasource/vdb/weaviate/weaviate_vector.py rename to api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py index 25b65b82a9..902e6a03a8 100644 --- a/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py +++ b/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py @@ -20,7 +20,7 @@ from pydantic import BaseModel, model_validator from weaviate.classes.data import DataObject from weaviate.classes.init import Auth from weaviate.classes.query import Filter, MetadataQuery -from weaviate.exceptions import UnexpectedStatusCodeError +from weaviate.exceptions import UnexpectedStatusCodeError, WeaviateQueryError from configs import dify_config from core.rag.datasource.vdb.field import Field @@ -82,7 +82,7 @@ class WeaviateConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict) -> dict: + def validate_config(cls, values: dict[str, Any]) -> dict[str, Any]: """Validates that required configuration values are present.""" if not values["endpoint"]: raise ValueError("config WEAVIATE_ENDPOINT is required") @@ -230,6 +230,8 @@ class WeaviateVector(BaseVector): wc.Property(name="doc_id", data_type=wc.DataType.TEXT), wc.Property(name="doc_type", data_type=wc.DataType.TEXT), wc.Property(name="chunk_index", data_type=wc.DataType.INT), + wc.Property(name="is_summary", data_type=wc.DataType.BOOL), + wc.Property(name="original_chunk_id", data_type=wc.DataType.TEXT), ], vector_config=wc.Configure.Vectors.self_provided(), ) @@ -262,6 +264,10 @@ class WeaviateVector(BaseVector): to_add.append(wc.Property(name="doc_type", data_type=wc.DataType.TEXT)) if "chunk_index" not in existing: to_add.append(wc.Property(name="chunk_index", data_type=wc.DataType.INT)) + if "is_summary" not in existing: + to_add.append(wc.Property(name="is_summary", data_type=wc.DataType.BOOL)) + if "original_chunk_id" not in existing: + to_add.append(wc.Property(name="original_chunk_id", data_type=wc.DataType.TEXT)) for prop in to_add: try: @@ -400,15 +406,27 @@ class WeaviateVector(BaseVector): top_k = int(kwargs.get("top_k", 4)) score_threshold = float(kwargs.get("score_threshold") or 0.0) - res = col.query.near_vector( - near_vector=query_vector, - limit=top_k, - return_properties=props, - return_metadata=MetadataQuery(distance=True), - include_vector=False, - filters=where, - target_vector="default", - ) + try: + res = col.query.near_vector( + near_vector=query_vector, + limit=top_k, + return_properties=props, + return_metadata=MetadataQuery(distance=True), + include_vector=False, + filters=where, + target_vector="default", + ) + except WeaviateQueryError: + self._ensure_properties() + res = col.query.near_vector( + near_vector=query_vector, + limit=top_k, + return_properties=props, + return_metadata=MetadataQuery(distance=True), + include_vector=False, + filters=where, + target_vector="default", + ) docs: list[Document] = [] for obj in res.objects: @@ -446,14 +464,25 @@ class WeaviateVector(BaseVector): top_k = int(kwargs.get("top_k", 4)) - res = col.query.bm25( - query=query, - query_properties=[Field.TEXT_KEY.value], - limit=top_k, - return_properties=props, - include_vector=True, - filters=where, - ) + try: + res = col.query.bm25( + query=query, + query_properties=[Field.TEXT_KEY.value], + limit=top_k, + return_properties=props, + include_vector=True, + filters=where, + ) + except WeaviateQueryError: + self._ensure_properties() + res = col.query.bm25( + query=query, + query_properties=[Field.TEXT_KEY.value], + limit=top_k, + return_properties=props, + include_vector=True, + filters=where, + ) docs: list[Document] = [] for obj in res.objects: diff --git a/api/tests/integration_tests/vdb/weaviate/test_weaviate.py b/api/providers/vdb/vdb-weaviate/tests/integration_tests/test_weaviate.py similarity index 72% rename from api/tests/integration_tests/vdb/weaviate/test_weaviate.py rename to api/providers/vdb/vdb-weaviate/tests/integration_tests/test_weaviate.py index a1d9850979..631d23d653 100644 --- a/api/tests/integration_tests/vdb/weaviate/test_weaviate.py +++ b/api/providers/vdb/vdb-weaviate/tests/integration_tests/test_weaviate.py @@ -1,10 +1,9 @@ -from core.rag.datasource.vdb.weaviate.weaviate_vector import WeaviateConfig, WeaviateVector -from tests.integration_tests.vdb.test_vector_store import ( +from dify_vdb_weaviate.weaviate_vector import WeaviateConfig, WeaviateVector + +from core.rag.datasource.vdb.vector_integration_test_support import ( AbstractVectorTest, ) -pytest_plugins = ("tests.integration_tests.vdb.test_vector_store",) - class WeaviateVectorTest(AbstractVectorTest): def __init__(self): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/weaviate/test_weavaite.py b/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weavaite.py similarity index 92% rename from api/tests/unit_tests/core/rag/datasource/vdb/weaviate/test_weavaite.py rename to api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weavaite.py index baf8c9e5f8..c773e4d552 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/weaviate/test_weavaite.py +++ b/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weavaite.py @@ -1,6 +1,6 @@ from unittest.mock import MagicMock, patch -from core.rag.datasource.vdb.weaviate.weaviate_vector import WeaviateConfig, WeaviateVector +from dify_vdb_weaviate.weaviate_vector import WeaviateConfig, WeaviateVector def test_init_client_with_valid_config(): diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/weaviate/test_weaviate_vector.py b/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py similarity index 84% rename from api/tests/unit_tests/core/rag/datasource/vdb/weaviate/test_weaviate_vector.py rename to api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py index 69d1833001..b40f7e52ca 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/weaviate/test_weaviate_vector.py +++ b/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py @@ -14,9 +14,9 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest +from dify_vdb_weaviate import weaviate_vector as weaviate_vector_module +from dify_vdb_weaviate.weaviate_vector import WeaviateConfig, WeaviateVector -from core.rag.datasource.vdb.weaviate import weaviate_vector as weaviate_vector_module -from core.rag.datasource.vdb.weaviate.weaviate_vector import WeaviateConfig, WeaviateVector from core.rag.models.document import Document @@ -40,7 +40,7 @@ class TestWeaviateVector(unittest.TestCase): with pytest.raises(ValueError, match="config WEAVIATE_ENDPOINT is required"): WeaviateConfig(endpoint="") - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def _create_weaviate_vector(self, mock_weaviate_module): """Helper to create a WeaviateVector instance with mocked client.""" mock_client = MagicMock() @@ -66,7 +66,7 @@ class TestWeaviateVector(unittest.TestCase): mock_client.close.assert_called_once() mock_debug.assert_called_once() - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate.connect_to_custom") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate.connect_to_custom") def test_init_client_reuses_cached_client_without_reconnect(self, mock_connect): cached_client = MagicMock() cached_client.is_ready.return_value = True @@ -79,7 +79,7 @@ class TestWeaviateVector(unittest.TestCase): assert client is cached_client mock_connect.assert_not_called() - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate.connect_to_custom") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate.connect_to_custom") def test_init_client_reuses_cached_client_after_lock_recheck(self, mock_connect): cached_client = MagicMock() cached_client.is_ready.side_effect = [False, True] @@ -92,8 +92,8 @@ class TestWeaviateVector(unittest.TestCase): assert client is cached_client mock_connect.assert_not_called() - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.Auth.api_key", return_value="auth-token") - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate.connect_to_custom") + @patch("dify_vdb_weaviate.weaviate_vector.Auth.api_key", return_value="auth-token") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate.connect_to_custom") def test_init_client_parses_custom_grpc_endpoint_without_scheme(self, mock_connect, mock_api_key): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -122,7 +122,7 @@ class TestWeaviateVector(unittest.TestCase): } mock_api_key.assert_called_once_with("test-key") - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate.connect_to_custom") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate.connect_to_custom") def test_init_client_raises_when_database_not_ready(self, mock_connect): mock_client = MagicMock() mock_client.is_ready.return_value = False @@ -133,7 +133,7 @@ class TestWeaviateVector(unittest.TestCase): with pytest.raises(ConnectionError, match="Vector database is not ready"): wv._init_client(self.config) - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_init(self, mock_weaviate_module): """Test WeaviateVector initialization stores attributes including doc_type.""" mock_client = MagicMock() @@ -183,9 +183,9 @@ class TestWeaviateVector(unittest.TestCase): wv._create_collection.assert_called_once() wv.add_texts.assert_called_once_with([doc], [[0.1, 0.2]]) - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.redis_client") - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.dify_config") - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.redis_client") + @patch("dify_vdb_weaviate.weaviate_vector.dify_config") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_create_collection_includes_doc_type_property(self, mock_weaviate_module, mock_dify_config, mock_redis): """Test that _create_collection defines doc_type in the schema properties.""" # Mock Redis @@ -232,7 +232,7 @@ class TestWeaviateVector(unittest.TestCase): f"doc_type should be in collection schema properties, got: {property_names}" ) - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.redis_client") + @patch("dify_vdb_weaviate.weaviate_vector.redis_client") def test_create_collection_returns_early_when_cache_key_exists(self, mock_redis): mock_lock = MagicMock() mock_lock.__enter__ = MagicMock() @@ -251,7 +251,7 @@ class TestWeaviateVector(unittest.TestCase): wv._ensure_properties.assert_not_called() mock_redis.set.assert_not_called() - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.redis_client") + @patch("dify_vdb_weaviate.weaviate_vector.redis_client") def test_create_collection_logs_and_reraises_errors(self, mock_redis): mock_lock = MagicMock() mock_lock.__enter__ = MagicMock() @@ -270,7 +270,7 @@ class TestWeaviateVector(unittest.TestCase): mock_exception.assert_called_once() - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_ensure_properties_adds_missing_doc_type(self, mock_weaviate_module): """Test that _ensure_properties adds doc_type when it's missing from existing schema.""" mock_client = MagicMock() @@ -305,7 +305,7 @@ class TestWeaviateVector(unittest.TestCase): added_names = [call.args[0].name for call in add_calls] assert "doc_type" in added_names, f"doc_type should be added to existing collection, added: {added_names}" - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_ensure_properties_adds_all_missing_core_properties(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -326,9 +326,9 @@ class TestWeaviateVector(unittest.TestCase): add_calls = mock_col.config.add_property.call_args_list added_names = [call.args[0].name for call in add_calls] - assert added_names == ["document_id", "doc_id", "doc_type", "chunk_index"] + assert added_names == ["document_id", "doc_id", "doc_type", "chunk_index", "is_summary", "original_chunk_id"] - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_ensure_properties_skips_existing_doc_type(self, mock_weaviate_module): """Test that _ensure_properties does not add doc_type when it already exists.""" mock_client = MagicMock() @@ -346,6 +346,8 @@ class TestWeaviateVector(unittest.TestCase): SimpleNamespace(name="doc_id"), SimpleNamespace(name="doc_type"), SimpleNamespace(name="chunk_index"), + SimpleNamespace(name="is_summary"), + SimpleNamespace(name="original_chunk_id"), ] mock_cfg = MagicMock() mock_cfg.properties = existing_props @@ -361,7 +363,7 @@ class TestWeaviateVector(unittest.TestCase): # No properties should be added mock_col.config.add_property.assert_not_called() - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_ensure_properties_logs_warning_when_property_addition_fails(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -383,9 +385,9 @@ class TestWeaviateVector(unittest.TestCase): with patch.object(weaviate_vector_module.logger, "warning") as mock_warning: wv._ensure_properties() - assert mock_warning.call_count == 4 + assert mock_warning.call_count == 6 - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_vector_returns_doc_type_in_metadata(self, mock_weaviate_module): """Test that search_by_vector returns doc_type in document metadata. @@ -432,7 +434,7 @@ class TestWeaviateVector(unittest.TestCase): assert len(docs) == 1 assert docs[0].metadata.get("doc_type") == "image" - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_vector_uses_document_filter_and_default_distance(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -469,7 +471,7 @@ class TestWeaviateVector(unittest.TestCase): assert docs[0].metadata["score"] == 0.0 assert mock_col.query.near_vector.call_args.kwargs["filters"] is not None - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_vector_returns_empty_when_collection_is_missing(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -484,7 +486,57 @@ class TestWeaviateVector(unittest.TestCase): assert wv.search_by_vector(query_vector=[0.1] * 3) == [] - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") + def test_search_by_vector_retries_on_weaviate_query_error(self, mock_weaviate_module): + """Test that search_by_vector catches WeaviateQueryError, calls _ensure_properties, and retries.""" + from weaviate.exceptions import WeaviateQueryError + + mock_client = MagicMock() + mock_client.is_ready.return_value = True + mock_weaviate_module.connect_to_custom.return_value = mock_client + + mock_client.collections.exists.return_value = True + mock_col = MagicMock() + mock_client.collections.use.return_value = mock_col + + # First call raises WeaviateQueryError, second call succeeds + mock_obj = MagicMock() + mock_obj.properties = {"text": "retry result", "document_id": "doc-1"} + mock_obj.metadata.distance = 0.2 + + mock_result = MagicMock() + mock_result.objects = [mock_obj] + + mock_col.query.near_vector.side_effect = [ + WeaviateQueryError("missing property", "gRPC"), + mock_result, + ] + + # Mock _ensure_properties dependencies + mock_cfg = MagicMock() + mock_cfg.properties = [ + SimpleNamespace(name="text"), + SimpleNamespace(name="document_id"), + SimpleNamespace(name="doc_id"), + SimpleNamespace(name="doc_type"), + SimpleNamespace(name="chunk_index"), + SimpleNamespace(name="is_summary"), + SimpleNamespace(name="original_chunk_id"), + ] + mock_col.config.get.return_value = mock_cfg + + wv = WeaviateVector( + collection_name=self.collection_name, + config=self.config, + attributes=self.attributes, + ) + docs = wv.search_by_vector(query_vector=[0.1] * 3, top_k=1) + + assert mock_col.query.near_vector.call_count == 2 + assert len(docs) == 1 + assert docs[0].metadata["score"] == pytest.approx(0.8) + + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_full_text_returns_doc_type_in_metadata(self, mock_weaviate_module): """Test that search_by_full_text also returns doc_type in document metadata.""" mock_client = MagicMock() @@ -526,7 +578,7 @@ class TestWeaviateVector(unittest.TestCase): assert len(docs) == 1 assert docs[0].metadata.get("doc_type") == "image" - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_full_text_uses_document_filter(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -554,7 +606,7 @@ class TestWeaviateVector(unittest.TestCase): assert docs[0].vector == [0.3, 0.4] assert mock_col.query.bm25.call_args.kwargs["filters"] is not None - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_full_text_returns_empty_when_collection_is_missing(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -569,7 +621,57 @@ class TestWeaviateVector(unittest.TestCase): assert wv.search_by_full_text(query="missing") == [] - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") + def test_search_by_full_text_retries_on_weaviate_query_error(self, mock_weaviate_module): + """Test that search_by_full_text catches WeaviateQueryError, calls _ensure_properties, and retries.""" + from weaviate.exceptions import WeaviateQueryError + + mock_client = MagicMock() + mock_client.is_ready.return_value = True + mock_weaviate_module.connect_to_custom.return_value = mock_client + + mock_client.collections.exists.return_value = True + mock_col = MagicMock() + mock_client.collections.use.return_value = mock_col + + # First call raises WeaviateQueryError, second call succeeds + mock_obj = MagicMock() + mock_obj.properties = {"text": "retry bm25 result", "doc_id": "segment-1"} + mock_obj.vector = {"default": [0.5, 0.6]} + + mock_result = MagicMock() + mock_result.objects = [mock_obj] + + mock_col.query.bm25.side_effect = [ + WeaviateQueryError("missing property", "gRPC"), + mock_result, + ] + + # Mock _ensure_properties dependencies + mock_cfg = MagicMock() + mock_cfg.properties = [ + SimpleNamespace(name="text"), + SimpleNamespace(name="document_id"), + SimpleNamespace(name="doc_id"), + SimpleNamespace(name="doc_type"), + SimpleNamespace(name="chunk_index"), + SimpleNamespace(name="is_summary"), + SimpleNamespace(name="original_chunk_id"), + ] + mock_col.config.get.return_value = mock_cfg + + wv = WeaviateVector( + collection_name=self.collection_name, + config=self.config, + attributes=self.attributes, + ) + docs = wv.search_by_full_text(query="retry", top_k=1) + + assert mock_col.query.bm25.call_count == 2 + assert len(docs) == 1 + assert docs[0].page_content == "retry bm25 result" + + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_add_texts_stores_doc_type_in_properties(self, mock_weaviate_module): """Test that add_texts includes doc_type from document metadata in stored properties.""" mock_client = MagicMock() @@ -611,7 +713,7 @@ class TestWeaviateVector(unittest.TestCase): stored_props = call_kwargs.kwargs.get("properties") assert stored_props.get("doc_type") == "image", f"doc_type should be stored in properties, got: {stored_props}" - @patch("core.rag.datasource.vdb.weaviate.weaviate_vector.weaviate") + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_add_texts_falls_back_to_random_uuid_and_serializes_datetime_metadata(self, mock_weaviate_module): mock_client = MagicMock() mock_client.is_ready.return_value = True @@ -635,7 +737,7 @@ class TestWeaviateVector(unittest.TestCase): with ( patch.object(wv, "_get_uuids", return_value=["not-a-uuid"]), - patch("core.rag.datasource.vdb.weaviate.weaviate_vector._uuid.uuid4", return_value="fallback-uuid"), + patch("dify_vdb_weaviate.weaviate_vector._uuid.uuid4", return_value="fallback-uuid"), ): ids = wv.add_texts(documents=[doc], embeddings=[[]]) @@ -775,9 +877,7 @@ class TestWeaviateVectorFactory(unittest.TestCase): patch.object(weaviate_vector_module.dify_config, "WEAVIATE_GRPC_ENDPOINT", "localhost:50051"), patch.object(weaviate_vector_module.dify_config, "WEAVIATE_API_KEY", "api-key"), patch.object(weaviate_vector_module.dify_config, "WEAVIATE_BATCH_SIZE", 88), - patch( - "core.rag.datasource.vdb.weaviate.weaviate_vector.WeaviateVector", return_value="vector" - ) as mock_vector, + patch("dify_vdb_weaviate.weaviate_vector.WeaviateVector", return_value="vector") as mock_vector, ): factory = weaviate_vector_module.WeaviateVectorFactory() result = factory.init_vector(dataset, attributes, MagicMock()) @@ -806,9 +906,7 @@ class TestWeaviateVectorFactory(unittest.TestCase): "gen_collection_name_by_id", return_value="GeneratedCollection_Node", ), - patch( - "core.rag.datasource.vdb.weaviate.weaviate_vector.WeaviateVector", return_value="vector" - ) as mock_vector, + patch("dify_vdb_weaviate.weaviate_vector.WeaviateVector", return_value="vector") as mock_vector, ): factory = weaviate_vector_module.WeaviateVectorFactory() result = factory.init_vector(dataset, attributes, MagicMock()) diff --git a/api/pyproject.toml b/api/pyproject.toml index dab420fc87..a1ceea181e 100644 --- a/api/pyproject.toml +++ b/api/pyproject.toml @@ -4,93 +4,55 @@ version = "1.13.3" requires-python = "~=3.12.0" dependencies = [ - "aliyun-log-python-sdk~=0.9.37", + # Legacy: mature and widely deployed + "bleach>=6.3.0", + "boto3>=1.42.88", + "celery>=5.6.3", + "croniter>=6.2.2", + "flask-cors>=6.0.2", + "gevent>=26.4.0", + "gevent-websocket>=0.10.1", + "gmpy2>=2.3.0", + "google-api-python-client>=2.194.0", + "gunicorn>=25.3.0", + "psycogreen>=1.0.2", + "psycopg2-binary>=2.9.11", + "python-socketio>=5.13.0", + "redis[hiredis]>=7.4.0", + "sendgrid>=6.12.5", + "sseclient-py>=1.8.0", + + # Stable: production-proven, cap below the next major + "aliyun-log-python-sdk>=0.9.44,<1.0.0", + "azure-identity>=1.25.3,<2.0.0", + "flask-compress>=1.24,<2.0.0", + "flask-login>=0.6.3,<1.0.0", + "flask-migrate>=4.1.0,<5.0.0", + "flask-orjson>=2.0.0,<3.0.0", + "flask-restx>=1.3.2,<2.0.0", + "google-cloud-aiplatform>=1.147.0,<2.0.0", + "httpx[socks]>=0.28.1,<1.0.0", + "langfuse>=4.2.0,<5.0.0", + "langsmith>=0.7.31,<1.0.0", + "mlflow-skinny>=3.11.1,<4.0.0", + "opentelemetry-distro>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-celery>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-flask>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-httpx>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-redis>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-sqlalchemy>=0.62b0,<1.0.0", + "opentelemetry-propagator-b3>=1.41.0,<2.0.0", + "readabilipy>=0.3.0,<1.0.0", + "resend>=2.27.0,<3.0.0", + "weave>=0.52.36,<1.0.0", + + # Emerging: newer and fast-moving, use compatible pins "arize-phoenix-otel~=0.15.0", - "azure-identity==1.25.3", - "beautifulsoup4==4.14.3", - "boto3==1.42.83", - "bs4~=0.0.1", - "cachetools~=5.3.0", - "celery~=5.6.2", - "charset-normalizer>=3.4.4", - "flask~=3.1.2", - "flask-compress>=1.17,<1.25", - "flask-cors~=6.0.0", - "flask-login~=0.6.3", - "flask-migrate~=4.1.0", - "flask-orjson~=2.0.0", - "flask-sqlalchemy~=3.1.1", - "gevent~=25.9.1", - "gmpy2~=2.3.0", - "google-api-core>=2.19.1", - "google-api-python-client==2.193.0", - "google-auth>=2.47.0", - "google-auth-httplib2==0.3.1", - "google-cloud-aiplatform>=1.123.0", - "googleapis-common-protos>=1.65.0", - "graphon>=0.1.2", - "gunicorn~=25.3.0", - "httpx[socks]~=0.28.0", - "jieba==0.42.1", - "json-repair>=0.55.1", - "langfuse>=3.0.0,<5.0.0", - "langsmith~=0.7.16", - "markdown~=3.10.2", - "mlflow-skinny>=3.0.0", - "numpy~=1.26.4", - "openpyxl~=3.1.5", - "opik~=1.10.37", - "litellm==1.82.6", # Pinned to avoid madoka dependency issue - "opentelemetry-api==1.40.0", - "opentelemetry-distro==0.61b0", - "opentelemetry-exporter-otlp==1.40.0", - "opentelemetry-exporter-otlp-proto-common==1.40.0", - "opentelemetry-exporter-otlp-proto-grpc==1.40.0", - "opentelemetry-exporter-otlp-proto-http==1.40.0", - "opentelemetry-instrumentation==0.61b0", - "opentelemetry-instrumentation-celery==0.61b0", - "opentelemetry-instrumentation-flask==0.61b0", - "opentelemetry-instrumentation-httpx==0.61b0", - "opentelemetry-instrumentation-redis==0.61b0", - "opentelemetry-instrumentation-sqlalchemy==0.61b0", - "opentelemetry-propagator-b3==1.40.0", - "opentelemetry-proto==1.40.0", - "opentelemetry-sdk==1.40.0", - "opentelemetry-semantic-conventions==0.61b0", - "opentelemetry-util-http==0.61b0", - "pandas[excel,output-formatting,performance]~=3.0.1", - "psycogreen~=1.0.2", - "psycopg2-binary~=2.9.6", - "pycryptodome==3.23.0", - "pydantic~=2.12.5", - "pydantic-settings~=2.13.1", - "pyjwt~=2.12.0", - "pypdfium2==5.6.0", - "python-docx~=1.2.0", - "python-dotenv==1.2.2", - "pyyaml~=6.0.1", - "readabilipy~=0.3.0", - "redis[hiredis]~=7.4.0", - "resend~=2.26.0", - "sentry-sdk[flask]~=2.55.0", - "sqlalchemy~=2.0.29", - "starlette==1.0.0", - "tiktoken~=0.12.0", - "transformers~=5.3.0", - "unstructured[docx,epub,md,ppt,pptx]~=0.21.5", - "pypandoc~=1.13", - "yarl~=1.23.0", - "sseclient-py~=1.9.0", + "fastopenapi[flask]~=0.7.0", + "graphon~=0.1.2", "httpx-sse~=0.4.0", - "sendgrid~=6.12.3", - "flask-restx~=1.3.2", - "packaging~=23.2", - "croniter>=6.0.0", - "weaviate-client==4.20.4", - "apscheduler>=3.11.0", - "weave>=0.52.16", - "fastopenapi[flask]>=0.7.0", - "bleach~=6.3.0", + "json-repair~=0.59.2", + "opik~=1.11.2", ] # Before adding new dependency, consider place it in # alphabet order (a-z) and suitable group. @@ -98,9 +60,48 @@ dependencies = [ [tool.setuptools] packages = [] +[tool.uv.workspace] +members = ["providers/vdb/*"] +exclude = ["providers/vdb/__pycache__"] + +[tool.uv.sources] +dify-vdb-alibabacloud-mysql = { workspace = true } +dify-vdb-analyticdb = { workspace = true } +dify-vdb-baidu = { workspace = true } +dify-vdb-chroma = { workspace = true } +dify-vdb-clickzetta = { workspace = true } +dify-vdb-couchbase = { workspace = true } +dify-vdb-elasticsearch = { workspace = true } +dify-vdb-hologres = { workspace = true } +dify-vdb-huawei-cloud = { workspace = true } +dify-vdb-iris = { workspace = true } +dify-vdb-lindorm = { workspace = true } +dify-vdb-matrixone = { workspace = true } +dify-vdb-milvus = { workspace = true } +dify-vdb-myscale = { workspace = true } +dify-vdb-oceanbase = { workspace = true } +dify-vdb-opengauss = { workspace = true } +dify-vdb-opensearch = { workspace = true } +dify-vdb-oracle = { workspace = true } +dify-vdb-pgvecto-rs = { workspace = true } +dify-vdb-pgvector = { workspace = true } +dify-vdb-qdrant = { workspace = true } +dify-vdb-relyt = { workspace = true } +dify-vdb-tablestore = { workspace = true } +dify-vdb-tencent = { workspace = true } +dify-vdb-tidb-on-qdrant = { workspace = true } +dify-vdb-tidb-vector = { workspace = true } +dify-vdb-upstash = { workspace = true } +dify-vdb-vastbase = { workspace = true } +dify-vdb-vikingdb = { workspace = true } +dify-vdb-weaviate = { workspace = true } + [tool.uv] -default-groups = ["storage", "tools", "vdb"] +default-groups = ["storage", "tools", "vdb-all"] package = false +override-dependencies = [ + "pyarrow>=18.0.0", +] [dependency-groups] @@ -109,69 +110,69 @@ package = false # Required for development and running tests ############################################################ dev = [ - "coverage~=7.13.4", - "dotenv-linter~=0.7.0", - "faker~=40.12.0", - "lxml-stubs~=0.5.1", - "basedpyright~=1.39.0", - "ruff~=0.15.5", - "pytest~=9.0.2", - "pytest-benchmark~=5.2.3", - "pytest-cov~=7.1.0", - "pytest-env~=1.6.0", - "pytest-mock~=3.15.1", - "testcontainers~=4.14.1", - "types-aiofiles~=25.1.0", - "types-beautifulsoup4~=4.12.0", - "types-cachetools~=6.2.0", - "types-colorama~=0.4.15", - "types-defusedxml~=0.7.0", - "types-deprecated~=1.3.1", - "types-docutils~=0.22.3", - "types-flask-cors~=6.0.0", - "types-flask-migrate~=4.1.0", - "types-gevent~=25.9.0", - "types-greenlet~=3.3.0", - "types-html5lib~=1.1.11", - "types-markdown~=3.10.2", - "types-oauthlib~=3.3.0", - "types-objgraph~=3.6.0", - "types-olefile~=0.47.0", - "types-openpyxl~=3.1.5", - "types-pexpect~=4.9.0", - "types-protobuf~=7.34.1", - "types-psutil~=7.2.2", - "types-psycopg2~=2.9.21", - "types-pygments~=2.20.0", - "types-pymysql~=1.1.0", - "types-python-dateutil~=2.9.0", - "types-pywin32~=311.0.0", - "types-pyyaml~=6.0.12", - "types-regex~=2026.4.4", - "types-shapely~=2.1.0", - "types-simplejson>=3.20.0", - "types-six>=1.17.0", - "types-tensorflow>=2.18.0", - "types-tqdm>=4.67.0", + "coverage>=7.13.4", + "dotenv-linter>=0.7.0", + "faker>=20.1.0", + "lxml-stubs>=0.5.1", + "basedpyright>=1.39.0", + "ruff>=0.15.10", + "pytest>=9.0.3", + "pytest-benchmark>=5.2.3", + "pytest-cov>=7.1.0", + "pytest-env>=1.6.0", + "pytest-mock>=3.15.1", + "testcontainers>=4.14.2", + "types-aiofiles>=25.1.0", + "types-beautifulsoup4>=4.12.0", + "types-cachetools>=6.2.0", + "types-colorama>=0.4.15", + "types-defusedxml>=0.7.0", + "types-deprecated>=1.3.1", + "types-docutils>=0.22.3", + "types-flask-cors>=6.0.0", + "types-flask-migrate>=4.1.0", + "types-gevent>=26.4.0", + "types-greenlet>=3.4.0", + "types-html5lib>=1.1.11", + "types-markdown>=3.10.2", + "types-oauthlib>=3.3.0", + "types-objgraph>=3.6.0", + "types-olefile>=0.47.0", + "types-openpyxl>=3.1.5", + "types-pexpect>=4.9.0", + "types-protobuf>=7.34.1", + "types-psutil>=7.2.2", + "types-psycopg2>=2.9.21", + "types-pygments>=2.20.0", + "types-pymysql>=1.1.0", + "types-python-dateutil>=2.9.0", + "types-pywin32>=311.0.0", + "types-pyyaml>=6.0.12", + "types-regex>=2026.4.4", + "types-shapely>=2.1.0", + "types-simplejson>=3.20.0.20260408", + "types-six>=1.17.0.20260408", + "types-tensorflow>=2.18.0.20260408", + "types-tqdm>=4.67.3.20260408", "types-ujson>=5.10.0", - "boto3-stubs>=1.38.20", - "types-jmespath>=1.0.2.20240106", - "hypothesis>=6.131.15", + "boto3-stubs>=1.42.88", + "types-jmespath>=1.1.0.20260408", + "hypothesis>=6.151.12", "types_pyOpenSSL>=24.1.0", - "types_cffi>=1.17.0", - "types_setuptools>=80.9.0", - "pandas-stubs~=3.0.0", + "types_cffi>=2.0.0.20260408", + "types_setuptools>=82.0.0.20260408", + "pandas-stubs>=3.0.0", "scipy-stubs>=1.15.3.0", - "types-python-http-client>=3.3.7.20240910", + "types-python-http-client>=3.3.7.20260408", "import-linter>=2.3", "types-redis>=4.6.0.20241004", "celery-types>=0.23.0", - "mypy~=1.20.0", + "mypy>=1.20.1", # "locust>=2.40.4", # Temporarily removed due to compatibility issues. Uncomment when resolved. - "sseclient-py>=1.8.0", "pytest-timeout>=2.4.0", "pytest-xdist>=3.8.0", - "pyrefly>=0.59.1", + "pyrefly>=0.60.0", + "xinference-client>=2.4.0", ] ############################################################ @@ -179,54 +180,91 @@ dev = [ # Required for storage clients ############################################################ storage = [ - "azure-storage-blob==12.28.0", - "bce-python-sdk~=0.9.23", - "cos-python-sdk-v5==1.9.41", - "esdk-obs-python==3.26.2", - "google-cloud-storage>=3.0.0", - "opendal~=0.46.0", - "oss2==2.19.1", - "supabase~=2.18.1", - "tos~=2.9.0", + "azure-storage-blob>=12.28.0", + "bce-python-sdk>=0.9.69", + "cos-python-sdk-v5>=1.9.41", + "esdk-obs-python>=3.22.2", + "google-cloud-storage>=3.10.1", + "opendal>=0.46.0", + "oss2>=2.19.1", + "supabase>=2.18.1", + "tos>=2.9.0", ] ############################################################ # [ Tools ] dependency group ############################################################ -tools = ["cloudscraper~=1.2.71", "nltk~=3.9.1"] +tools = ["cloudscraper>=1.2.71", "nltk>=3.9.1"] ############################################################ -# [ VDB ] dependency group -# Required by vector store clients +# [ VDB ] workspace plugins — hollow packages under providers/vdb/* +# Each declares its own third-party deps and registers dify.vector_backends entry points. +# Use: uv sync --group vdb-all | uv sync --group vdb-qdrant ############################################################ -vdb = [ - "alibabacloud_gpdb20160503~=5.2.0", - "alibabacloud_tea_openapi~=0.4.3", - "chromadb==0.5.20", - "clickhouse-connect~=0.15.0", - "clickzetta-connector-python>=0.8.102", - "couchbase~=4.6.0", - "elasticsearch==8.14.0", - "opensearch-py==3.1.0", - "oracledb==3.4.2", - "pgvecto-rs[sqlalchemy]~=0.2.1", - "pgvector==0.4.2", - "pymilvus~=2.6.10", - "pymochow==2.4.0", - "pyobvector~=0.2.17", - "qdrant-client==1.9.0", - "intersystems-irispython>=5.1.0", - "tablestore==6.4.3", - "tcvectordb~=2.1.0", - "tidb-vector==0.0.15", - "upstash-vector==0.8.0", - "volcengine-compat~=1.0.0", - "weaviate-client==4.20.4", - "xinference-client~=2.4.0", - "mo-vector~=0.1.13", - "mysql-connector-python>=9.3.0", - "holo-search-sdk>=0.4.1", +vdb-all = [ + "dify-vdb-alibabacloud-mysql", + "dify-vdb-analyticdb", + "dify-vdb-baidu", + "dify-vdb-chroma", + "dify-vdb-clickzetta", + "dify-vdb-couchbase", + "dify-vdb-elasticsearch", + "dify-vdb-hologres", + "dify-vdb-huawei-cloud", + "dify-vdb-iris", + "dify-vdb-lindorm", + "dify-vdb-matrixone", + "dify-vdb-milvus", + "dify-vdb-myscale", + "dify-vdb-oceanbase", + "dify-vdb-opengauss", + "dify-vdb-opensearch", + "dify-vdb-oracle", + "dify-vdb-pgvecto-rs", + "dify-vdb-pgvector", + "dify-vdb-qdrant", + "dify-vdb-relyt", + "dify-vdb-tablestore", + "dify-vdb-tencent", + "dify-vdb-tidb-on-qdrant", + "dify-vdb-tidb-vector", + "dify-vdb-upstash", + "dify-vdb-vastbase", + "dify-vdb-vikingdb", + "dify-vdb-weaviate", ] +vdb-alibabacloud-mysql = ["dify-vdb-alibabacloud-mysql"] +vdb-analyticdb = ["dify-vdb-analyticdb"] +vdb-baidu = ["dify-vdb-baidu"] +vdb-chroma = ["dify-vdb-chroma"] +vdb-clickzetta = ["dify-vdb-clickzetta"] +vdb-couchbase = ["dify-vdb-couchbase"] +vdb-elasticsearch = ["dify-vdb-elasticsearch"] +vdb-hologres = ["dify-vdb-hologres"] +vdb-huawei-cloud = ["dify-vdb-huawei-cloud"] +vdb-iris = ["dify-vdb-iris"] +vdb-lindorm = ["dify-vdb-lindorm"] +vdb-matrixone = ["dify-vdb-matrixone"] +vdb-milvus = ["dify-vdb-milvus"] +vdb-myscale = ["dify-vdb-myscale"] +vdb-oceanbase = ["dify-vdb-oceanbase"] +vdb-opengauss = ["dify-vdb-opengauss"] +vdb-opensearch = ["dify-vdb-opensearch"] +vdb-oracle = ["dify-vdb-oracle"] +vdb-pgvecto-rs = ["dify-vdb-pgvecto-rs"] +vdb-pgvector = ["dify-vdb-pgvector"] +vdb-qdrant = ["dify-vdb-qdrant"] +vdb-relyt = ["dify-vdb-relyt"] +vdb-tablestore = ["dify-vdb-tablestore"] +vdb-tencent = ["dify-vdb-tencent"] +vdb-tidb-on-qdrant = ["dify-vdb-tidb-on-qdrant"] +vdb-tidb-vector = ["dify-vdb-tidb-vector"] +vdb-upstash = ["dify-vdb-upstash"] +vdb-vastbase = ["dify-vdb-vastbase"] +vdb-vikingdb = ["dify-vdb-vikingdb"] +vdb-weaviate = ["dify-vdb-weaviate"] +# Optional client used by some tests / integrations (not a vector backend plugin) +vdb-xinference = ["xinference-client>=2.4.0"] [tool.pyrefly] project-includes = ["."] diff --git a/api/pyrefly-local-excludes.txt b/api/pyrefly-local-excludes.txt index 43f604c2de..3e5ece1fcf 100644 --- a/api/pyrefly-local-excludes.txt +++ b/api/pyrefly-local-excludes.txt @@ -45,31 +45,7 @@ core/plugin/backwards_invocation/model.py core/prompt/utils/extract_thread_messages.py core/rag/datasource/keyword/jieba/jieba.py core/rag/datasource/keyword/jieba/jieba_keyword_table_handler.py -core/rag/datasource/vdb/analyticdb/analyticdb_vector.py -core/rag/datasource/vdb/analyticdb/analyticdb_vector_openapi.py -core/rag/datasource/vdb/baidu/baidu_vector.py -core/rag/datasource/vdb/chroma/chroma_vector.py -core/rag/datasource/vdb/clickzetta/clickzetta_vector.py -core/rag/datasource/vdb/couchbase/couchbase_vector.py -core/rag/datasource/vdb/elasticsearch/elasticsearch_vector.py -core/rag/datasource/vdb/huawei/huawei_cloud_vector.py -core/rag/datasource/vdb/lindorm/lindorm_vector.py -core/rag/datasource/vdb/matrixone/matrixone_vector.py -core/rag/datasource/vdb/milvus/milvus_vector.py -core/rag/datasource/vdb/myscale/myscale_vector.py -core/rag/datasource/vdb/oceanbase/oceanbase_vector.py -core/rag/datasource/vdb/opensearch/opensearch_vector.py -core/rag/datasource/vdb/oracle/oraclevector.py -core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py -core/rag/datasource/vdb/relyt/relyt_vector.py -core/rag/datasource/vdb/tablestore/tablestore_vector.py -core/rag/datasource/vdb/tencent/tencent_vector.py -core/rag/datasource/vdb/tidb_on_qdrant/tidb_on_qdrant_vector.py -core/rag/datasource/vdb/tidb_on_qdrant/tidb_service.py -core/rag/datasource/vdb/tidb_vector/tidb_vector.py -core/rag/datasource/vdb/upstash/upstash_vector.py -core/rag/datasource/vdb/vikingdb/vikingdb_vector.py -core/rag/datasource/vdb/weaviate/weaviate_vector.py +providers/vdb/** core/rag/extractor/csv_extractor.py core/rag/extractor/excel_extractor.py core/rag/extractor/firecrawl/firecrawl_app.py diff --git a/api/pyrightconfig.json b/api/pyrightconfig.json index a8b884ea81..c4582e891d 100644 --- a/api/pyrightconfig.json +++ b/api/pyrightconfig.json @@ -4,7 +4,8 @@ "tests/", ".venv", "migrations/", - "core/rag" + "core/rag", + "providers/", ], "typeCheckingMode": "strict", "allowedUntypedLibraries": [ @@ -36,7 +37,9 @@ "gmpy2", "sendgrid", "sendgrid.helpers.mail", - "holo_search_sdk.types" + "holo_search_sdk.types", + "dify_vdb_qdrant", + "dify_vdb_tidb_on_qdrant" ], "reportUnknownMemberType": "hint", "reportUnknownParameterType": "hint", @@ -47,7 +50,6 @@ "reportMissingTypeArgument": "hint", "reportUnnecessaryComparison": "hint", "reportUnnecessaryIsInstance": "hint", - "reportUntypedFunctionDecorator": "hint", "reportUnnecessaryTypeIgnoreComment": "hint", "reportAttributeAccessIssue": "hint", "pythonVersion": "3.12", diff --git a/api/repositories/api_workflow_run_repository.py b/api/repositories/api_workflow_run_repository.py index 100589804c..72b38e7906 100644 --- a/api/repositories/api_workflow_run_repository.py +++ b/api/repositories/api_workflow_run_repository.py @@ -38,11 +38,11 @@ from collections.abc import Callable, Sequence from datetime import datetime from typing import Protocol, TypedDict -from graphon.entities.pause_reason import PauseReason -from graphon.enums import WorkflowType from sqlalchemy.orm import Session from core.repositories.factory import WorkflowExecutionRepository +from graphon.entities.pause_reason import PauseReason +from graphon.enums import WorkflowType from libs.infinite_scroll_pagination import InfiniteScrollPagination from models.enums import WorkflowRunTriggeredFrom from models.workflow import WorkflowAppLog, WorkflowArchiveLog, WorkflowPause, WorkflowPauseReason, WorkflowRun diff --git a/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py b/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py index d5c6a203b1..44735eb769 100644 --- a/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py +++ b/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py @@ -10,11 +10,11 @@ from collections.abc import Sequence from datetime import datetime from typing import Protocol, cast -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from sqlalchemy import asc, delete, desc, func, select from sqlalchemy.engine import CursorResult from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionOffload from repositories.api_workflow_node_execution_repository import ( DifyAPIWorkflowNodeExecutionRepository, diff --git a/api/repositories/sqlalchemy_api_workflow_run_repository.py b/api/repositories/sqlalchemy_api_workflow_run_repository.py index 9267be2636..474b200fc5 100644 --- a/api/repositories/sqlalchemy_api_workflow_run_repository.py +++ b/api/repositories/sqlalchemy_api_workflow_run_repository.py @@ -28,20 +28,19 @@ from decimal import Decimal from typing import Any, cast import sqlalchemy as sa -from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause -from graphon.enums import WorkflowExecutionStatus, WorkflowType -from graphon.nodes.human_input.entities import FormDefinition from pydantic import ValidationError from sqlalchemy import and_, delete, func, null, or_, select, tuple_ from sqlalchemy.engine import CursorResult from sqlalchemy.orm import Session, selectinload, sessionmaker from extensions.ext_storage import storage +from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause +from graphon.enums import WorkflowExecutionStatus, WorkflowType +from graphon.nodes.human_input.entities import FormDefinition from libs.datetime_utils import naive_utc_now from libs.helper import convert_datetime_to_date from libs.infinite_scroll_pagination import InfiniteScrollPagination from libs.time_parser import get_time_threshold -from libs.uuid_utils import uuidv7 from models.enums import WorkflowRunTriggeredFrom from models.human_input import HumanInputForm from models.workflow import WorkflowAppLog, WorkflowArchiveLog, WorkflowPause, WorkflowPauseReason, WorkflowRun @@ -744,12 +743,11 @@ class DifyAPISQLAlchemyWorkflowRunRepository(APIWorkflowRunRepository): # Upload the state file # Create the pause record - pause_model = WorkflowPause() - pause_model.id = str(uuidv7()) - pause_model.workflow_id = workflow_run.workflow_id - pause_model.workflow_run_id = workflow_run.id - pause_model.state_object_key = state_obj_key - pause_model.created_at = naive_utc_now() + pause_model = WorkflowPause( + workflow_id=workflow_run.workflow_id, + workflow_run_id=workflow_run.id, + state_object_key=state_obj_key, + ) pause_reason_models = [] for reason in pause_reasons: if isinstance(reason, HumanInputRequired): diff --git a/api/repositories/sqlalchemy_execution_extra_content_repository.py b/api/repositories/sqlalchemy_execution_extra_content_repository.py index feba5f7eb6..67f8795d3f 100644 --- a/api/repositories/sqlalchemy_execution_extra_content_repository.py +++ b/api/repositories/sqlalchemy_execution_extra_content_repository.py @@ -7,9 +7,6 @@ from collections import defaultdict from collections.abc import Sequence from typing import Any -from graphon.nodes.human_input.entities import FormDefinition -from graphon.nodes.human_input.enums import HumanInputFormStatus -from graphon.nodes.human_input.human_input_node import HumanInputNode from sqlalchemy import select from sqlalchemy.orm import Session, selectinload, sessionmaker @@ -21,6 +18,9 @@ from core.entities.execution_extra_content import ( from core.entities.execution_extra_content import ( HumanInputContent as HumanInputContentDomainModel, ) +from graphon.nodes.human_input.entities import FormDefinition +from graphon.nodes.human_input.enums import HumanInputFormStatus +from graphon.nodes.human_input.human_input_node import HumanInputNode from models.execution_extra_content import ( ExecutionExtraContent as ExecutionExtraContentModel, ) diff --git a/api/repositories/workflow_collaboration_repository.py b/api/repositories/workflow_collaboration_repository.py new file mode 100644 index 0000000000..000f80496d --- /dev/null +++ b/api/repositories/workflow_collaboration_repository.py @@ -0,0 +1,147 @@ +from __future__ import annotations + +import json +from typing import TypedDict + +from extensions.ext_redis import redis_client + +SESSION_STATE_TTL_SECONDS = 3600 +WORKFLOW_ONLINE_USERS_PREFIX = "workflow_online_users:" +WORKFLOW_LEADER_PREFIX = "workflow_leader:" +WS_SID_MAP_PREFIX = "ws_sid_map:" + + +class WorkflowSessionInfo(TypedDict): + user_id: str + username: str + avatar: str | None + sid: str + connected_at: int + + +class SidMapping(TypedDict): + workflow_id: str + user_id: str + + +class WorkflowCollaborationRepository: + def __init__(self) -> None: + self._redis = redis_client + + def __repr__(self) -> str: + return f"{self.__class__.__name__}(redis_client={self._redis})" + + @staticmethod + def workflow_key(workflow_id: str) -> str: + return f"{WORKFLOW_ONLINE_USERS_PREFIX}{workflow_id}" + + @staticmethod + def leader_key(workflow_id: str) -> str: + return f"{WORKFLOW_LEADER_PREFIX}{workflow_id}" + + @staticmethod + def sid_key(sid: str) -> str: + return f"{WS_SID_MAP_PREFIX}{sid}" + + @staticmethod + def _decode(value: str | bytes | None) -> str | None: + if value is None: + return None + if isinstance(value, bytes): + return value.decode("utf-8") + return value + + def refresh_session_state(self, workflow_id: str, sid: str) -> None: + workflow_key = self.workflow_key(workflow_id) + sid_key = self.sid_key(sid) + if self._redis.exists(workflow_key): + self._redis.expire(workflow_key, SESSION_STATE_TTL_SECONDS) + if self._redis.exists(sid_key): + self._redis.expire(sid_key, SESSION_STATE_TTL_SECONDS) + + def set_session_info(self, workflow_id: str, session_info: WorkflowSessionInfo) -> None: + workflow_key = self.workflow_key(workflow_id) + self._redis.hset(workflow_key, session_info["sid"], json.dumps(session_info)) + self._redis.set( + self.sid_key(session_info["sid"]), + json.dumps({"workflow_id": workflow_id, "user_id": session_info["user_id"]}), + ex=SESSION_STATE_TTL_SECONDS, + ) + self.refresh_session_state(workflow_id, session_info["sid"]) + + def get_sid_mapping(self, sid: str) -> SidMapping | None: + raw = self._redis.get(self.sid_key(sid)) + if not raw: + return None + value = self._decode(raw) + if not value: + return None + try: + return json.loads(value) + except (TypeError, json.JSONDecodeError): + return None + + def delete_session(self, workflow_id: str, sid: str) -> None: + self._redis.hdel(self.workflow_key(workflow_id), sid) + self._redis.delete(self.sid_key(sid)) + + def session_exists(self, workflow_id: str, sid: str) -> bool: + return bool(self._redis.hexists(self.workflow_key(workflow_id), sid)) + + def sid_mapping_exists(self, sid: str) -> bool: + return bool(self._redis.exists(self.sid_key(sid))) + + def get_session_sids(self, workflow_id: str) -> list[str]: + raw_sids = self._redis.hkeys(self.workflow_key(workflow_id)) + decoded_sids: list[str] = [] + for sid in raw_sids: + decoded = self._decode(sid) + if decoded: + decoded_sids.append(decoded) + return decoded_sids + + def list_sessions(self, workflow_id: str) -> list[WorkflowSessionInfo]: + sessions_json = self._redis.hgetall(self.workflow_key(workflow_id)) + users: list[WorkflowSessionInfo] = [] + + for session_info_json in sessions_json.values(): + value = self._decode(session_info_json) + if not value: + continue + try: + session_info = json.loads(value) + except (TypeError, json.JSONDecodeError): + continue + + if not isinstance(session_info, dict): + continue + if "user_id" not in session_info or "username" not in session_info or "sid" not in session_info: + continue + + users.append( + { + "user_id": str(session_info["user_id"]), + "username": str(session_info["username"]), + "avatar": session_info.get("avatar"), + "sid": str(session_info["sid"]), + "connected_at": int(session_info.get("connected_at") or 0), + } + ) + + return users + + def get_current_leader(self, workflow_id: str) -> str | None: + raw = self._redis.get(self.leader_key(workflow_id)) + return self._decode(raw) + + def set_leader_if_absent(self, workflow_id: str, sid: str) -> bool: + return bool(self._redis.set(self.leader_key(workflow_id), sid, nx=True, ex=SESSION_STATE_TTL_SECONDS)) + + def set_leader(self, workflow_id: str, sid: str) -> None: + self._redis.set(self.leader_key(workflow_id), sid, ex=SESSION_STATE_TTL_SECONDS) + + def delete_leader(self, workflow_id: str) -> None: + self._redis.delete(self.leader_key(workflow_id)) + + def expire_leader(self, workflow_id: str) -> None: + self._redis.expire(self.leader_key(workflow_id), SESSION_STATE_TTL_SECONDS) diff --git a/api/schedule/clean_workflow_runlogs_precise.py b/api/schedule/clean_workflow_runlogs_precise.py index ebb8d52924..c5762fcdad 100644 --- a/api/schedule/clean_workflow_runlogs_precise.py +++ b/api/schedule/clean_workflow_runlogs_precise.py @@ -4,6 +4,7 @@ import time from collections.abc import Sequence import click +from sqlalchemy import delete, select from sqlalchemy.orm import Session, sessionmaker import app @@ -113,11 +114,9 @@ def _delete_batch( try: with session.begin_nested(): workflow_run_ids = [run.id for run in workflow_runs] - message_data = ( - session.query(Message.id, Message.conversation_id) - .where(Message.workflow_run_id.in_(workflow_run_ids)) - .all() - ) + message_data = session.execute( + select(Message.id, Message.conversation_id).where(Message.workflow_run_id.in_(workflow_run_ids)) + ).all() message_id_list = [msg.id for msg in message_data] conversation_id_list = list({msg.conversation_id for msg in message_data if msg.conversation_id}) if message_id_list: @@ -132,23 +131,19 @@ def _delete_batch( SavedMessage, ] for model in message_related_models: - session.query(model).where(model.message_id.in_(message_id_list)).delete(synchronize_session=False) # type: ignore + session.execute(delete(model).where(model.message_id.in_(message_id_list))) # type: ignore # error: "DeclarativeAttributeIntercept" has no attribute "message_id". But this type is only in lib # and these 6 types all have the message_id field. - session.query(Message).where(Message.workflow_run_id.in_(workflow_run_ids)).delete( - synchronize_session=False - ) + session.execute(delete(Message).where(Message.workflow_run_id.in_(workflow_run_ids))) if conversation_id_list: - session.query(ConversationVariable).where( - ConversationVariable.conversation_id.in_(conversation_id_list) - ).delete(synchronize_session=False) - - session.query(Conversation).where(Conversation.id.in_(conversation_id_list)).delete( - synchronize_session=False + session.execute( + delete(ConversationVariable).where(ConversationVariable.conversation_id.in_(conversation_id_list)) ) + session.execute(delete(Conversation).where(Conversation.id.in_(conversation_id_list))) + def _delete_node_executions(active_session: Session, runs: Sequence[WorkflowRun]) -> tuple[int, int]: run_ids = [run.id for run in runs] repo = DifyAPIRepositoryFactory.create_api_workflow_node_execution_repository( diff --git a/api/schedule/create_tidb_serverless_task.py b/api/schedule/create_tidb_serverless_task.py index 6ceb3ef856..e242b0c667 100644 --- a/api/schedule/create_tidb_serverless_task.py +++ b/api/schedule/create_tidb_serverless_task.py @@ -1,11 +1,11 @@ import time import click +from dify_vdb_tidb_on_qdrant.tidb_service import TidbService from sqlalchemy import func, select import app from configs import dify_config -from core.rag.datasource.vdb.tidb_on_qdrant.tidb_service import TidbService from extensions.ext_database import db from models.dataset import TidbAuthBinding from models.enums import TidbAuthBindingStatus @@ -57,6 +57,7 @@ def create_clusters(batch_size): cluster_name=new_cluster["cluster_name"], account=new_cluster["account"], password=new_cluster["password"], + qdrant_endpoint=new_cluster.get("qdrant_endpoint"), active=False, status=TidbAuthBindingStatus.CREATING, ) diff --git a/api/schedule/update_tidb_serverless_status_task.py b/api/schedule/update_tidb_serverless_status_task.py index 10003b1b97..46d1b85aa0 100644 --- a/api/schedule/update_tidb_serverless_status_task.py +++ b/api/schedule/update_tidb_serverless_status_task.py @@ -2,11 +2,11 @@ import time from collections.abc import Sequence import click +from dify_vdb_tidb_on_qdrant.tidb_service import TidbService from sqlalchemy import select import app from configs import dify_config -from core.rag.datasource.vdb.tidb_on_qdrant.tidb_service import TidbService from extensions.ext_database import db from models.dataset import TidbAuthBinding from models.enums import TidbAuthBindingStatus diff --git a/api/services/account_service.py b/api/services/account_service.py index 4b58b3b697..ccc4a7c1fa 100644 --- a/api/services/account_service.py +++ b/api/services/account_service.py @@ -9,7 +9,8 @@ from typing import Any, TypedDict, cast from pydantic import BaseModel, TypeAdapter from sqlalchemy import delete, func, select, update -from sqlalchemy.orm import Session, sessionmaker + +from core.db.session_factory import session_factory class InvitationData(TypedDict): @@ -800,19 +801,19 @@ class AccountService: return token @staticmethod - def get_account_by_email_with_case_fallback(email: str, session: Session | None = None) -> Account | None: + def get_account_by_email_with_case_fallback(email: str) -> Account | None: """ Retrieve an account by email and fall back to the lowercase email if the original lookup fails. This keeps backward compatibility for older records that stored uppercase emails while the rest of the system gradually normalizes new inputs. """ - query_session = session or db.session - account = query_session.execute(select(Account).filter_by(email=email)).scalar_one_or_none() - if account or email == email.lower(): - return account + with session_factory.create_session() as session: + account = session.execute(select(Account).where(Account.email == email)).scalar_one_or_none() + if account or email == email.lower(): + return account - return query_session.execute(select(Account).filter_by(email=email.lower())).scalar_one_or_none() + return session.execute(select(Account).where(Account.email == email.lower())).scalar_one_or_none() @classmethod def get_email_code_login_data(cls, token: str) -> dict[str, Any] | None: @@ -1516,8 +1517,7 @@ class RegisterService: check_workspace_member_invite_permission(tenant.id) - with sessionmaker(db.engine, expire_on_commit=False).begin() as session: - account = AccountService.get_account_by_email_with_case_fallback(email, session=session) + account = AccountService.get_account_by_email_with_case_fallback(email) if not account: TenantService.check_member_permission(tenant, inviter, None, "add") diff --git a/api/services/advanced_prompt_template_service.py b/api/services/advanced_prompt_template_service.py index a6e6b1bae7..5d136e7393 100644 --- a/api/services/advanced_prompt_template_service.py +++ b/api/services/advanced_prompt_template_service.py @@ -1,4 +1,5 @@ import copy +from typing import Any, TypedDict from core.prompt.prompt_templates.advanced_prompt_templates import ( BAICHUAN_CHAT_APP_CHAT_PROMPT_CONFIG, @@ -15,9 +16,18 @@ from core.prompt.prompt_templates.advanced_prompt_templates import ( from models.model import AppMode +class AdvancedPromptTemplateArgs(TypedDict): + """Expected shape of the args dict passed to AdvancedPromptTemplateService.get_prompt.""" + + app_mode: str + model_mode: str + model_name: str + has_context: str + + class AdvancedPromptTemplateService: @classmethod - def get_prompt(cls, args: dict): + def get_prompt(cls, args: AdvancedPromptTemplateArgs) -> dict[str, Any]: app_mode = args["app_mode"] model_mode = args["model_mode"] model_name = args["model_name"] @@ -29,7 +39,7 @@ class AdvancedPromptTemplateService: return cls.get_common_prompt(app_mode, model_mode, has_context) @classmethod - def get_common_prompt(cls, app_mode: str, model_mode: str, has_context: str): + def get_common_prompt(cls, app_mode: str, model_mode: str, has_context: str) -> dict[str, Any]: context_prompt = copy.deepcopy(CONTEXT) match app_mode: @@ -63,7 +73,7 @@ class AdvancedPromptTemplateService: return {} @classmethod - def get_completion_prompt(cls, prompt_template: dict, has_context: str, context: str): + def get_completion_prompt(cls, prompt_template: dict[str, Any], has_context: str, context: str) -> dict[str, Any]: if has_context == "true": prompt_template["completion_prompt_config"]["prompt"]["text"] = ( context + prompt_template["completion_prompt_config"]["prompt"]["text"] @@ -72,7 +82,7 @@ class AdvancedPromptTemplateService: return prompt_template @classmethod - def get_chat_prompt(cls, prompt_template: dict, has_context: str, context: str): + def get_chat_prompt(cls, prompt_template: dict[str, Any], has_context: str, context: str) -> dict[str, Any]: if has_context == "true": prompt_template["chat_prompt_config"]["prompt"][0]["text"] = ( context + prompt_template["chat_prompt_config"]["prompt"][0]["text"] @@ -81,7 +91,7 @@ class AdvancedPromptTemplateService: return prompt_template @classmethod - def get_baichuan_prompt(cls, app_mode: str, model_mode: str, has_context: str): + def get_baichuan_prompt(cls, app_mode: str, model_mode: str, has_context: str) -> dict[str, Any]: baichuan_context_prompt = copy.deepcopy(BAICHUAN_CONTEXT) match app_mode: diff --git a/api/services/annotation_service.py b/api/services/annotation_service.py index ae5facbec0..ff0882ad5c 100644 --- a/api/services/annotation_service.py +++ b/api/services/annotation_service.py @@ -1,11 +1,8 @@ import logging import uuid - -import pandas as pd - -logger = logging.getLogger(__name__) from typing import TypedDict +import pandas as pd from sqlalchemy import delete, or_, select, update from werkzeug.datastructures import FileStorage from werkzeug.exceptions import NotFound @@ -24,6 +21,8 @@ from tasks.annotation.disable_annotation_reply_task import disable_annotation_re from tasks.annotation.enable_annotation_reply_task import enable_annotation_reply_task from tasks.annotation.update_annotation_to_index_task import update_annotation_to_index_task +logger = logging.getLogger(__name__) + class AnnotationJobStatusDict(TypedDict): job_id: str @@ -46,9 +45,50 @@ class AnnotationSettingDisabledDict(TypedDict): enabled: bool +class EnableAnnotationArgs(TypedDict): + """Expected shape of the args dict passed to enable_app_annotation.""" + + score_threshold: float + embedding_provider_name: str + embedding_model_name: str + + +class UpsertAnnotationArgs(TypedDict, total=False): + """Expected shape of the args dict passed to up_insert_app_annotation_from_message.""" + + answer: str + content: str + message_id: str + question: str + + +class InsertAnnotationArgs(TypedDict): + """Expected shape of the args dict passed to insert_app_annotation_directly.""" + + question: str + answer: str + + +class UpdateAnnotationArgs(TypedDict, total=False): + """Expected shape of the args dict passed to update_app_annotation_directly. + + Both fields are optional at the type level; the service validates at runtime + and raises ValueError if either is missing. + """ + + answer: str + question: str + + +class UpdateAnnotationSettingArgs(TypedDict): + """Expected shape of the args dict passed to update_app_annotation_setting.""" + + score_threshold: float + + class AppAnnotationService: @classmethod - def up_insert_app_annotation_from_message(cls, args: dict, app_id: str) -> MessageAnnotation: + def up_insert_app_annotation_from_message(cls, args: UpsertAnnotationArgs, app_id: str) -> MessageAnnotation: # get app info current_user, current_tenant_id = current_account_with_tenant() app = db.session.scalar( @@ -62,8 +102,9 @@ class AppAnnotationService: if answer is None: raise ValueError("Either 'answer' or 'content' must be provided") - if args.get("message_id"): - message_id = str(args["message_id"]) + raw_message_id = args.get("message_id") + if raw_message_id: + message_id = str(raw_message_id) message = db.session.scalar( select(Message).where(Message.id == message_id, Message.app_id == app.id).limit(1) ) @@ -87,9 +128,10 @@ class AppAnnotationService: account_id=current_user.id, ) else: - question = args.get("question") - if not question: + maybe_question = args.get("question") + if not maybe_question: raise ValueError("'question' is required when 'message_id' is not provided") + question = maybe_question annotation = MessageAnnotation(app_id=app.id, content=answer, question=question, account_id=current_user.id) db.session.add(annotation) @@ -110,7 +152,7 @@ class AppAnnotationService: return annotation @classmethod - def enable_app_annotation(cls, args: dict, app_id: str) -> AnnotationJobStatusDict: + def enable_app_annotation(cls, args: EnableAnnotationArgs, app_id: str) -> AnnotationJobStatusDict: enable_app_annotation_key = f"enable_app_annotation_{str(app_id)}" cache_result = redis_client.get(enable_app_annotation_key) if cache_result is not None: @@ -217,7 +259,7 @@ class AppAnnotationService: return annotations @classmethod - def insert_app_annotation_directly(cls, args: dict, app_id: str) -> MessageAnnotation: + def insert_app_annotation_directly(cls, args: InsertAnnotationArgs, app_id: str) -> MessageAnnotation: # get app info current_user, current_tenant_id = current_account_with_tenant() app = db.session.scalar( @@ -251,7 +293,7 @@ class AppAnnotationService: return annotation @classmethod - def update_app_annotation_directly(cls, args: dict, app_id: str, annotation_id: str): + def update_app_annotation_directly(cls, args: UpdateAnnotationArgs, app_id: str, annotation_id: str): # get app info _, current_tenant_id = current_account_with_tenant() app = db.session.scalar( @@ -270,7 +312,11 @@ class AppAnnotationService: if question is None: raise ValueError("'question' is required") - annotation.content = args["answer"] + answer = args.get("answer") + if answer is None: + raise ValueError("'answer' is required") + + annotation.content = answer annotation.question = question db.session.commit() @@ -613,7 +659,7 @@ class AppAnnotationService: @classmethod def update_app_annotation_setting( - cls, app_id: str, annotation_setting_id: str, args: dict + cls, app_id: str, annotation_setting_id: str, args: UpdateAnnotationSettingArgs ) -> AnnotationSettingDict: current_user, current_tenant_id = current_account_with_tenant() # get app info diff --git a/api/services/app_dsl_service.py b/api/services/app_dsl_service.py index dd73e10374..78806927bc 100644 --- a/api/services/app_dsl_service.py +++ b/api/services/app_dsl_service.py @@ -3,23 +3,16 @@ import hashlib import logging import uuid from collections.abc import Mapping -from enum import StrEnum -from typing import cast +from typing import Any, cast from urllib.parse import urlparse from uuid import uuid4 import yaml from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad -from graphon.enums import BuiltinNodeTypes -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes.llm.entities import LLMNodeData -from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData -from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData -from graphon.nodes.tool.entities import ToolNodeData from packaging import version from packaging.version import parse as parse_version -from pydantic import BaseModel, Field +from pydantic import BaseModel from sqlalchemy import select from sqlalchemy.orm import Session @@ -36,10 +29,17 @@ from core.workflow.nodes.trigger_schedule.trigger_schedule_node import TriggerSc from events.app_event import app_model_config_was_updated, app_was_created from extensions.ext_redis import redis_client from factories import variable_factory +from graphon.enums import BuiltinNodeTypes +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes.llm.entities import LLMNodeData +from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData +from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData +from graphon.nodes.tool.entities import ToolNodeData from libs.datetime_utils import naive_utc_now from models import Account, App, AppMode from models.model import AppModelConfig, AppModelConfigDict, IconType from models.workflow import Workflow +from services.entities.dsl_entities import CheckDependenciesResult, ImportMode, ImportStatus from services.plugin.dependencies_analysis import DependenciesAnalysisService from services.workflow_draft_variable_service import WorkflowDraftVariableService from services.workflow_service import WorkflowService @@ -53,18 +53,6 @@ DSL_MAX_SIZE = 10 * 1024 * 1024 # 10MB CURRENT_DSL_VERSION = "0.6.0" -class ImportMode(StrEnum): - YAML_CONTENT = "yaml-content" - YAML_URL = "yaml-url" - - -class ImportStatus(StrEnum): - COMPLETED = "completed" - COMPLETED_WITH_WARNINGS = "completed-with-warnings" - PENDING = "pending" - FAILED = "failed" - - class Import(BaseModel): id: str status: ImportStatus @@ -75,10 +63,6 @@ class Import(BaseModel): error: str = "" -class CheckDependenciesResult(BaseModel): - leaked_dependencies: list[PluginDependency] = Field(default_factory=list) - - def _check_version_compatibility(imported_version: str) -> ImportStatus: """Determine import status based on version comparison""" try: @@ -416,7 +400,7 @@ class AppDslService: self, *, app: App | None, - data: dict, + data: dict[str, Any], account: Account, name: str | None = None, description: str | None = None, @@ -471,7 +455,7 @@ class AppDslService: app.updated_by = account.id self._session.add(app) - self._session.commit() + self._session.flush() app_was_created.send(app, account=account) # save dependencies @@ -483,61 +467,67 @@ class AppDslService: ) # Initialize app based on mode - if app_mode in {AppMode.ADVANCED_CHAT, AppMode.WORKFLOW}: - workflow_data = data.get("workflow") - if not workflow_data or not isinstance(workflow_data, dict): - raise ValueError("Missing workflow data for workflow/advanced chat app") + match app_mode: + case AppMode.ADVANCED_CHAT | AppMode.WORKFLOW: + workflow_data = data.get("workflow") + if not workflow_data or not isinstance(workflow_data, dict): + raise ValueError("Missing workflow data for workflow/advanced chat app") - environment_variables_list = workflow_data.get("environment_variables", []) - environment_variables = [ - variable_factory.build_environment_variable_from_mapping(obj) for obj in environment_variables_list - ] - conversation_variables_list = workflow_data.get("conversation_variables", []) - conversation_variables = [ - variable_factory.build_conversation_variable_from_mapping(obj) for obj in conversation_variables_list - ] + environment_variables_list = workflow_data.get("environment_variables", []) + environment_variables = [ + variable_factory.build_environment_variable_from_mapping(obj) for obj in environment_variables_list + ] + conversation_variables_list = workflow_data.get("conversation_variables", []) + conversation_variables = [ + variable_factory.build_conversation_variable_from_mapping(obj) + for obj in conversation_variables_list + ] - workflow_service = WorkflowService() - current_draft_workflow = workflow_service.get_draft_workflow(app_model=app) - if current_draft_workflow: - unique_hash = current_draft_workflow.unique_hash - else: - unique_hash = None - graph = workflow_data.get("graph", {}) - for node in graph.get("nodes", []): - if node.get("data", {}).get("type", "") == BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL: - dataset_ids = node["data"].get("dataset_ids", []) - node["data"]["dataset_ids"] = [ - decrypted_id - for dataset_id in dataset_ids - if (decrypted_id := self.decrypt_dataset_id(encrypted_data=dataset_id, tenant_id=app.tenant_id)) - ] - workflow_service.sync_draft_workflow( - app_model=app, - graph=workflow_data.get("graph", {}), - features=workflow_data.get("features", {}), - unique_hash=unique_hash, - account=account, - environment_variables=environment_variables, - conversation_variables=conversation_variables, - ) - elif app_mode in {AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.COMPLETION}: - # Initialize model config - model_config = data.get("model_config") - if not model_config or not isinstance(model_config, dict): - raise ValueError("Missing model_config for chat/agent-chat/completion app") - # Initialize or update model config - if not app.app_model_config: - app_model_config = AppModelConfig( - app_id=app.id, created_by=account.id, updated_by=account.id - ).from_model_config_dict(cast(AppModelConfigDict, model_config)) - app_model_config.id = str(uuid4()) - app.app_model_config_id = app_model_config.id + workflow_service = WorkflowService() + current_draft_workflow = workflow_service.get_draft_workflow(app_model=app) + if current_draft_workflow: + unique_hash = current_draft_workflow.unique_hash + else: + unique_hash = None + graph = workflow_data.get("graph", {}) + for node in graph.get("nodes", []): + if node.get("data", {}).get("type", "") == BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL: + dataset_ids = node["data"].get("dataset_ids", []) + node["data"]["dataset_ids"] = [ + decrypted_id + for dataset_id in dataset_ids + if ( + decrypted_id := self.decrypt_dataset_id( + encrypted_data=dataset_id, tenant_id=app.tenant_id + ) + ) + ] + workflow_service.sync_draft_workflow( + app_model=app, + graph=workflow_data.get("graph", {}), + features=workflow_data.get("features", {}), + unique_hash=unique_hash, + account=account, + environment_variables=environment_variables, + conversation_variables=conversation_variables, + ) + case AppMode.CHAT | AppMode.AGENT_CHAT | AppMode.COMPLETION: + # Initialize model config + model_config = data.get("model_config") + if not model_config or not isinstance(model_config, dict): + raise ValueError("Missing model_config for chat/agent-chat/completion app") + # Initialize or update model config + if not app.app_model_config: + app_model_config = AppModelConfig( + app_id=app.id, created_by=account.id, updated_by=account.id + ).from_model_config_dict(cast(AppModelConfigDict, model_config)) + app_model_config.id = str(uuid4()) + app.app_model_config_id = app_model_config.id - self._session.add(app_model_config) - app_model_config_was_updated.send(app, app_model_config=app_model_config) - else: - raise ValueError("Invalid app mode") + self._session.add(app_model_config) + app_model_config_was_updated.send(app, app_model_config=app_model_config) + case _: + raise ValueError("Invalid app mode") return app @classmethod @@ -577,7 +567,7 @@ class AppDslService: @classmethod def _append_workflow_export_data( - cls, *, export_data: dict, app_model: App, include_secret: bool, workflow_id: str | None = None + cls, *, export_data: dict[str, Any], app_model: App, include_secret: bool, workflow_id: str | None = None ): """ Append workflow export data @@ -630,7 +620,7 @@ class AppDslService: ] @classmethod - def _append_model_config_export_data(cls, export_data: dict, app_model: App): + def _append_model_config_export_data(cls, export_data: dict[str, Any], app_model: App): """ Append model config export data :param export_data: export data diff --git a/api/services/app_generate_service.py b/api/services/app_generate_service.py index a6639dc780..2c9d815b64 100644 --- a/api/services/app_generate_service.py +++ b/api/services/app_generate_service.py @@ -4,7 +4,7 @@ import logging import threading import uuid from collections.abc import Callable, Generator, Mapping -from typing import TYPE_CHECKING, Any, Union +from typing import TYPE_CHECKING, Any from configs import dify_config from core.app.apps.advanced_chat.app_generator import AdvancedChatAppGenerator @@ -89,7 +89,7 @@ class AppGenerateService: def generate( cls, app_model: App, - user: Union[Account, EndUser], + user: Account | EndUser, args: Mapping[str, Any], invoke_from: InvokeFrom, streaming: bool = True, @@ -358,11 +358,11 @@ class AppGenerateService: def generate_more_like_this( cls, app_model: App, - user: Union[Account, EndUser], + user: Account | EndUser, message_id: str, invoke_from: InvokeFrom, streaming: bool = True, - ) -> Union[Mapping, Generator]: + ) -> Mapping | Generator: """ Generate more like this :param app_model: app model diff --git a/api/services/app_model_config_service.py b/api/services/app_model_config_service.py index 2013c869af..8252de7753 100644 --- a/api/services/app_model_config_service.py +++ b/api/services/app_model_config_service.py @@ -1,3 +1,5 @@ +from typing import Any + from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfigManager from core.app.apps.chat.app_config_manager import ChatAppConfigManager from core.app.apps.completion.app_config_manager import CompletionAppConfigManager @@ -6,7 +8,7 @@ from models.model import AppMode, AppModelConfigDict class AppModelConfigService: @classmethod - def validate_configuration(cls, tenant_id: str, config: dict, app_mode: AppMode) -> AppModelConfigDict: + def validate_configuration(cls, tenant_id: str, config: dict[str, Any], app_mode: AppMode) -> AppModelConfigDict: match app_mode: case AppMode.CHAT: return ChatAppConfigManager.config_validate(tenant_id, config) diff --git a/api/services/app_service.py b/api/services/app_service.py index 87d52a3159..afd98e2975 100644 --- a/api/services/app_service.py +++ b/api/services/app_service.py @@ -4,8 +4,6 @@ from typing import Any, TypedDict, cast import sqlalchemy as sa from flask_sqlalchemy.pagination import Pagination -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import select from configs import dify_config @@ -17,6 +15,8 @@ from core.tools.tool_manager import ToolManager from core.tools.utils.configuration import ToolParameterConfigurationManager from events.app_event import app_was_created, app_was_deleted, app_was_updated from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from libs.datetime_utils import naive_utc_now from libs.login import current_user from models import Account @@ -32,7 +32,7 @@ logger = logging.getLogger(__name__) class AppService: - def get_paginate_apps(self, user_id: str, tenant_id: str, args: dict) -> Pagination | None: + def get_paginate_apps(self, user_id: str, tenant_id: str, args: dict[str, Any]) -> Pagination | None: """ Get app list with pagination :param user_id: user id @@ -78,7 +78,7 @@ class AppService: return app_models - def create_app(self, tenant_id: str, args: dict, account: Account) -> App: + def create_app(self, tenant_id: str, args: dict[str, Any], account: Account) -> App: """ Create app :param tenant_id: tenant id @@ -389,7 +389,7 @@ class AppService: """ app_mode = AppMode.value_of(app_model.mode) - meta: dict = {"tool_icons": {}} + meta: dict[str, Any] = {"tool_icons": {}} if app_mode in {AppMode.ADVANCED_CHAT, AppMode.WORKFLOW}: workflow = app_model.workflow diff --git a/api/services/app_task_service.py b/api/services/app_task_service.py index 0842e9d3e7..6e9d6b1c73 100644 --- a/api/services/app_task_service.py +++ b/api/services/app_task_service.py @@ -5,11 +5,10 @@ like stopping tasks, handling both legacy Redis flag mechanism and new GraphEngine command channel mechanism. """ -from graphon.graph_engine.manager import GraphEngineManager - from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_redis import redis_client +from graphon.graph_engine.manager import GraphEngineManager from models.model import AppMode diff --git a/api/services/async_workflow_service.py b/api/services/async_workflow_service.py index b4471f51d8..8b39d63385 100644 --- a/api/services/async_workflow_service.py +++ b/api/services/async_workflow_service.py @@ -7,7 +7,7 @@ with support for different subscription tiers, rate limiting, and execution trac import json from datetime import UTC, datetime -from typing import Any, Union +from typing import Any from celery.result import AsyncResult from sqlalchemy import select @@ -51,7 +51,7 @@ class AsyncWorkflowService: @classmethod def trigger_workflow_async( - cls, session: Session, user: Union[Account, EndUser], trigger_data: TriggerData + cls, session: Session, user: Account | EndUser, trigger_data: TriggerData ) -> AsyncTriggerResponse: """ Universal entry point for async workflow execution - THIS METHOD WILL NOT BLOCK @@ -184,7 +184,7 @@ class AsyncWorkflowService: @classmethod def reinvoke_trigger( - cls, session: Session, user: Union[Account, EndUser], workflow_trigger_log_id: str + cls, session: Session, user: Account | EndUser, workflow_trigger_log_id: str ) -> AsyncTriggerResponse: """ Re-invoke a previously failed or rate-limited trigger - THIS METHOD WILL NOT BLOCK diff --git a/api/services/audio_service.py b/api/services/audio_service.py index 1c7027efb4..60948e652b 100644 --- a/api/services/audio_service.py +++ b/api/services/audio_service.py @@ -5,12 +5,12 @@ from collections.abc import Generator from typing import cast from flask import Response, stream_with_context -from graphon.model_runtime.entities.model_entities import ModelType from werkzeug.datastructures import FileStorage from constants import AUDIO_EXTENSIONS from core.model_manager import ModelManager from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.enums import MessageStatus from models.model import App, AppMode, Message from services.errors.audio import ( diff --git a/api/services/auth/api_key_auth_service.py b/api/services/auth/api_key_auth_service.py index 3282dcfb11..36b1517056 100644 --- a/api/services/auth/api_key_auth_service.py +++ b/api/services/auth/api_key_auth_service.py @@ -1,4 +1,5 @@ import json +from typing import Any from sqlalchemy import select @@ -19,7 +20,7 @@ class ApiKeyAuthService: return data_source_api_key_bindings @staticmethod - def create_provider_auth(tenant_id: str, args: dict): + def create_provider_auth(tenant_id: str, args: dict[str, Any]): auth_result = ApiKeyAuthFactory(args["provider"], args["credentials"]).validate_credentials() if auth_result: # Encrypt the api key diff --git a/api/services/billing_service.py b/api/services/billing_service.py index 75dd3519ad..c0e23cdc6f 100644 --- a/api/services/billing_service.py +++ b/api/services/billing_service.py @@ -2,7 +2,7 @@ import json import logging import os from collections.abc import Sequence -from typing import Literal, NotRequired, TypedDict +from typing import Any, Literal, NotRequired, TypedDict import httpx from pydantic import TypeAdapter @@ -637,7 +637,7 @@ class BillingService: start_time / end_time: RFC3339 strings (e.g. "2026-03-01T00:00:00Z"), optional. Returns {"notification_id": str}. """ - payload: dict = { + payload: dict[str, Any] = { "contents": contents, "frequency": frequency, "status": status, diff --git a/api/services/clear_free_plan_tenant_expired_logs.py b/api/services/clear_free_plan_tenant_expired_logs.py index b4a7fa051f..dcc93b4b0f 100644 --- a/api/services/clear_free_plan_tenant_expired_logs.py +++ b/api/services/clear_free_plan_tenant_expired_logs.py @@ -6,14 +6,14 @@ from concurrent.futures import ThreadPoolExecutor import click from flask import Flask, current_app -from graphon.model_runtime.utils.encoders import jsonable_encoder -from sqlalchemy import select +from sqlalchemy import delete, func, select from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from enums.cloud_plan import CloudPlan from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.account import Tenant from models.model import ( App, @@ -62,13 +62,11 @@ class ClearFreePlanTenantExpiredLogs: for model, table_name in related_tables: # Query records related to expired messages - records = ( - session.query(model) - .where( + records = session.scalars( + select(model).where( model.message_id.in_(batch_message_ids), # type: ignore ) - .all() - ) + ).all() if len(records) == 0: continue @@ -103,9 +101,13 @@ class ClearFreePlanTenantExpiredLogs: except Exception: logger.exception("Failed to save %s records", table_name) - session.query(model).where( - model.id.in_(record_ids), # type: ignore - ).delete(synchronize_session=False) + session.execute( + delete(model) + .where( + model.id.in_(record_ids), # type: ignore + ) + .execution_options(synchronize_session=False) + ) click.echo( click.style( @@ -120,16 +122,15 @@ class ClearFreePlanTenantExpiredLogs: apps = db.session.scalars(select(App).where(App.tenant_id == tenant_id)).all() app_ids = [app.id for app in apps] while True: - with Session(db.engine).no_autoflush as session: - messages = ( - session.query(Message) + with sessionmaker(bind=db.engine, autoflush=False).begin() as session: + messages = session.scalars( + select(Message) .where( Message.app_id.in_(app_ids), Message.created_at < datetime.datetime.now() - datetime.timedelta(days=days), ) .limit(batch) - .all() - ) + ).all() if len(messages) == 0: break @@ -147,12 +148,11 @@ class ClearFreePlanTenantExpiredLogs: message_ids = [message.id for message in messages] # delete messages - session.query(Message).where( - Message.id.in_(message_ids), - ).delete(synchronize_session=False) + session.execute( + delete(Message).where(Message.id.in_(message_ids)).execution_options(synchronize_session=False) + ) cls._clear_message_related_tables(session, tenant_id, message_ids) - session.commit() click.echo( click.style( @@ -161,16 +161,15 @@ class ClearFreePlanTenantExpiredLogs: ) while True: - with Session(db.engine).no_autoflush as session: - conversations = ( - session.query(Conversation) + with sessionmaker(bind=db.engine, autoflush=False).begin() as session: + conversations = session.scalars( + select(Conversation) .where( Conversation.app_id.in_(app_ids), Conversation.updated_at < datetime.datetime.now() - datetime.timedelta(days=days), ) .limit(batch) - .all() - ) + ).all() if len(conversations) == 0: break @@ -187,10 +186,11 @@ class ClearFreePlanTenantExpiredLogs: ) conversation_ids = [conversation.id for conversation in conversations] - session.query(Conversation).where( - Conversation.id.in_(conversation_ids), - ).delete(synchronize_session=False) - session.commit() + session.execute( + delete(Conversation) + .where(Conversation.id.in_(conversation_ids)) + .execution_options(synchronize_session=False) + ) click.echo( click.style( @@ -294,16 +294,15 @@ class ClearFreePlanTenantExpiredLogs: break while True: - with Session(db.engine).no_autoflush as session: - workflow_app_logs = ( - session.query(WorkflowAppLog) + with sessionmaker(bind=db.engine, autoflush=False).begin() as session: + workflow_app_logs = session.scalars( + select(WorkflowAppLog) .where( WorkflowAppLog.tenant_id == tenant_id, WorkflowAppLog.created_at < datetime.datetime.now() - datetime.timedelta(days=days), ) .limit(batch) - .all() - ) + ).all() if len(workflow_app_logs) == 0: break @@ -323,10 +322,11 @@ class ClearFreePlanTenantExpiredLogs: workflow_app_log_ids = [workflow_app_log.id for workflow_app_log in workflow_app_logs] # delete workflow app logs - session.query(WorkflowAppLog).where(WorkflowAppLog.id.in_(workflow_app_log_ids)).delete( - synchronize_session=False + session.execute( + delete(WorkflowAppLog) + .where(WorkflowAppLog.id.in_(workflow_app_log_ids)) + .execution_options(synchronize_session=False) ) - session.commit() click.echo( click.style( @@ -347,7 +347,7 @@ class ClearFreePlanTenantExpiredLogs: current_time = started_at with sessionmaker(db.engine).begin() as session: - total_tenant_count = session.query(Tenant.id).count() + total_tenant_count = session.scalar(select(func.count(Tenant.id))) or 0 click.echo(click.style(f"Total tenant count: {total_tenant_count}", fg="white")) @@ -412,9 +412,12 @@ class ClearFreePlanTenantExpiredLogs: tenant_count = 0 for test_interval in test_intervals: tenant_count = ( - session.query(Tenant.id) - .where(Tenant.created_at.between(current_time, current_time + test_interval)) - .count() + session.scalar( + select(func.count(Tenant.id)).where( + Tenant.created_at.between(current_time, current_time + test_interval) + ) + ) + or 0 ) if tenant_count <= 100: interval = test_interval @@ -436,8 +439,8 @@ class ClearFreePlanTenantExpiredLogs: batch_end = min(current_time + interval, ended_at) - rs = ( - session.query(Tenant.id) + rs = session.execute( + select(Tenant.id) .where(Tenant.created_at.between(current_time, batch_end)) .order_by(Tenant.created_at) ) diff --git a/api/services/conversation_service.py b/api/services/conversation_service.py index f5085af59b..ee8a1c4edd 100644 --- a/api/services/conversation_service.py +++ b/api/services/conversation_service.py @@ -3,7 +3,6 @@ import logging from collections.abc import Callable, Sequence from typing import Any -from graphon.variables.types import SegmentType from sqlalchemy import asc, desc, func, or_, select from sqlalchemy.orm import Session @@ -13,6 +12,7 @@ from core.db.session_factory import session_factory from core.llm_generator.llm_generator import LLMGenerator from extensions.ext_database import db from factories import variable_factory +from graphon.variables.types import SegmentType from libs.datetime_utils import naive_utc_now from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account, ConversationVariable diff --git a/api/services/conversation_variable_updater.py b/api/services/conversation_variable_updater.py index 95a8951951..287d513f48 100644 --- a/api/services/conversation_variable_updater.py +++ b/api/services/conversation_variable_updater.py @@ -1,7 +1,7 @@ -from graphon.variables.variables import VariableBase from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker +from graphon.variables.variables import VariableBase from models import ConversationVariable diff --git a/api/services/credit_pool_service.py b/api/services/credit_pool_service.py index 16788300d3..2d210db121 100644 --- a/api/services/credit_pool_service.py +++ b/api/services/credit_pool_service.py @@ -29,14 +29,15 @@ class CreditPoolService: @classmethod def get_pool(cls, tenant_id: str, pool_type: str = "trial") -> TenantCreditPool | None: """get tenant credit pool""" - return db.session.scalar( - select(TenantCreditPool) - .where( - TenantCreditPool.tenant_id == tenant_id, - TenantCreditPool.pool_type == pool_type, + with sessionmaker(db.engine, expire_on_commit=False).begin() as session: + return session.scalar( + select(TenantCreditPool) + .where( + TenantCreditPool.tenant_id == tenant_id, + TenantCreditPool.pool_type == pool_type, + ) + .limit(1) ) - .limit(1) - ) @classmethod def check_credits_available( diff --git a/api/services/dataset_service.py b/api/services/dataset_service.py index 3e952059ac..e6f5f80a6d 100644 --- a/api/services/dataset_service.py +++ b/api/services/dataset_service.py @@ -10,9 +10,6 @@ from collections.abc import Sequence from typing import Any, Literal, TypedDict, cast import sqlalchemy as sa -from graphon.file import helpers as file_helpers -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType -from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from redis.exceptions import LockNotOwnedError from sqlalchemy import delete, exists, func, select, update from sqlalchemy.orm import Session, sessionmaker @@ -31,6 +28,9 @@ from events.dataset_event import dataset_was_deleted from events.document_event import document_was_deleted from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.file import helpers as file_helpers +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType +from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from libs import helper from libs.datetime_utils import naive_utc_now from libs.login import current_user @@ -233,7 +233,7 @@ class DatasetService: embedding_model_provider: str | None = None, embedding_model_name: str | None = None, retrieval_model: RetrievalModel | None = None, - summary_index_setting: dict | None = None, + summary_index_setting: dict[str, Any] | None = None, ): # check if dataset name already exists if db.session.scalar(select(Dataset).where(Dataset.name == name, Dataset.tenant_id == tenant_id).limit(1)): @@ -528,6 +528,8 @@ class DatasetService: raise ValueError("External knowledge id is required.") if not external_knowledge_api_id: raise ValueError("External knowledge api id is required.") + # Ensure the referenced external API template exists and belongs to the dataset tenant. + ExternalDatasetService.get_external_knowledge_api(external_knowledge_api_id, dataset.tenant_id) # Update metadata fields dataset.updated_by = user.id if user else None dataset.updated_at = naive_utc_now() @@ -552,8 +554,8 @@ class DatasetService: external_knowledge_api_id: External knowledge API identifier """ with sessionmaker(db.engine).begin() as session: - external_knowledge_binding = ( - session.query(ExternalKnowledgeBindings).filter_by(dataset_id=dataset_id).first() + external_knowledge_binding = session.scalar( + select(ExternalKnowledgeBindings).where(ExternalKnowledgeBindings.dataset_id == dataset_id).limit(1) ) if not external_knowledge_binding: @@ -1454,15 +1456,17 @@ class DocumentService: document_id_list: list[str] = [str(document_id) for document_id in document_ids] with session_factory.create_session() as session: - updated_count = ( - session.query(Document) - .filter( + result = session.execute( + update(Document) + .where( Document.id.in_(document_id_list), Document.dataset_id == dataset_id, Document.doc_form != IndexStructureType.QA_INDEX, # Skip qa_model documents ) - .update({Document.need_summary: need_summary}, synchronize_session=False) + .values(need_summary=need_summary) + .execution_options(synchronize_session=False) ) + updated_count = result.rowcount # type: ignore[union-attr,attr-defined] session.commit() logger.info( "Updated need_summary to %s for %d documents in dataset %s", @@ -2489,7 +2493,7 @@ class DocumentService: data_source_type: str, document_form: str, document_language: str, - data_source_info: dict, + data_source_info: dict[str, Any], created_from: str, position: int, account: Account, @@ -2822,6 +2826,10 @@ class DocumentService: knowledge_config.process_rule.rules.pre_processing_rules = list(unique_pre_processing_rule_dicts.values()) + if knowledge_config.process_rule.mode == ProcessRuleMode.HIERARCHICAL: + if not knowledge_config.process_rule.rules.parent_mode: + knowledge_config.process_rule.rules.parent_mode = "paragraph" + if not knowledge_config.process_rule.rules.segmentation: raise ValueError("Process rule segmentation is required") @@ -2842,7 +2850,7 @@ class DocumentService: raise ValueError("Process rule segmentation max_tokens is invalid") @classmethod - def estimate_args_validate(cls, args: dict): + def estimate_args_validate(cls, args: dict[str, Any]): if "info_list" not in args or not args["info_list"]: raise ValueError("Data source info is required") @@ -3124,7 +3132,7 @@ class DocumentService: class SegmentService: @classmethod - def segment_create_args_validate(cls, args: dict, document: Document): + def segment_create_args_validate(cls, args: dict[str, Any], document: Document): if document.doc_form == IndexStructureType.QA_INDEX: if "answer" not in args or not args["answer"]: raise ValueError("Answer is required") @@ -3141,7 +3149,7 @@ class SegmentService: raise ValueError(f"Exceeded maximum attachment limit of {single_chunk_attachment_limit}") @classmethod - def create_segment(cls, args: dict, document: Document, dataset: Dataset): + def create_segment(cls, args: dict[str, Any], document: Document, dataset: Dataset): assert isinstance(current_user, Account) assert current_user.current_tenant_id is not None diff --git a/api/services/datasource_provider_service.py b/api/services/datasource_provider_service.py index faa978afdc..416bc8cef9 100644 --- a/api/services/datasource_provider_service.py +++ b/api/services/datasource_provider_service.py @@ -3,9 +3,8 @@ import time from collections.abc import Mapping from typing import Any -from graphon.model_runtime.entities.provider_entities import FormType -from sqlalchemy import func, select -from sqlalchemy.orm import Session +from sqlalchemy import delete, func, select, update +from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from constants import HIDDEN_VALUE, UNKNOWN_VALUE @@ -18,6 +17,7 @@ from core.plugin.impl.oauth import OAuthHandler from core.tools.utils.encryption import ProviderConfigCache, ProviderConfigEncrypter, create_provider_encrypter from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.provider_entities import FormType from models.oauth import DatasourceOauthParamConfig, DatasourceOauthTenantParamConfig, DatasourceProvider from models.provider_ids import DatasourceProviderID from services.plugin.plugin_service import PluginService @@ -53,13 +53,14 @@ class DatasourceProviderService: """ remove oauth custom client params """ - with Session(db.engine) as session: - session.query(DatasourceOauthTenantParamConfig).filter_by( - tenant_id=tenant_id, - provider=datasource_provider_id.provider_name, - plugin_id=datasource_provider_id.plugin_id, - ).delete() - session.commit() + with sessionmaker(bind=db.engine).begin() as session: + session.execute( + delete(DatasourceOauthTenantParamConfig).where( + DatasourceOauthTenantParamConfig.tenant_id == tenant_id, + DatasourceOauthTenantParamConfig.provider == datasource_provider_id.provider_name, + DatasourceOauthTenantParamConfig.plugin_id == datasource_provider_id.plugin_id, + ) + ) def decrypt_datasource_provider_credentials( self, @@ -109,17 +110,23 @@ class DatasourceProviderService: """ get credential by id """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: if credential_id: - datasource_provider = ( - session.query(DatasourceProvider).filter_by(tenant_id=tenant_id, id=credential_id).first() + datasource_provider = session.scalar( + select(DatasourceProvider) + .where(DatasourceProvider.tenant_id == tenant_id, DatasourceProvider.id == credential_id) + .limit(1) ) else: - datasource_provider = ( - session.query(DatasourceProvider) - .filter_by(tenant_id=tenant_id, provider=provider, plugin_id=plugin_id) + datasource_provider = session.scalar( + select(DatasourceProvider) + .where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.provider == provider, + DatasourceProvider.plugin_id == plugin_id, + ) .order_by(DatasourceProvider.is_default.desc(), DatasourceProvider.created_at.asc()) - .first() + .limit(1) ) if not datasource_provider: return {} @@ -156,7 +163,6 @@ class DatasourceProviderService: datasource_provider=datasource_provider, ) datasource_provider.expires_at = refreshed_credentials.expires_at - session.commit() return self.decrypt_datasource_provider_credentials( tenant_id=tenant_id, @@ -174,13 +180,16 @@ class DatasourceProviderService: """ get all datasource credentials by provider """ - with Session(db.engine) as session: - datasource_providers = ( - session.query(DatasourceProvider) - .filter_by(tenant_id=tenant_id, provider=provider, plugin_id=plugin_id) + with sessionmaker(bind=db.engine).begin() as session: + datasource_providers = session.scalars( + select(DatasourceProvider) + .where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.provider == provider, + DatasourceProvider.plugin_id == plugin_id, + ) .order_by(DatasourceProvider.is_default.desc(), DatasourceProvider.created_at.asc()) - .all() - ) + ).all() if not datasource_providers: return [] current_user = get_current_user() @@ -224,7 +233,6 @@ class DatasourceProviderService: provider=provider, ) real_credentials_list.append(real_credentials) - session.commit() return real_credentials_list @@ -234,16 +242,16 @@ class DatasourceProviderService: """ update datasource provider name """ - with Session(db.engine) as session: - target_provider = ( - session.query(DatasourceProvider) - .filter_by( - tenant_id=tenant_id, - id=credential_id, - provider=datasource_provider_id.provider_name, - plugin_id=datasource_provider_id.plugin_id, + with sessionmaker(bind=db.engine).begin() as session: + target_provider = session.scalar( + select(DatasourceProvider) + .where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.id == credential_id, + DatasourceProvider.provider == datasource_provider_id.provider_name, + DatasourceProvider.plugin_id == datasource_provider_id.plugin_id, ) - .first() + .limit(1) ) if target_provider is None: raise ValueError("provider not found") @@ -253,20 +261,19 @@ class DatasourceProviderService: # check name is exist if ( - session.query(DatasourceProvider) - .filter_by( - tenant_id=tenant_id, - name=name, - provider=datasource_provider_id.provider_name, - plugin_id=datasource_provider_id.plugin_id, + session.scalar( + select(func.count(DatasourceProvider.id)).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.name == name, + DatasourceProvider.provider == datasource_provider_id.provider_name, + DatasourceProvider.plugin_id == datasource_provider_id.plugin_id, + ) ) - .count() - > 0 - ): + or 0 + ) > 0: raise ValueError("Authorization name is already exists") target_provider.name = name - session.commit() return def set_default_datasource_provider( @@ -275,39 +282,43 @@ class DatasourceProviderService: """ set default datasource provider """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: # get provider - target_provider = ( - session.query(DatasourceProvider) - .filter_by( - tenant_id=tenant_id, - id=credential_id, - provider=datasource_provider_id.provider_name, - plugin_id=datasource_provider_id.plugin_id, + target_provider = session.scalar( + select(DatasourceProvider) + .where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.id == credential_id, + DatasourceProvider.provider == datasource_provider_id.provider_name, + DatasourceProvider.plugin_id == datasource_provider_id.plugin_id, ) - .first() + .limit(1) ) if target_provider is None: raise ValueError("provider not found") # clear default provider - session.query(DatasourceProvider).filter_by( - tenant_id=tenant_id, - provider=target_provider.provider, - plugin_id=target_provider.plugin_id, - is_default=True, - ).update({"is_default": False}) + session.execute( + update(DatasourceProvider) + .where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.provider == target_provider.provider, + DatasourceProvider.plugin_id == target_provider.plugin_id, + DatasourceProvider.is_default.is_(True), + ) + .values(is_default=False) + .execution_options(synchronize_session=False) + ) # set new default provider target_provider.is_default = True - session.commit() return {"result": "success"} def setup_oauth_custom_client_params( self, tenant_id: str, datasource_provider_id: DatasourceProviderID, - client_params: dict | None, + client_params: dict[str, Any] | None, enabled: bool | None, ): """ @@ -315,15 +326,15 @@ class DatasourceProviderService: """ if client_params is None and enabled is None: return - with Session(db.engine) as session: - tenant_oauth_client_params = ( - session.query(DatasourceOauthTenantParamConfig) - .filter_by( - tenant_id=tenant_id, - provider=datasource_provider_id.provider_name, - plugin_id=datasource_provider_id.plugin_id, + with sessionmaker(bind=db.engine).begin() as session: + tenant_oauth_client_params = session.scalar( + select(DatasourceOauthTenantParamConfig) + .where( + DatasourceOauthTenantParamConfig.tenant_id == tenant_id, + DatasourceOauthTenantParamConfig.provider == datasource_provider_id.provider_name, + DatasourceOauthTenantParamConfig.plugin_id == datasource_provider_id.plugin_id, ) - .first() + .limit(1) ) if not tenant_oauth_client_params: @@ -341,7 +352,7 @@ class DatasourceProviderService: original_params = ( encrypter.decrypt(tenant_oauth_client_params.client_params) if tenant_oauth_client_params else {} ) - new_params: dict = { + new_params: dict[str, Any] = { key: value if value != HIDDEN_VALUE else original_params.get(key, UNKNOWN_VALUE) for key, value in client_params.items() } @@ -349,7 +360,6 @@ class DatasourceProviderService: if enabled is not None: tenant_oauth_client_params.enabled = enabled - session.commit() def is_system_oauth_params_exist(self, datasource_provider_id: DatasourceProviderID) -> bool: """ @@ -357,9 +367,14 @@ class DatasourceProviderService: """ with Session(db.engine).no_autoflush as session: return ( - session.query(DatasourceOauthParamConfig) - .filter_by(provider=datasource_provider_id.provider_name, plugin_id=datasource_provider_id.plugin_id) - .first() + session.scalar( + select(DatasourceOauthParamConfig) + .where( + DatasourceOauthParamConfig.provider == datasource_provider_id.provider_name, + DatasourceOauthParamConfig.plugin_id == datasource_provider_id.plugin_id, + ) + .limit(1) + ) is not None ) @@ -429,15 +444,15 @@ class DatasourceProviderService: plugin_id = datasource_provider_id.plugin_id with Session(db.engine).no_autoflush as session: # get tenant oauth client params - tenant_oauth_client_params = ( - session.query(DatasourceOauthTenantParamConfig) - .filter_by( - tenant_id=tenant_id, - provider=provider, - plugin_id=plugin_id, - enabled=True, + tenant_oauth_client_params = session.scalar( + select(DatasourceOauthTenantParamConfig) + .where( + DatasourceOauthTenantParamConfig.tenant_id == tenant_id, + DatasourceOauthTenantParamConfig.provider == provider, + DatasourceOauthTenantParamConfig.plugin_id == plugin_id, + DatasourceOauthTenantParamConfig.enabled.is_(True), ) - .first() + .limit(1) ) if tenant_oauth_client_params: encrypter, _ = self.get_oauth_encrypter(tenant_id, datasource_provider_id) @@ -449,8 +464,13 @@ class DatasourceProviderService: is_verified = PluginService.is_plugin_verified(tenant_id, provider_controller.plugin_unique_identifier) if is_verified: # fallback to system oauth client params - oauth_client_params = ( - session.query(DatasourceOauthParamConfig).filter_by(provider=provider, plugin_id=plugin_id).first() + oauth_client_params = session.scalar( + select(DatasourceOauthParamConfig) + .where( + DatasourceOauthParamConfig.provider == provider, + DatasourceOauthParamConfig.plugin_id == plugin_id, + ) + .limit(1) ) if oauth_client_params: return oauth_client_params.system_credentials @@ -461,15 +481,13 @@ class DatasourceProviderService: def generate_next_datasource_provider_name( session: Session, tenant_id: str, provider_id: DatasourceProviderID, credential_type: CredentialType ) -> str: - db_providers = ( - session.query(DatasourceProvider) - .filter_by( - tenant_id=tenant_id, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, + db_providers = session.scalars( + select(DatasourceProvider).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.provider == provider_id.provider_name, + DatasourceProvider.plugin_id == provider_id.plugin_id, ) - .all() - ) + ).all() return generate_incremental_name( [provider.name for provider in db_providers], f"{credential_type.get_name()}", @@ -482,17 +500,19 @@ class DatasourceProviderService: provider_id: DatasourceProviderID, avatar_url: str | None, expire_at: int, - credentials: dict, + credentials: dict[str, Any], credential_id: str, ) -> None: """ update datasource oauth provider """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: lock = f"datasource_provider_create_lock:{tenant_id}_{provider_id}_{CredentialType.OAUTH2.value}" with redis_client.lock(lock, timeout=20): - target_provider = ( - session.query(DatasourceProvider).filter_by(id=credential_id, tenant_id=tenant_id).first() + target_provider = session.scalar( + select(DatasourceProvider) + .where(DatasourceProvider.id == credential_id, DatasourceProvider.tenant_id == tenant_id) + .limit(1) ) if target_provider is None: raise ValueError("provider not found") @@ -502,25 +522,28 @@ class DatasourceProviderService: db_provider_name = target_provider.name else: name_conflict = ( - session.query(DatasourceProvider) - .filter_by( - tenant_id=tenant_id, - name=db_provider_name, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, - auth_type=CredentialType.OAUTH2.value, + session.scalar( + select(func.count(DatasourceProvider.id)).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.name == db_provider_name, + DatasourceProvider.provider == provider_id.provider_name, + DatasourceProvider.plugin_id == provider_id.plugin_id, + DatasourceProvider.auth_type == CredentialType.OAUTH2.value, + ) ) - .count() + or 0 ) if name_conflict > 0: db_provider_name = generate_incremental_name( [ provider.name - for provider in session.query(DatasourceProvider).filter_by( - tenant_id=tenant_id, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, - ) + for provider in session.scalars( + select(DatasourceProvider).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.provider == provider_id.provider_name, + DatasourceProvider.plugin_id == provider_id.plugin_id, + ) + ).all() ], db_provider_name, ) @@ -535,7 +558,6 @@ class DatasourceProviderService: target_provider.expires_at = expire_at target_provider.encrypted_credentials = credentials target_provider.avatar_url = avatar_url or target_provider.avatar_url - session.commit() def add_datasource_oauth_provider( self, @@ -544,13 +566,13 @@ class DatasourceProviderService: provider_id: DatasourceProviderID, avatar_url: str | None, expire_at: int, - credentials: dict, + credentials: dict[str, Any], ) -> None: """ add datasource oauth provider """ credential_type = CredentialType.OAUTH2 - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: lock = f"datasource_provider_create_lock:{tenant_id}_{provider_id}_{credential_type.value}" with redis_client.lock(lock, timeout=60): db_provider_name = name @@ -563,25 +585,27 @@ class DatasourceProviderService: ) else: if ( - session.query(DatasourceProvider) - .filter_by( - tenant_id=tenant_id, - name=db_provider_name, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, - auth_type=credential_type.value, + session.scalar( + select(func.count(DatasourceProvider.id)).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.name == db_provider_name, + DatasourceProvider.provider == provider_id.provider_name, + DatasourceProvider.plugin_id == provider_id.plugin_id, + DatasourceProvider.auth_type == credential_type.value, + ) ) - .count() - > 0 - ): + or 0 + ) > 0: db_provider_name = generate_incremental_name( [ provider.name - for provider in session.query(DatasourceProvider).filter_by( - tenant_id=tenant_id, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, - ) + for provider in session.scalars( + select(DatasourceProvider).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.provider == provider_id.provider_name, + DatasourceProvider.plugin_id == provider_id.plugin_id, + ) + ).all() ], db_provider_name, ) @@ -604,14 +628,13 @@ class DatasourceProviderService: expires_at=expire_at, ) session.add(datasource_provider) - session.commit() def add_datasource_api_key_provider( self, name: str | None, tenant_id: str, provider_id: DatasourceProviderID, - credentials: dict, + credentials: dict[str, Any], ) -> None: """ validate datasource provider credentials. @@ -623,7 +646,7 @@ class DatasourceProviderService: provider_name = provider_id.provider_name plugin_id = provider_id.plugin_id - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: lock = f"datasource_provider_create_lock:{tenant_id}_{provider_id}_{CredentialType.API_KEY}" with redis_client.lock(lock, timeout=20): db_provider_name = name or self.generate_next_datasource_provider_name( @@ -635,11 +658,16 @@ class DatasourceProviderService: # check name is exist if ( - session.query(DatasourceProvider) - .filter_by(tenant_id=tenant_id, plugin_id=plugin_id, provider=provider_name, name=db_provider_name) - .count() - > 0 - ): + session.scalar( + select(func.count(DatasourceProvider.id)).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.plugin_id == plugin_id, + DatasourceProvider.provider == provider_name, + DatasourceProvider.name == db_provider_name, + ) + ) + or 0 + ) > 0: raise ValueError("Authorization name is already exists") try: @@ -670,7 +698,6 @@ class DatasourceProviderService: encrypted_credentials=credentials, ) session.add(datasource_provider) - session.commit() def extract_secret_variables(self, tenant_id: str, provider_id: str, credential_type: CredentialType) -> list[str]: """ @@ -920,28 +947,44 @@ class DatasourceProviderService: return copy_credentials_list def update_datasource_credentials( - self, tenant_id: str, auth_id: str, provider: str, plugin_id: str, credentials: dict | None, name: str | None + self, + tenant_id: str, + auth_id: str, + provider: str, + plugin_id: str, + credentials: dict[str, Any] | None, + name: str | None, ) -> None: """ update datasource credentials. """ - with Session(db.engine) as session: - datasource_provider = ( - session.query(DatasourceProvider) - .filter_by(tenant_id=tenant_id, id=auth_id, provider=provider, plugin_id=plugin_id) - .first() + with sessionmaker(bind=db.engine).begin() as session: + datasource_provider = session.scalar( + select(DatasourceProvider) + .where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.id == auth_id, + DatasourceProvider.provider == provider, + DatasourceProvider.plugin_id == plugin_id, + ) + .limit(1) ) if not datasource_provider: raise ValueError("Datasource provider not found") # update name if name and name != datasource_provider.name: if ( - session.query(DatasourceProvider) - .filter_by(tenant_id=tenant_id, name=name, provider=provider, plugin_id=plugin_id) - .count() - > 0 - ): + session.scalar( + select(func.count(DatasourceProvider.id)).where( + DatasourceProvider.tenant_id == tenant_id, + DatasourceProvider.name == name, + DatasourceProvider.provider == provider, + DatasourceProvider.plugin_id == plugin_id, + ) + ) + or 0 + ) > 0: raise ValueError("Authorization name is already exists") datasource_provider.name = name @@ -980,7 +1023,6 @@ class DatasourceProviderService: encrypted_credentials[key] = value datasource_provider.encrypted_credentials = encrypted_credentials - session.commit() def remove_datasource_credentials(self, tenant_id: str, auth_id: str, provider: str, plugin_id: str) -> None: """ diff --git a/api/services/end_user_service.py b/api/services/end_user_service.py index 29ada270ec..749d8dbc30 100644 --- a/api/services/end_user_service.py +++ b/api/services/end_user_service.py @@ -2,7 +2,7 @@ import logging from collections.abc import Mapping from sqlalchemy import case, select -from sqlalchemy.orm import Session +from sqlalchemy.orm import sessionmaker from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db @@ -24,7 +24,7 @@ class EndUserService: when an end-user ID is known. """ - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: return session.scalar( select(EndUser) .where( @@ -54,7 +54,7 @@ class EndUserService: if not user_id: user_id = DefaultEndUserSessionID.DEFAULT_SESSION_ID - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: # Query with ORDER BY to prioritize exact type matches while maintaining backward compatibility # This single query approach is more efficient than separate queries end_user = session.scalar( @@ -82,7 +82,6 @@ class EndUserService: user_id, ) end_user.type = type - session.commit() else: # Create new end user if none exists end_user = EndUser( @@ -94,7 +93,6 @@ class EndUserService: external_user_id=user_id, ) session.add(end_user) - session.commit() return end_user @@ -135,7 +133,7 @@ class EndUserService: if not unique_app_ids: return result - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: # Fetch existing end users for all target apps in a single query existing_end_users: list[EndUser] = list( session.scalars( @@ -174,7 +172,6 @@ class EndUserService: ) session.add_all(new_end_users) - session.commit() for eu in new_end_users: result[eu.app_id] = eu diff --git a/api/services/entities/auth_entities.py b/api/services/entities/auth_entities.py index 6b720a4607..e3fb249692 100644 --- a/api/services/entities/auth_entities.py +++ b/api/services/entities/auth_entities.py @@ -1,9 +1,25 @@ +from enum import StrEnum, auto + from pydantic import BaseModel, Field, field_validator from libs.helper import EmailStr from libs.password import valid_password +class LoginFailureReason(StrEnum): + """Bounded reason codes for failed login audit logs.""" + + ACCOUNT_BANNED = auto() + ACCOUNT_IN_FREEZE = auto() + ACCOUNT_NOT_FOUND = auto() + EMAIL_CODE_EMAIL_MISMATCH = auto() + INVALID_CREDENTIALS = auto() + INVALID_EMAIL_CODE = auto() + INVALID_EMAIL_CODE_TOKEN = auto() + INVALID_INVITATION_EMAIL = auto() + LOGIN_RATE_LIMITED = auto() + + class LoginPayloadBase(BaseModel): email: EmailStr password: str diff --git a/api/services/entities/dsl_entities.py b/api/services/entities/dsl_entities.py new file mode 100644 index 0000000000..05baa51fbd --- /dev/null +++ b/api/services/entities/dsl_entities.py @@ -0,0 +1,21 @@ +from enum import StrEnum + +from pydantic import BaseModel, Field + +from core.plugin.entities.plugin import PluginDependency + + +class ImportMode(StrEnum): + YAML_CONTENT = "yaml-content" + YAML_URL = "yaml-url" + + +class ImportStatus(StrEnum): + COMPLETED = "completed" + COMPLETED_WITH_WARNINGS = "completed-with-warnings" + PENDING = "pending" + FAILED = "failed" + + +class CheckDependenciesResult(BaseModel): + leaked_dependencies: list[PluginDependency] = Field(default_factory=list) diff --git a/api/services/entities/external_knowledge_entities/external_knowledge_entities.py b/api/services/entities/external_knowledge_entities/external_knowledge_entities.py index c9fb1c9e21..110dbe5a5e 100644 --- a/api/services/entities/external_knowledge_entities/external_knowledge_entities.py +++ b/api/services/entities/external_knowledge_entities/external_knowledge_entities.py @@ -1,4 +1,4 @@ -from typing import Literal, Union +from typing import Any, Literal, Union from pydantic import BaseModel @@ -22,5 +22,5 @@ class ProcessStatusSetting(BaseModel): class ExternalKnowledgeApiSetting(BaseModel): url: str request_method: str - headers: dict | None = None - params: dict | None = None + headers: dict[str, Any] | None = None + params: dict[str, Any] | None = None diff --git a/api/services/entities/knowledge_entities/knowledge_entities.py b/api/services/entities/knowledge_entities/knowledge_entities.py index cb38104e8c..b1fe352861 100644 --- a/api/services/entities/knowledge_entities/knowledge_entities.py +++ b/api/services/entities/knowledge_entities/knowledge_entities.py @@ -1,4 +1,4 @@ -from typing import Literal +from typing import Any, Literal from pydantic import BaseModel, field_validator @@ -7,6 +7,11 @@ from core.rag.index_processor.constant.index_type import IndexStructureType from core.rag.retrieval.retrieval_methods import RetrievalMethod +class RerankingModel(BaseModel): + reranking_provider_name: str | None = None + reranking_model_name: str | None = None + + class NotionIcon(BaseModel): type: str url: str | None = None @@ -53,11 +58,6 @@ class ProcessRule(BaseModel): rules: Rule | None = None -class RerankingModel(BaseModel): - reranking_provider_name: str | None = None - reranking_model_name: str | None = None - - class WeightVectorSetting(BaseModel): vector_weight: float embedding_provider_name: str @@ -87,7 +87,7 @@ class RetrievalModel(BaseModel): class MetaDataConfig(BaseModel): doc_type: str - doc_metadata: dict + doc_metadata: dict[str, Any] class KnowledgeConfig(BaseModel): @@ -97,7 +97,7 @@ class KnowledgeConfig(BaseModel): data_source: DataSource | None = None process_rule: ProcessRule | None = None retrieval_model: RetrievalModel | None = None - summary_index_setting: dict | None = None + summary_index_setting: dict[str, Any] | None = None doc_form: str = "text_model" doc_language: str = "English" embedding_model: str | None = None diff --git a/api/services/entities/knowledge_entities/rag_pipeline_entities.py b/api/services/entities/knowledge_entities/rag_pipeline_entities.py index a360fd2854..7fb7ed12bf 100644 --- a/api/services/entities/knowledge_entities/rag_pipeline_entities.py +++ b/api/services/entities/knowledge_entities/rag_pipeline_entities.py @@ -1,4 +1,4 @@ -from typing import Literal +from typing import Any, Literal from pydantic import BaseModel, field_validator @@ -6,6 +6,24 @@ from core.rag.entities import KeywordSetting, VectorSetting from core.rag.retrieval.retrieval_methods import RetrievalMethod +class RerankingModelConfig(BaseModel): + """ + Reranking Model Config. + """ + + reranking_provider_name: str | None = "" + reranking_model_name: str | None = "" + + +class WeightedScoreConfig(BaseModel): + """ + Weighted score Config. + """ + + vector_setting: VectorSetting | None + keyword_setting: KeywordSetting | None + + class IconInfo(BaseModel): icon: str icon_background: str | None = None @@ -28,24 +46,6 @@ class RagPipelineDatasetCreateEntity(BaseModel): yaml_content: str | None = None -class RerankingModelConfig(BaseModel): - """ - Reranking Model Config. - """ - - reranking_provider_name: str | None = "" - reranking_model_name: str | None = "" - - -class WeightedScoreConfig(BaseModel): - """ - Weighted score Config. - """ - - vector_setting: VectorSetting | None - keyword_setting: KeywordSetting | None - - class RetrievalSetting(BaseModel): """ Retrieval Setting. @@ -73,7 +73,7 @@ class KnowledgeConfiguration(BaseModel): keyword_number: int | None = 10 retrieval_model: RetrievalSetting # add summary index setting - summary_index_setting: dict | None = None + summary_index_setting: dict[str, Any] | None = None @field_validator("embedding_model_provider", mode="before") @classmethod diff --git a/api/services/entities/model_provider_entities.py b/api/services/entities/model_provider_entities.py index a944ef6acd..6679c08ebd 100644 --- a/api/services/entities/model_provider_entities.py +++ b/api/services/entities/model_provider_entities.py @@ -1,15 +1,6 @@ from collections.abc import Sequence from enum import StrEnum -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - ModelCredentialSchema, - ProviderCredentialSchema, - ProviderHelpEntity, - SimpleProviderEntity, -) from pydantic import BaseModel, ConfigDict, model_validator from configs import dify_config @@ -24,6 +15,15 @@ from core.entities.provider_entities import ( QuotaConfiguration, UnaddedModelConfiguration, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + ModelCredentialSchema, + ProviderCredentialSchema, + ProviderHelpEntity, + SimpleProviderEntity, +) from models.provider import ProviderType diff --git a/api/services/external_knowledge_service.py b/api/services/external_knowledge_service.py index d30ec940f5..60b457ecd0 100644 --- a/api/services/external_knowledge_service.py +++ b/api/services/external_knowledge_service.py @@ -1,16 +1,16 @@ import json from copy import deepcopy -from typing import Any, Union, cast +from typing import Any, cast from urllib.parse import urlparse import httpx -from graphon.nodes.http_request.exc import InvalidHttpMethodError from sqlalchemy import func, select from constants import HIDDEN_VALUE from core.helper import ssrf_proxy from core.rag.entities import MetadataFilteringCondition from extensions.ext_database import db +from graphon.nodes.http_request.exc import InvalidHttpMethodError from libs.datetime_utils import naive_utc_now from models.dataset import ( Dataset, @@ -47,7 +47,7 @@ class ExternalDatasetService: return external_knowledge_apis.items, external_knowledge_apis.total @classmethod - def validate_api_list(cls, api_settings: dict): + def validate_api_list(cls, api_settings: dict[str, Any]): if not api_settings: raise ValueError("api list is empty") if not api_settings.get("endpoint"): @@ -56,7 +56,7 @@ class ExternalDatasetService: raise ValueError("api_key is required") @staticmethod - def create_external_knowledge_api(tenant_id: str, user_id: str, args: dict) -> ExternalKnowledgeApis: + def create_external_knowledge_api(tenant_id: str, user_id: str, args: dict[str, Any]) -> ExternalKnowledgeApis: settings = args.get("settings") if settings is None: raise ValueError("settings is required") @@ -75,7 +75,7 @@ class ExternalDatasetService: return external_knowledge_api @staticmethod - def check_endpoint_and_api_key(settings: dict): + def check_endpoint_and_api_key(settings: dict[str, Any]): if "endpoint" not in settings or not settings["endpoint"]: raise ValueError("endpoint is required") if "api_key" not in settings or not settings["api_key"]: @@ -148,18 +148,23 @@ class ExternalDatasetService: db.session.commit() @staticmethod - def external_knowledge_api_use_check(external_knowledge_api_id: str) -> tuple[bool, int]: + def external_knowledge_api_use_check(external_knowledge_api_id: str, tenant_id: str) -> tuple[bool, int]: + """ + Return usage for an external knowledge API within a single tenant. + + The caller already scopes access by tenant, so this query must do the + same; otherwise the endpoint becomes a cross-tenant UUID oracle. + """ count = ( db.session.scalar( select(func.count(ExternalKnowledgeBindings.id)).where( - ExternalKnowledgeBindings.external_knowledge_api_id == external_knowledge_api_id + ExternalKnowledgeBindings.external_knowledge_api_id == external_knowledge_api_id, + ExternalKnowledgeBindings.tenant_id == tenant_id, ) ) or 0 ) - if count > 0: - return True, count - return False, 0 + return count > 0, count @staticmethod def get_external_knowledge_binding_with_dataset_id(tenant_id: str, dataset_id: str) -> ExternalKnowledgeBindings: @@ -173,7 +178,9 @@ class ExternalDatasetService: return external_knowledge_binding @staticmethod - def document_create_args_validate(tenant_id: str, external_knowledge_api_id: str, process_parameter: dict): + def document_create_args_validate( + tenant_id: str, external_knowledge_api_id: str, process_parameter: dict[str, Any] + ): external_knowledge_api = db.session.scalar( select(ExternalKnowledgeApis) .where(ExternalKnowledgeApis.id == external_knowledge_api_id, ExternalKnowledgeApis.tenant_id == tenant_id) @@ -190,9 +197,7 @@ class ExternalDatasetService: raise ValueError(f"{parameter.get('name')} is required") @staticmethod - def process_external_api( - settings: ExternalKnowledgeApiSetting, files: Union[None, dict[str, Any]] - ) -> httpx.Response: + def process_external_api(settings: ExternalKnowledgeApiSetting, files: dict[str, Any] | None) -> httpx.Response: """ do http request depending on api bundle """ @@ -219,7 +224,7 @@ class ExternalDatasetService: return response @staticmethod - def assembling_headers(authorization: Authorization, headers: dict | None = None) -> dict[str, Any]: + def assembling_headers(authorization: Authorization, headers: dict[str, Any] | None = None) -> dict[str, Any]: authorization = deepcopy(authorization) if headers: headers = deepcopy(headers) @@ -245,11 +250,11 @@ class ExternalDatasetService: return headers @staticmethod - def get_external_knowledge_api_settings(settings: dict) -> ExternalKnowledgeApiSetting: + def get_external_knowledge_api_settings(settings: dict[str, Any]) -> ExternalKnowledgeApiSetting: return ExternalKnowledgeApiSetting.model_validate(settings) @staticmethod - def create_external_dataset(tenant_id: str, user_id: str, args: dict) -> Dataset: + def create_external_dataset(tenant_id: str, user_id: str, args: dict[str, Any]) -> Dataset: # check if dataset name already exists if db.session.scalar( select(Dataset).where(Dataset.name == args.get("name"), Dataset.tenant_id == tenant_id).limit(1) @@ -301,7 +306,7 @@ class ExternalDatasetService: tenant_id: str, dataset_id: str, query: str, - external_retrieval_parameters: dict, + external_retrieval_parameters: dict[str, Any], metadata_condition: MetadataFilteringCondition | None = None, ): external_knowledge_binding = db.session.scalar( @@ -314,7 +319,10 @@ class ExternalDatasetService: external_knowledge_api = db.session.scalar( select(ExternalKnowledgeApis) - .where(ExternalKnowledgeApis.id == external_knowledge_binding.external_knowledge_api_id) + .where( + ExternalKnowledgeApis.id == external_knowledge_binding.external_knowledge_api_id, + ExternalKnowledgeApis.tenant_id == tenant_id, + ) .limit(1) ) if external_knowledge_api is None or external_knowledge_api.settings is None: diff --git a/api/services/feature_service.py b/api/services/feature_service.py index 9216a7fb99..e4eb9e7582 100644 --- a/api/services/feature_service.py +++ b/api/services/feature_service.py @@ -164,6 +164,7 @@ class SystemFeatureModel(BaseModel): enable_email_code_login: bool = False enable_email_password_login: bool = True enable_social_oauth_login: bool = False + enable_collaboration_mode: bool = False is_allow_register: bool = False is_allow_create_workspace: bool = False is_email_setup: bool = False @@ -244,6 +245,7 @@ class FeatureService: system_features.enable_email_code_login = dify_config.ENABLE_EMAIL_CODE_LOGIN system_features.enable_email_password_login = dify_config.ENABLE_EMAIL_PASSWORD_LOGIN system_features.enable_social_oauth_login = dify_config.ENABLE_SOCIAL_OAUTH_LOGIN + system_features.enable_collaboration_mode = dify_config.ENABLE_COLLABORATION_MODE system_features.is_allow_register = dify_config.ALLOW_REGISTER system_features.is_allow_create_workspace = dify_config.ALLOW_CREATE_WORKSPACE system_features.is_email_setup = dify_config.MAIL_TYPE is not None and dify_config.MAIL_TYPE != "" diff --git a/api/services/file_service.py b/api/services/file_service.py index 50a326d813..52da2a7951 100644 --- a/api/services/file_service.py +++ b/api/services/file_service.py @@ -5,10 +5,9 @@ import uuid from collections.abc import Iterator, Sequence from contextlib import contextmanager, suppress from tempfile import NamedTemporaryFile -from typing import Literal, Union +from typing import Literal from zipfile import ZIP_DEFLATED, ZipFile -from graphon.file import helpers as file_helpers from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker from werkzeug.exceptions import NotFound @@ -24,6 +23,7 @@ from core.rag.extractor.extract_processor import ExtractProcessor from extensions.ext_database import db from extensions.ext_storage import storage from extensions.storage.storage_type import StorageType +from graphon.file import helpers as file_helpers from libs.datetime_utils import naive_utc_now from libs.helper import extract_tenant_id from models import Account @@ -52,7 +52,7 @@ class FileService: filename: str, content: bytes, mimetype: str, - user: Union[Account, EndUser], + user: Account | EndUser, source: Literal["datasets"] | None = None, source_url: str = "", ) -> UploadFile: @@ -132,8 +132,8 @@ class FileService: return file_size <= file_size_limit def get_file_base64(self, file_id: str) -> str: - upload_file = ( - self._session_maker(expire_on_commit=False).query(UploadFile).where(UploadFile.id == file_id).first() + upload_file = self._session_maker(expire_on_commit=False).scalar( + select(UploadFile).where(UploadFile.id == file_id).limit(1) ) if not upload_file: raise NotFound("File not found") @@ -178,7 +178,7 @@ class FileService: Return a short text preview extracted from a document file. """ with self._session_maker(expire_on_commit=False) as session: - upload_file = session.query(UploadFile).where(UploadFile.id == file_id).first() + upload_file = session.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) if not upload_file: raise NotFound("File not found") @@ -200,7 +200,7 @@ class FileService: if not result: raise NotFound("File not found or signature is invalid") with self._session_maker(expire_on_commit=False) as session: - upload_file = session.query(UploadFile).where(UploadFile.id == file_id).first() + upload_file = session.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) if not upload_file: raise NotFound("File not found or signature is invalid") @@ -220,7 +220,7 @@ class FileService: raise NotFound("File not found or signature is invalid") with self._session_maker(expire_on_commit=False) as session: - upload_file = session.query(UploadFile).where(UploadFile.id == file_id).first() + upload_file = session.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) if not upload_file: raise NotFound("File not found or signature is invalid") @@ -231,7 +231,7 @@ class FileService: def get_public_image_preview(self, file_id: str): with self._session_maker(expire_on_commit=False) as session: - upload_file = session.query(UploadFile).where(UploadFile.id == file_id).first() + upload_file = session.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) if not upload_file: raise NotFound("File not found or signature is invalid") @@ -247,7 +247,7 @@ class FileService: def get_file_content(self, file_id: str) -> str: with self._session_maker(expire_on_commit=False) as session: - upload_file: UploadFile | None = session.query(UploadFile).where(UploadFile.id == file_id).first() + upload_file: UploadFile | None = session.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) if not upload_file: raise NotFound("File not found") diff --git a/api/services/hit_testing_service.py b/api/services/hit_testing_service.py index 7e0100212a..ca84b2a3d8 100644 --- a/api/services/hit_testing_service.py +++ b/api/services/hit_testing_service.py @@ -3,8 +3,6 @@ import logging import time from typing import Any, TypedDict -from graphon.model_runtime.entities import LLMMode - from core.app.app_config.entities import ModelConfig from core.rag.datasource.retrieval_service import RetrievalService from core.rag.index_processor.constant.query_type import QueryType @@ -12,6 +10,7 @@ from core.rag.models.document import Document from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from core.rag.retrieval.retrieval_methods import RetrievalMethod from extensions.ext_database import db +from graphon.model_runtime.entities import LLMMode from models import Account from models.dataset import Dataset, DatasetQuery from models.enums import CreatorUserRole, DatasetQuerySource @@ -44,8 +43,8 @@ class HitTestingService: dataset: Dataset, query: str, account: Account, - retrieval_model: dict | None, - external_retrieval_model: dict, + retrieval_model: dict[str, Any] | None, + external_retrieval_model: dict[str, Any], attachment_ids: list | None = None, limit: int = 10, ): @@ -125,8 +124,8 @@ class HitTestingService: dataset: Dataset, query: str, account: Account, - external_retrieval_model: dict | None = None, - metadata_filtering_conditions: dict | None = None, + external_retrieval_model: dict[str, Any] | None = None, + metadata_filtering_conditions: dict[str, Any] | None = None, ): if dataset.provider != "external": return { diff --git a/api/services/human_input_delivery_test_service.py b/api/services/human_input_delivery_test_service.py index 77576fa4c0..68ef67dec1 100644 --- a/api/services/human_input_delivery_test_service.py +++ b/api/services/human_input_delivery_test_service.py @@ -4,7 +4,6 @@ from dataclasses import dataclass, field from enum import StrEnum from typing import Protocol -from graphon.runtime import VariablePool from sqlalchemy import Engine, select from sqlalchemy.orm import sessionmaker @@ -18,6 +17,7 @@ from core.workflow.human_input_compat import ( ) from extensions.ext_database import db from extensions.ext_mail import mail +from graphon.runtime import VariablePool from libs.email_template_renderer import render_email_template from models import Account, TenantAccountJoin from services.feature_service import FeatureService diff --git a/api/services/human_input_service.py b/api/services/human_input_service.py index 02a6620fc7..76598d31ac 100644 --- a/api/services/human_input_service.py +++ b/api/services/human_input_service.py @@ -3,12 +3,6 @@ from collections.abc import Mapping from datetime import datetime, timedelta from typing import Any -from graphon.nodes.human_input.entities import ( - FormDefinition, - HumanInputSubmissionValidationError, - validate_human_input_submission, -) -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker @@ -17,6 +11,12 @@ from core.repositories.human_input_repository import ( HumanInputFormRecord, HumanInputFormSubmissionRepository, ) +from graphon.nodes.human_input.entities import ( + FormDefinition, + HumanInputSubmissionValidationError, + validate_human_input_submission, +) +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import ensure_naive_utc, naive_utc_now from libs.exception import BaseHTTPException from models.human_input import RecipientType diff --git a/api/services/message_service.py b/api/services/message_service.py index 5b133b0c04..98f24dd6a6 100644 --- a/api/services/message_service.py +++ b/api/services/message_service.py @@ -1,6 +1,5 @@ from collections.abc import Sequence -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import TypeAdapter from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -14,6 +13,7 @@ from core.ops.entities.trace_entity import TraceTaskName from core.ops.ops_trace_manager import TraceQueueManager, TraceTask from core.ops.utils import measure_time from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account from models.enums import FeedbackFromSource, FeedbackRating diff --git a/api/services/model_load_balancing_service.py b/api/services/model_load_balancing_service.py index 3cce83a975..c269346f5f 100644 --- a/api/services/model_load_balancing_service.py +++ b/api/services/model_load_balancing_service.py @@ -1,13 +1,7 @@ import json import logging -from typing import Any, TypedDict, Union +from typing import Any, TypedDict -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ModelCredentialSchema, - ProviderCredentialSchema, -) -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from sqlalchemy import or_, select from constants import HIDDEN_VALUE @@ -18,6 +12,12 @@ from core.model_manager import LBModelManager from core.plugin.impl.model_runtime_factory import create_plugin_model_assembly, create_plugin_provider_manager from core.provider_manager import ProviderManager from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ModelCredentialSchema, + ProviderCredentialSchema, +) +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from libs.datetime_utils import naive_utc_now from models.enums import CredentialSourceType from models.provider import LoadBalancingModelConfig, ProviderCredential, ProviderModelCredential @@ -502,7 +502,7 @@ class ModelLoadBalancingService: provider: str, model: str, model_type: str, - credentials: dict, + credentials: dict[str, Any], config_id: str | None = None, ): """ @@ -561,7 +561,7 @@ class ModelLoadBalancingService: provider_configuration: ProviderConfiguration, model_type: ModelType, model: str, - credentials: dict, + credentials: dict[str, Any], load_balancing_model_config: LoadBalancingModelConfig | None = None, model_provider_factory: ModelProviderFactory | None = None, validate: bool = True, @@ -626,7 +626,7 @@ class ModelLoadBalancingService: def _get_credential_schema( self, provider_configuration: ProviderConfiguration - ) -> Union[ModelCredentialSchema, ProviderCredentialSchema]: + ) -> ModelCredentialSchema | ProviderCredentialSchema: """Get form schemas.""" if provider_configuration.provider.model_credential_schema: return provider_configuration.provider.model_credential_schema diff --git a/api/services/model_provider_service.py b/api/services/model_provider_service.py index 3f37c9b176..51cda79661 100644 --- a/api/services/model_provider_service.py +++ b/api/services/model_provider_service.py @@ -1,10 +1,10 @@ import logging - -from graphon.model_runtime.entities.model_entities import ModelType, ParameterRule +from typing import Any from core.entities.model_entities import ModelWithProviderEntity, ProviderModelWithStatusEntity from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory, create_plugin_provider_manager from core.provider_manager import ProviderManager +from graphon.model_runtime.entities.model_entities import ModelType, ParameterRule from models.provider import ProviderType from services.entities.model_provider_entities import ( CustomConfigurationResponse, @@ -168,7 +168,9 @@ class ModelProviderService: model_name=model, ) - def get_provider_credential(self, tenant_id: str, provider: str, credential_id: str | None = None) -> dict | None: + def get_provider_credential( + self, tenant_id: str, provider: str, credential_id: str | None = None + ) -> dict[str, Any] | None: """ get provider credentials. @@ -180,7 +182,7 @@ class ModelProviderService: provider_configuration = self._get_provider_configuration(tenant_id, provider) return provider_configuration.get_provider_credential(credential_id=credential_id) - def validate_provider_credentials(self, tenant_id: str, provider: str, credentials: dict): + def validate_provider_credentials(self, tenant_id: str, provider: str, credentials: dict[str, Any]): """ validate provider credentials before saving. @@ -192,7 +194,7 @@ class ModelProviderService: provider_configuration.validate_provider_credentials(credentials) def create_provider_credential( - self, tenant_id: str, provider: str, credentials: dict, credential_name: str | None + self, tenant_id: str, provider: str, credentials: dict[str, Any], credential_name: str | None ) -> None: """ Create and save new provider credentials. @@ -210,7 +212,7 @@ class ModelProviderService: self, tenant_id: str, provider: str, - credentials: dict, + credentials: dict[str, Any], credential_id: str, credential_name: str | None, ) -> None: @@ -254,7 +256,7 @@ class ModelProviderService: def get_model_credential( self, tenant_id: str, provider: str, model_type: str, model: str, credential_id: str | None - ) -> dict | None: + ) -> dict[str, Any] | None: """ Retrieve model-specific credentials. @@ -270,7 +272,9 @@ class ModelProviderService: model_type=ModelType.value_of(model_type), model=model, credential_id=credential_id ) - def validate_model_credentials(self, tenant_id: str, provider: str, model_type: str, model: str, credentials: dict): + def validate_model_credentials( + self, tenant_id: str, provider: str, model_type: str, model: str, credentials: dict[str, Any] + ): """ validate model credentials. @@ -287,7 +291,13 @@ class ModelProviderService: ) def create_model_credential( - self, tenant_id: str, provider: str, model_type: str, model: str, credentials: dict, credential_name: str | None + self, + tenant_id: str, + provider: str, + model_type: str, + model: str, + credentials: dict[str, Any], + credential_name: str | None, ) -> None: """ create and save model credentials. @@ -314,7 +324,7 @@ class ModelProviderService: provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], credential_id: str, credential_name: str | None, ) -> None: diff --git a/api/services/operation_service.py b/api/services/operation_service.py index c05e9d555c..903efd26ae 100644 --- a/api/services/operation_service.py +++ b/api/services/operation_service.py @@ -1,8 +1,22 @@ import os +from typing import TypedDict import httpx +class UtmInfo(TypedDict, total=False): + """Expected shape of the utm_info dict passed to record_utm. + + All fields are optional; missing keys default to an empty string. + """ + + utm_source: str + utm_medium: str + utm_campaign: str + utm_content: str + utm_term: str + + class OperationService: base_url = os.environ.get("BILLING_API_URL", "BILLING_API_URL") secret_key = os.environ.get("BILLING_API_SECRET_KEY", "BILLING_API_SECRET_KEY") @@ -17,7 +31,7 @@ class OperationService: return response.json() @classmethod - def record_utm(cls, tenant_id: str, utm_info: dict): + def record_utm(cls, tenant_id: str, utm_info: UtmInfo): params = { "tenant_id": tenant_id, "utm_source": utm_info.get("utm_source", ""), diff --git a/api/services/ops_service.py b/api/services/ops_service.py index 0db3d3efec..3ad42faf24 100644 --- a/api/services/ops_service.py +++ b/api/services/ops_service.py @@ -1,3 +1,5 @@ +from typing import Any + from sqlalchemy import select from core.ops.entities.config_entity import BaseTracingConfig @@ -135,7 +137,7 @@ class OpsService: return trace_config_data.to_dict() @classmethod - def create_tracing_app_config(cls, app_id: str, tracing_provider: str, tracing_config: dict): + def create_tracing_app_config(cls, app_id: str, tracing_provider: str, tracing_config: dict[str, Any]): """ Create tracing app config :param app_id: app id @@ -210,7 +212,7 @@ class OpsService: return {"result": "success"} @classmethod - def update_tracing_app_config(cls, app_id: str, tracing_provider: str, tracing_config: dict): + def update_tracing_app_config(cls, app_id: str, tracing_provider: str, tracing_config: dict[str, Any]): """ Update tracing app config :param app_id: app id diff --git a/api/services/plugin/endpoint_service.py b/api/services/plugin/endpoint_service.py index 11b8e0a3d9..1727cd7abd 100644 --- a/api/services/plugin/endpoint_service.py +++ b/api/services/plugin/endpoint_service.py @@ -1,9 +1,13 @@ +from typing import Any + from core.plugin.impl.endpoint import PluginEndpointClient class EndpointService: @classmethod - def create_endpoint(cls, tenant_id: str, user_id: str, plugin_unique_identifier: str, name: str, settings: dict): + def create_endpoint( + cls, tenant_id: str, user_id: str, plugin_unique_identifier: str, name: str, settings: dict[str, Any] + ): return PluginEndpointClient().create_endpoint( tenant_id=tenant_id, user_id=user_id, @@ -32,7 +36,7 @@ class EndpointService: ) @classmethod - def update_endpoint(cls, tenant_id: str, user_id: str, endpoint_id: str, name: str, settings: dict): + def update_endpoint(cls, tenant_id: str, user_id: str, endpoint_id: str, name: str, settings: dict[str, Any]): return PluginEndpointClient().update_endpoint( tenant_id=tenant_id, user_id=user_id, diff --git a/api/services/plugin/oauth_service.py b/api/services/plugin/oauth_service.py index 88dec062a0..789b5fa5b7 100644 --- a/api/services/plugin/oauth_service.py +++ b/api/services/plugin/oauth_service.py @@ -1,5 +1,6 @@ import json import uuid +from typing import Any from core.plugin.impl.base import BasePluginClient from extensions.ext_redis import redis_client @@ -16,7 +17,7 @@ class OAuthProxyService(BasePluginClient): tenant_id: str, plugin_id: str, provider: str, - extra_data: dict = {}, + extra_data: dict[str, Any] = {}, credential_id: str | None = None, ): """ diff --git a/api/services/plugin/plugin_auto_upgrade_service.py b/api/services/plugin/plugin_auto_upgrade_service.py index 174bed488d..9bb0ab6ae2 100644 --- a/api/services/plugin/plugin_auto_upgrade_service.py +++ b/api/services/plugin/plugin_auto_upgrade_service.py @@ -1,17 +1,17 @@ -from sqlalchemy.orm import Session +from sqlalchemy import select -from extensions.ext_database import db +from core.db.session_factory import session_factory from models.account import TenantPluginAutoUpgradeStrategy class PluginAutoUpgradeService: @staticmethod def get_strategy(tenant_id: str) -> TenantPluginAutoUpgradeStrategy | None: - with Session(db.engine) as session: - return ( - session.query(TenantPluginAutoUpgradeStrategy) + with session_factory.create_session() as session: + return session.scalar( + select(TenantPluginAutoUpgradeStrategy) .where(TenantPluginAutoUpgradeStrategy.tenant_id == tenant_id) - .first() + .limit(1) ) @staticmethod @@ -23,11 +23,11 @@ class PluginAutoUpgradeService: exclude_plugins: list[str], include_plugins: list[str], ) -> bool: - with Session(db.engine) as session: - exist_strategy = ( - session.query(TenantPluginAutoUpgradeStrategy) + with session_factory.create_session() as session: + exist_strategy = session.scalar( + select(TenantPluginAutoUpgradeStrategy) .where(TenantPluginAutoUpgradeStrategy.tenant_id == tenant_id) - .first() + .limit(1) ) if not exist_strategy: strategy = TenantPluginAutoUpgradeStrategy( @@ -46,16 +46,15 @@ class PluginAutoUpgradeService: exist_strategy.exclude_plugins = exclude_plugins exist_strategy.include_plugins = include_plugins - session.commit() return True @staticmethod def exclude_plugin(tenant_id: str, plugin_id: str) -> bool: - with Session(db.engine) as session: - exist_strategy = ( - session.query(TenantPluginAutoUpgradeStrategy) + with session_factory.create_session() as session: + exist_strategy = session.scalar( + select(TenantPluginAutoUpgradeStrategy) .where(TenantPluginAutoUpgradeStrategy.tenant_id == tenant_id) - .first() + .limit(1) ) if not exist_strategy: # create for this tenant @@ -83,5 +82,4 @@ class PluginAutoUpgradeService: exist_strategy.upgrade_mode = TenantPluginAutoUpgradeStrategy.UpgradeMode.EXCLUDE exist_strategy.exclude_plugins = [plugin_id] - session.commit() return True diff --git a/api/services/plugin/plugin_migration.py b/api/services/plugin/plugin_migration.py index d6f6ee8086..43a726b100 100644 --- a/api/services/plugin/plugin_migration.py +++ b/api/services/plugin/plugin_migration.py @@ -13,6 +13,7 @@ import sqlalchemy as sa import tqdm from flask import Flask, current_app from pydantic import TypeAdapter +from sqlalchemy import func, select from sqlalchemy.orm import Session from core.agent.entities import AgentToolEntity @@ -66,7 +67,7 @@ class PluginMigration: current_time = started_at with Session(db.engine) as session: - total_tenant_count = session.query(Tenant.id).count() + total_tenant_count = session.scalar(select(func.count(Tenant.id))) or 0 click.echo(click.style(f"Total tenant count: {total_tenant_count}", fg="white")) @@ -123,9 +124,12 @@ class PluginMigration: tenant_count = 0 for test_interval in test_intervals: tenant_count = ( - session.query(Tenant.id) - .where(Tenant.created_at.between(current_time, current_time + test_interval)) - .count() + session.scalar( + select(func.count(Tenant.id)).where( + Tenant.created_at.between(current_time, current_time + test_interval) + ) + ) + or 0 ) if tenant_count <= 100: interval = test_interval @@ -147,8 +151,8 @@ class PluginMigration: batch_end = min(current_time + interval, ended_at) - rs = ( - session.query(Tenant.id) + rs = session.execute( + select(Tenant.id) .where(Tenant.created_at.between(current_time, batch_end)) .order_by(Tenant.created_at) ) @@ -235,7 +239,7 @@ class PluginMigration: Extract tool tables. """ with Session(db.engine) as session: - rs = session.query(BuiltinToolProvider).where(BuiltinToolProvider.tenant_id == tenant_id).all() + rs = session.scalars(select(BuiltinToolProvider).where(BuiltinToolProvider.tenant_id == tenant_id)).all() result = [] for row in rs: result.append(ToolProviderID(row.provider).plugin_id) @@ -249,7 +253,7 @@ class PluginMigration: """ with Session(db.engine) as session: - rs = session.query(Workflow).where(Workflow.tenant_id == tenant_id).all() + rs = session.scalars(select(Workflow).where(Workflow.tenant_id == tenant_id)).all() result = [] for row in rs: graph = row.graph_dict @@ -272,7 +276,7 @@ class PluginMigration: Extract app tables. """ with Session(db.engine) as session: - apps = session.query(App).where(App.tenant_id == tenant_id).all() + apps = session.scalars(select(App).where(App.tenant_id == tenant_id)).all() if not apps: return [] @@ -280,7 +284,7 @@ class PluginMigration: app.app_model_config_id for app in apps if app.is_agent or app.mode == AppMode.AGENT_CHAT ] - rs = session.query(AppModelConfig).where(AppModelConfig.id.in_(agent_app_model_config_ids)).all() + rs = session.scalars(select(AppModelConfig).where(AppModelConfig.id.in_(agent_app_model_config_ids))).all() result = [] for row in rs: agent_config = row.agent_mode_dict diff --git a/api/services/plugin/plugin_permission_service.py b/api/services/plugin/plugin_permission_service.py index 60fa269640..3cca4268d0 100644 --- a/api/services/plugin/plugin_permission_service.py +++ b/api/services/plugin/plugin_permission_service.py @@ -1,14 +1,16 @@ -from sqlalchemy.orm import Session +from sqlalchemy import select -from extensions.ext_database import db +from core.db.session_factory import session_factory from models.account import TenantPluginPermission class PluginPermissionService: @staticmethod def get_permission(tenant_id: str) -> TenantPluginPermission | None: - with Session(db.engine) as session: - return session.query(TenantPluginPermission).where(TenantPluginPermission.tenant_id == tenant_id).first() + with session_factory.create_session() as session: + return session.scalar( + select(TenantPluginPermission).where(TenantPluginPermission.tenant_id == tenant_id).limit(1) + ) @staticmethod def change_permission( @@ -16,9 +18,9 @@ class PluginPermissionService: install_permission: TenantPluginPermission.InstallPermission, debug_permission: TenantPluginPermission.DebugPermission, ): - with Session(db.engine) as session: - permission = ( - session.query(TenantPluginPermission).where(TenantPluginPermission.tenant_id == tenant_id).first() + with session_factory.create_session() as session, session.begin(): + permission = session.scalar( + select(TenantPluginPermission).where(TenantPluginPermission.tenant_id == tenant_id).limit(1) ) if not permission: permission = TenantPluginPermission( @@ -30,5 +32,4 @@ class PluginPermissionService: permission.install_permission = install_permission permission.debug_permission = debug_permission - session.commit() return True diff --git a/api/services/rag_pipeline/pipeline_generate_service.py b/api/services/rag_pipeline/pipeline_generate_service.py index 10e89b1dba..56bc785958 100644 --- a/api/services/rag_pipeline/pipeline_generate_service.py +++ b/api/services/rag_pipeline/pipeline_generate_service.py @@ -1,5 +1,5 @@ from collections.abc import Mapping -from typing import Any, Union +from typing import Any from configs import dify_config from core.app.apps.pipeline.pipeline_generator import PipelineGenerator @@ -17,7 +17,7 @@ class PipelineGenerateService: def generate( cls, pipeline: Pipeline, - user: Union[Account, EndUser], + user: Account | EndUser, args: Mapping[str, Any], invoke_from: InvokeFrom, streaming: bool = True, diff --git a/api/services/rag_pipeline/pipeline_template/built_in/built_in_retrieval.py b/api/services/rag_pipeline/pipeline_template/built_in/built_in_retrieval.py index 24baeb73b5..aa7456dcd3 100644 --- a/api/services/rag_pipeline/pipeline_template/built_in/built_in_retrieval.py +++ b/api/services/rag_pipeline/pipeline_template/built_in/built_in_retrieval.py @@ -1,6 +1,7 @@ import json from os import path from pathlib import Path +from typing import Any from flask import current_app @@ -13,21 +14,21 @@ class BuiltInPipelineTemplateRetrieval(PipelineTemplateRetrievalBase): Retrieval pipeline template from built-in, the location is constants/pipeline_templates.json """ - builtin_data: dict | None = None + builtin_data: dict[str, Any] | None = None def get_type(self) -> str: return PipelineTemplateType.BUILTIN - def get_pipeline_templates(self, language: str) -> dict: + def get_pipeline_templates(self, language: str) -> dict[str, Any]: result = self.fetch_pipeline_templates_from_builtin(language) return result - def get_pipeline_template_detail(self, template_id: str): + def get_pipeline_template_detail(self, template_id: str) -> dict[str, Any] | None: result = self.fetch_pipeline_template_detail_from_builtin(template_id) return result @classmethod - def _get_builtin_data(cls) -> dict: + def _get_builtin_data(cls) -> dict[str, Any]: """ Get builtin data. :return: @@ -43,7 +44,7 @@ class BuiltInPipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return cls.builtin_data or {} @classmethod - def fetch_pipeline_templates_from_builtin(cls, language: str) -> dict: + def fetch_pipeline_templates_from_builtin(cls, language: str) -> dict[str, Any]: """ Fetch pipeline templates from builtin. :param language: language @@ -53,7 +54,7 @@ class BuiltInPipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return builtin_data.get("pipeline_templates", {}).get(language, {}) @classmethod - def fetch_pipeline_template_detail_from_builtin(cls, template_id: str) -> dict | None: + def fetch_pipeline_template_detail_from_builtin(cls, template_id: str) -> dict[str, Any] | None: """ Fetch pipeline template detail from builtin. :param template_id: Template ID diff --git a/api/services/rag_pipeline/pipeline_template/customized/customized_retrieval.py b/api/services/rag_pipeline/pipeline_template/customized/customized_retrieval.py index 2ee871a266..0ffbef8365 100644 --- a/api/services/rag_pipeline/pipeline_template/customized/customized_retrieval.py +++ b/api/services/rag_pipeline/pipeline_template/customized/customized_retrieval.py @@ -1,3 +1,5 @@ +from typing import Any + import yaml from sqlalchemy import select @@ -13,12 +15,12 @@ class CustomizedPipelineTemplateRetrieval(PipelineTemplateRetrievalBase): Retrieval recommended app from database """ - def get_pipeline_templates(self, language: str) -> dict: + def get_pipeline_templates(self, language: str) -> dict[str, Any]: _, current_tenant_id = current_account_with_tenant() result = self.fetch_pipeline_templates_from_customized(tenant_id=current_tenant_id, language=language) return result - def get_pipeline_template_detail(self, template_id: str): + def get_pipeline_template_detail(self, template_id: str) -> dict[str, Any] | None: result = self.fetch_pipeline_template_detail_from_db(template_id) return result @@ -26,7 +28,7 @@ class CustomizedPipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return PipelineTemplateType.CUSTOMIZED @classmethod - def fetch_pipeline_templates_from_customized(cls, tenant_id: str, language: str) -> dict: + def fetch_pipeline_templates_from_customized(cls, tenant_id: str, language: str) -> dict[str, Any]: """ Fetch pipeline templates from db. :param tenant_id: tenant id @@ -53,7 +55,7 @@ class CustomizedPipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return {"pipeline_templates": recommended_pipelines_results} @classmethod - def fetch_pipeline_template_detail_from_db(cls, template_id: str) -> dict | None: + def fetch_pipeline_template_detail_from_db(cls, template_id: str) -> dict[str, Any] | None: """ Fetch pipeline template detail from db. :param template_id: Template ID diff --git a/api/services/rag_pipeline/pipeline_template/database/database_retrieval.py b/api/services/rag_pipeline/pipeline_template/database/database_retrieval.py index 43b21a7b32..073eed221c 100644 --- a/api/services/rag_pipeline/pipeline_template/database/database_retrieval.py +++ b/api/services/rag_pipeline/pipeline_template/database/database_retrieval.py @@ -1,3 +1,5 @@ +from typing import Any + import yaml from sqlalchemy import select @@ -12,11 +14,11 @@ class DatabasePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): Retrieval pipeline template from database """ - def get_pipeline_templates(self, language: str) -> dict: + def get_pipeline_templates(self, language: str) -> dict[str, Any]: result = self.fetch_pipeline_templates_from_db(language) return result - def get_pipeline_template_detail(self, template_id: str): + def get_pipeline_template_detail(self, template_id: str) -> dict[str, Any] | None: result = self.fetch_pipeline_template_detail_from_db(template_id) return result @@ -24,7 +26,7 @@ class DatabasePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return PipelineTemplateType.DATABASE @classmethod - def fetch_pipeline_templates_from_db(cls, language: str) -> dict: + def fetch_pipeline_templates_from_db(cls, language: str) -> dict[str, Any]: """ Fetch pipeline templates from db. :param language: language @@ -54,7 +56,7 @@ class DatabasePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return {"pipeline_templates": recommended_pipelines_results} @classmethod - def fetch_pipeline_template_detail_from_db(cls, template_id: str) -> dict | None: + def fetch_pipeline_template_detail_from_db(cls, template_id: str) -> dict[str, Any] | None: """ Fetch pipeline template detail from db. :param pipeline_id: Pipeline ID diff --git a/api/services/rag_pipeline/pipeline_template/pipeline_template_base.py b/api/services/rag_pipeline/pipeline_template/pipeline_template_base.py index 21c30a4986..0ed2a4b8f2 100644 --- a/api/services/rag_pipeline/pipeline_template/pipeline_template_base.py +++ b/api/services/rag_pipeline/pipeline_template/pipeline_template_base.py @@ -1,15 +1,16 @@ from abc import ABC, abstractmethod +from typing import Any class PipelineTemplateRetrievalBase(ABC): """Interface for pipeline template retrieval.""" @abstractmethod - def get_pipeline_templates(self, language: str) -> dict: + def get_pipeline_templates(self, language: str) -> dict[str, Any]: raise NotImplementedError @abstractmethod - def get_pipeline_template_detail(self, template_id: str) -> dict | None: + def get_pipeline_template_detail(self, template_id: str) -> dict[str, Any] | None: raise NotImplementedError @abstractmethod diff --git a/api/services/rag_pipeline/pipeline_template/remote/remote_retrieval.py b/api/services/rag_pipeline/pipeline_template/remote/remote_retrieval.py index f996db11dc..d5ef745bec 100644 --- a/api/services/rag_pipeline/pipeline_template/remote/remote_retrieval.py +++ b/api/services/rag_pipeline/pipeline_template/remote/remote_retrieval.py @@ -1,4 +1,5 @@ import logging +from typing import Any import httpx @@ -15,8 +16,8 @@ class RemotePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): Retrieval recommended app from dify official """ - def get_pipeline_template_detail(self, template_id: str) -> dict | None: - result: dict | None + def get_pipeline_template_detail(self, template_id: str) -> dict[str, Any] | None: + result: dict[str, Any] | None try: result = self.fetch_pipeline_template_detail_from_dify_official(template_id) except Exception as e: @@ -24,7 +25,7 @@ class RemotePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): result = DatabasePipelineTemplateRetrieval.fetch_pipeline_template_detail_from_db(template_id) return result - def get_pipeline_templates(self, language: str) -> dict: + def get_pipeline_templates(self, language: str) -> dict[str, Any]: try: result = self.fetch_pipeline_templates_from_dify_official(language) except Exception as e: @@ -36,7 +37,7 @@ class RemotePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): return PipelineTemplateType.REMOTE @classmethod - def fetch_pipeline_template_detail_from_dify_official(cls, template_id: str) -> dict: + def fetch_pipeline_template_detail_from_dify_official(cls, template_id: str) -> dict[str, Any]: """ Fetch pipeline template detail from dify official. @@ -53,11 +54,11 @@ class RemotePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): + f" status_code: {response.status_code}," + f" response: {response.text[:1000]}" ) - data: dict = response.json() + data: dict[str, Any] = response.json() return data @classmethod - def fetch_pipeline_templates_from_dify_official(cls, language: str) -> dict: + def fetch_pipeline_templates_from_dify_official(cls, language: str) -> dict[str, Any]: """ Fetch pipeline templates from dify official. :param language: language @@ -69,6 +70,6 @@ class RemotePipelineTemplateRetrieval(PipelineTemplateRetrievalBase): if response.status_code != 200: raise ValueError(f"fetch pipeline templates failed, status code: {response.status_code}") - result: dict = response.json() + result: dict[str, Any] = response.json() return result diff --git a/api/services/rag_pipeline/rag_pipeline.py b/api/services/rag_pipeline/rag_pipeline.py index b330e1a46a..968600d1bc 100644 --- a/api/services/rag_pipeline/rag_pipeline.py +++ b/api/services/rag_pipeline/rag_pipeline.py @@ -5,19 +5,10 @@ import threading import time from collections.abc import Callable, Generator, Mapping, Sequence from datetime import UTC, datetime -from typing import Any, Union, cast +from typing import Any, cast from uuid import uuid4 from flask_login import current_user -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes, ErrorStrategy, NodeType, WorkflowNodeExecutionStatus -from graphon.errors import WorkflowNodeRunFailedError -from graphon.graph_events import GraphNodeEventBase, NodeRunFailedEvent, NodeRunSucceededEvent -from graphon.node_events import NodeRunResult -from graphon.nodes.base.node import Node -from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, build_http_request_config -from graphon.runtime import VariablePool -from graphon.variables.variables import Variable, VariableBase from sqlalchemy import func, select from sqlalchemy.orm import Session, sessionmaker @@ -53,6 +44,15 @@ from core.workflow.variable_pool_initializer import add_variables_to_pool from core.workflow.workflow_entry import WorkflowEntry from enterprise.telemetry.draft_trace import enqueue_draft_node_execution_trace from extensions.ext_database import db +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes, ErrorStrategy, NodeType, WorkflowNodeExecutionStatus +from graphon.errors import WorkflowNodeRunFailedError +from graphon.graph_events import GraphNodeEventBase, NodeRunFailedEvent, NodeRunSucceededEvent +from graphon.node_events import NodeRunResult +from graphon.nodes.base.node import Node +from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, build_http_request_config +from graphon.runtime import VariablePool +from graphon.variables.variables import Variable, VariableBase from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account from models.dataset import ( # type: ignore @@ -104,7 +104,7 @@ class RagPipelineService: self._workflow_run_repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker) @classmethod - def get_pipeline_templates(cls, type: str = "built-in", language: str = "en-US") -> dict: + def get_pipeline_templates(cls, type: str = "built-in", language: str = "en-US") -> dict[str, Any]: if type == "built-in": mode = dify_config.HOSTED_FETCH_PIPELINE_TEMPLATES_MODE retrieval_instance = PipelineTemplateRetrievalFactory.get_pipeline_template_factory(mode)() @@ -120,7 +120,7 @@ class RagPipelineService: return result @classmethod - def get_pipeline_template_detail(cls, template_id: str, type: str = "built-in") -> dict | None: + def get_pipeline_template_detail(cls, template_id: str, type: str = "built-in") -> dict[str, Any] | None: """ Get pipeline template detail. @@ -131,7 +131,7 @@ class RagPipelineService: if type == "built-in": mode = dify_config.HOSTED_FETCH_PIPELINE_TEMPLATES_MODE retrieval_instance = PipelineTemplateRetrievalFactory.get_pipeline_template_factory(mode)() - built_in_result: dict | None = retrieval_instance.get_pipeline_template_detail(template_id) + built_in_result: dict[str, Any] | None = retrieval_instance.get_pipeline_template_detail(template_id) if built_in_result is None: logger.warning( "pipeline template retrieval returned empty result, template_id: %s, mode: %s", @@ -142,7 +142,7 @@ class RagPipelineService: else: mode = "customized" retrieval_instance = PipelineTemplateRetrievalFactory.get_pipeline_template_factory(mode)() - customized_result: dict | None = retrieval_instance.get_pipeline_template_detail(template_id) + customized_result: dict[str, Any] | None = retrieval_instance.get_pipeline_template_detail(template_id) return customized_result @classmethod @@ -297,7 +297,7 @@ class RagPipelineService: self, *, pipeline: Pipeline, - graph: dict, + graph: dict[str, Any], unique_hash: str | None, account: Account, environment_variables: Sequence[VariableBase], @@ -467,7 +467,9 @@ class RagPipelineService: return default_block_configs - def get_default_block_config(self, node_type: str, filters: dict | None = None) -> Mapping[str, object] | None: + def get_default_block_config( + self, node_type: str, filters: dict[str, Any] | None = None + ) -> Mapping[str, object] | None: """ Get default config of node. :param node_type: node type @@ -500,7 +502,7 @@ class RagPipelineService: return default_config def run_draft_workflow_node( - self, pipeline: Pipeline, node_id: str, user_inputs: dict, account: Account + self, pipeline: Pipeline, node_id: str, user_inputs: dict[str, Any], account: Account ) -> WorkflowNodeExecutionModel | None: """ Run draft workflow node @@ -555,7 +557,7 @@ class RagPipelineService: workflow_node_execution.id ) - with Session(bind=db.engine) as session, session.begin(): + with sessionmaker(bind=db.engine).begin() as session: draft_var_saver = DraftVariableSaver( session=session, app_id=pipeline.id, @@ -569,7 +571,6 @@ class RagPipelineService: process_data=workflow_node_execution.process_data, outputs=workflow_node_execution.outputs, ) - session.commit() if isinstance(workflow_node_execution_db_model, WorkflowNodeExecutionModel): enqueue_draft_node_execution_trace( execution=workflow_node_execution_db_model, @@ -583,7 +584,7 @@ class RagPipelineService: self, pipeline: Pipeline, node_id: str, - user_inputs: dict, + user_inputs: dict[str, Any], account: Account, datasource_type: str, is_published: bool, @@ -750,7 +751,7 @@ class RagPipelineService: self, pipeline: Pipeline, node_id: str, - user_inputs: dict, + user_inputs: dict[str, Any], account: Account, datasource_type: str, is_published: bool, @@ -980,7 +981,7 @@ class RagPipelineService: return workflow_node_execution def update_workflow( - self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict + self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict[str, Any] ) -> Workflow | None: """ Update workflow attributes @@ -1100,7 +1101,9 @@ class RagPipelineService: ] return datasource_provider_variables - def get_rag_pipeline_paginate_workflow_runs(self, pipeline: Pipeline, args: dict) -> InfiniteScrollPagination: + def get_rag_pipeline_paginate_workflow_runs( + self, pipeline: Pipeline, args: dict[str, Any] + ) -> InfiniteScrollPagination: """ Get debug workflow run list Only return triggered_from == debugging @@ -1170,7 +1173,7 @@ class RagPipelineService: return list(node_executions) @classmethod - def publish_customized_pipeline_template(cls, pipeline_id: str, args: dict): + def publish_customized_pipeline_template(cls, pipeline_id: str, args: dict[str, Any]): """ Publish customized pipeline template """ @@ -1260,7 +1263,7 @@ class RagPipelineService: ) return node_exec - def set_datasource_variables(self, pipeline: Pipeline, args: dict, current_user: Account): + def set_datasource_variables(self, pipeline: Pipeline, args: dict[str, Any], current_user: Account): """ Set datasource variables """ @@ -1325,7 +1328,7 @@ class RagPipelineService: # Convert node_execution to WorkflowNodeExecution after save workflow_node_execution_db_model = repository._to_db_model(workflow_node_execution) # type: ignore - with Session(bind=db.engine) as session, session.begin(): + with sessionmaker(bind=db.engine).begin() as session: draft_var_saver = DraftVariableSaver( session=session, app_id=pipeline.id, @@ -1339,7 +1342,6 @@ class RagPipelineService: process_data=workflow_node_execution.process_data, outputs=workflow_node_execution.outputs, ) - session.commit() enqueue_draft_node_execution_trace( execution=workflow_node_execution_db_model, outputs=workflow_node_execution.outputs, @@ -1348,7 +1350,7 @@ class RagPipelineService: ) return workflow_node_execution_db_model - def get_recommended_plugins(self, type: str) -> dict: + def get_recommended_plugins(self, type: str) -> dict[str, Any]: # Query active recommended plugins stmt = select(PipelineRecommendedPlugin).where(PipelineRecommendedPlugin.active == True) if type and type != "all": @@ -1389,7 +1391,7 @@ class RagPipelineService: "uninstalled_recommended_plugins": uninstalled_plugin_list, } - def retry_error_document(self, dataset: Dataset, document: Document, user: Union[Account, EndUser]): + def retry_error_document(self, dataset: Dataset, document: Document, user: Account | EndUser): """ Retry error document """ diff --git a/api/services/rag_pipeline/rag_pipeline_dsl_service.py b/api/services/rag_pipeline/rag_pipeline_dsl_service.py index 04156713f4..f315d053cb 100644 --- a/api/services/rag_pipeline/rag_pipeline_dsl_service.py +++ b/api/services/rag_pipeline/rag_pipeline_dsl_service.py @@ -5,8 +5,7 @@ import logging import uuid from collections.abc import Mapping from datetime import UTC, datetime -from enum import StrEnum -from typing import cast +from typing import Any, cast from urllib.parse import urlparse from uuid import uuid4 @@ -14,14 +13,8 @@ import yaml # type: ignore from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad from flask_login import current_user -from graphon.enums import BuiltinNodeTypes -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes.llm.entities import LLMNodeData -from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData -from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData -from graphon.nodes.tool.entities import ToolNodeData from packaging import version -from pydantic import BaseModel, Field +from pydantic import BaseModel from sqlalchemy import select from sqlalchemy.orm import Session @@ -34,10 +27,17 @@ from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE from core.workflow.nodes.knowledge_retrieval.entities import KnowledgeRetrievalNodeData from extensions.ext_redis import redis_client from factories import variable_factory +from graphon.enums import BuiltinNodeTypes +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes.llm.entities import LLMNodeData +from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData +from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData +from graphon.nodes.tool.entities import ToolNodeData from models import Account from models.dataset import Dataset, DatasetCollectionBinding, Pipeline from models.enums import CollectionBindingType, DatasetRuntimeMode from models.workflow import Workflow, WorkflowType +from services.entities.dsl_entities import CheckDependenciesResult, ImportMode, ImportStatus from services.entities.knowledge_entities.rag_pipeline_entities import ( IconInfo, KnowledgeConfiguration, @@ -54,18 +54,6 @@ DSL_MAX_SIZE = 10 * 1024 * 1024 # 10MB CURRENT_DSL_VERSION = "0.1.0" -class ImportMode(StrEnum): - YAML_CONTENT = "yaml-content" - YAML_URL = "yaml-url" - - -class ImportStatus(StrEnum): - COMPLETED = "completed" - COMPLETED_WITH_WARNINGS = "completed-with-warnings" - PENDING = "pending" - FAILED = "failed" - - class RagPipelineImportInfo(BaseModel): id: str status: ImportStatus @@ -76,10 +64,6 @@ class RagPipelineImportInfo(BaseModel): dataset_id: str | None = None -class CheckDependenciesResult(BaseModel): - leaked_dependencies: list[PluginDependency] = Field(default_factory=list) - - def _check_version_compatibility(imported_version: str) -> ImportStatus: """Determine import status based on version comparison""" try: @@ -299,7 +283,9 @@ class RagPipelineDslService: ): raise ValueError("Chunk structure is not compatible with the published pipeline") if not dataset: - datasets = self._session.query(Dataset).filter_by(tenant_id=account.current_tenant_id).all() + datasets = self._session.scalars( + select(Dataset).where(Dataset.tenant_id == account.current_tenant_id) + ).all() names = [dataset.name for dataset in datasets] generate_name = generate_incremental_name(names, name) dataset = Dataset( @@ -319,8 +305,8 @@ class RagPipelineDslService: chunk_structure=knowledge_configuration.chunk_structure, ) if knowledge_configuration.indexing_technique == IndexTechniqueType.HIGH_QUALITY: - dataset_collection_binding = ( - self._session.query(DatasetCollectionBinding) + dataset_collection_binding = self._session.scalar( + select(DatasetCollectionBinding) .where( DatasetCollectionBinding.provider_name == knowledge_configuration.embedding_model_provider, @@ -328,7 +314,7 @@ class RagPipelineDslService: DatasetCollectionBinding.type == CollectionBindingType.DATASET, ) .order_by(DatasetCollectionBinding.created_at) - .first() + .limit(1) ) if not dataset_collection_binding: @@ -456,8 +442,8 @@ class RagPipelineDslService: dataset.runtime_mode = DatasetRuntimeMode.RAG_PIPELINE dataset.chunk_structure = knowledge_configuration.chunk_structure if knowledge_configuration.indexing_technique == IndexTechniqueType.HIGH_QUALITY: - dataset_collection_binding = ( - self._session.query(DatasetCollectionBinding) + dataset_collection_binding = self._session.scalar( + select(DatasetCollectionBinding) .where( DatasetCollectionBinding.provider_name == knowledge_configuration.embedding_model_provider, @@ -465,7 +451,7 @@ class RagPipelineDslService: DatasetCollectionBinding.type == CollectionBindingType.DATASET, ) .order_by(DatasetCollectionBinding.created_at) - .first() + .limit(1) ) if not dataset_collection_binding: @@ -540,7 +526,7 @@ class RagPipelineDslService: self, *, pipeline: Pipeline | None, - data: dict, + data: dict[str, Any], account: Account, dependencies: list[PluginDependency] | None = None, ) -> Pipeline: @@ -607,14 +593,14 @@ class RagPipelineDslService: IMPORT_INFO_REDIS_EXPIRY, CheckDependenciesPendingData(pipeline_id=pipeline.id, dependencies=dependencies).model_dump_json(), ) - workflow = ( - self._session.query(Workflow) + workflow = self._session.scalar( + select(Workflow) .where( Workflow.tenant_id == pipeline.tenant_id, Workflow.app_id == pipeline.id, Workflow.version == "draft", ) - .first() + .limit(1) ) # create draft workflow if not found @@ -674,21 +660,21 @@ class RagPipelineDslService: return yaml.dump(export_data, allow_unicode=True) # type: ignore - def _append_workflow_export_data(self, *, export_data: dict, pipeline: Pipeline, include_secret: bool) -> None: + def _append_workflow_export_data( + self, *, export_data: dict[str, Any], pipeline: Pipeline, include_secret: bool + ) -> None: """ Append workflow export data :param export_data: export data :param pipeline: Pipeline instance """ - workflow = ( - self._session.query(Workflow) - .where( + workflow = self._session.scalar( + select(Workflow).where( Workflow.tenant_id == pipeline.tenant_id, Workflow.app_id == pipeline.id, Workflow.version == "draft", ) - .first() ) if not workflow: raise ValueError("Missing draft workflow configuration, please check.") @@ -920,15 +906,16 @@ class RagPipelineDslService: ): if rag_pipeline_dataset_create_entity.name: # check if dataset name already exists - if ( - self._session.query(Dataset) - .filter_by(name=rag_pipeline_dataset_create_entity.name, tenant_id=tenant_id) - .first() + if self._session.scalar( + select(Dataset).where( + Dataset.name == rag_pipeline_dataset_create_entity.name, + Dataset.tenant_id == tenant_id, + ) ): raise ValueError(f"Dataset with name {rag_pipeline_dataset_create_entity.name} already exists.") else: # generate a random name as Untitled 1 2 3 ... - datasets = self._session.query(Dataset).filter_by(tenant_id=tenant_id).all() + datasets = self._session.scalars(select(Dataset).where(Dataset.tenant_id == tenant_id)).all() names = [dataset.name for dataset in datasets] rag_pipeline_dataset_create_entity.name = generate_incremental_name( names, diff --git a/api/services/rag_pipeline/rag_pipeline_transform_service.py b/api/services/rag_pipeline/rag_pipeline_transform_service.py index c3b00fe109..f08ec7474b 100644 --- a/api/services/rag_pipeline/rag_pipeline_transform_service.py +++ b/api/services/rag_pipeline/rag_pipeline_transform_service.py @@ -2,6 +2,7 @@ import json import logging from datetime import UTC, datetime from pathlib import Path +from typing import Any from uuid import uuid4 import yaml @@ -154,7 +155,7 @@ class RagPipelineTransformService: raise ValueError("Unsupported doc form") return pipeline_yaml - def _deal_file_extensions(self, node: dict): + def _deal_file_extensions(self, node: dict[str, Any]): file_extensions = node.get("data", {}).get("fileExtensions", []) if not file_extensions: return node @@ -167,7 +168,7 @@ class RagPipelineTransformService: dataset: Dataset, indexing_technique: str | None, retrieval_model: RetrievalSetting | None, - node: dict, + node: dict[str, Any], ): knowledge_configuration_dict = node.get("data", {}) @@ -191,7 +192,7 @@ class RagPipelineTransformService: def _create_pipeline( self, - data: dict, + data: dict[str, Any], ) -> Pipeline: """Create a new app or update an existing one.""" pipeline_data = data.get("rag_pipeline", {}) @@ -258,7 +259,7 @@ class RagPipelineTransformService: db.session.add(pipeline) return pipeline - def _deal_dependencies(self, pipeline_yaml: dict, tenant_id: str): + def _deal_dependencies(self, pipeline_yaml: dict[str, Any], tenant_id: str): installer_manager = PluginInstaller() installed_plugins = installer_manager.list_plugins(tenant_id) diff --git a/api/services/recommend_app/buildin/buildin_retrieval.py b/api/services/recommend_app/buildin/buildin_retrieval.py index 64751d186c..16dc66cd76 100644 --- a/api/services/recommend_app/buildin/buildin_retrieval.py +++ b/api/services/recommend_app/buildin/buildin_retrieval.py @@ -1,6 +1,7 @@ import json from os import path from pathlib import Path +from typing import Any from flask import current_app @@ -13,7 +14,7 @@ class BuildInRecommendAppRetrieval(RecommendAppRetrievalBase): Retrieval recommended app from buildin, the location is constants/recommended_apps.json """ - builtin_data: dict | None = None + builtin_data: dict[str, Any] | None = None def get_type(self) -> str: return RecommendAppType.BUILDIN @@ -53,7 +54,7 @@ class BuildInRecommendAppRetrieval(RecommendAppRetrievalBase): return builtin_data.get("recommended_apps", {}).get(language, {}) @classmethod - def fetch_recommended_app_detail_from_builtin(cls, app_id: str) -> dict | None: + def fetch_recommended_app_detail_from_builtin(cls, app_id: str) -> dict[str, Any] | None: """ Fetch recommended app detail from builtin. :param app_id: App ID diff --git a/api/services/recommend_app/database/database_retrieval.py b/api/services/recommend_app/database/database_retrieval.py index 6fb90d356d..1df5fd13b6 100644 --- a/api/services/recommend_app/database/database_retrieval.py +++ b/api/services/recommend_app/database/database_retrieval.py @@ -1,3 +1,5 @@ +from typing import Any, TypedDict + from sqlalchemy import select from constants.languages import languages @@ -8,16 +10,43 @@ from services.recommend_app.recommend_app_base import RecommendAppRetrievalBase from services.recommend_app.recommend_app_type import RecommendAppType +class RecommendedAppItemDict(TypedDict): + id: str + app: App | None + app_id: str + description: Any + copyright: Any + privacy_policy: Any + custom_disclaimer: str + category: str + position: int + is_listed: bool + + +class RecommendedAppsResultDict(TypedDict): + recommended_apps: list[RecommendedAppItemDict] + categories: list[str] + + +class RecommendedAppDetailDict(TypedDict): + id: str + name: str + icon: Any + icon_background: str | None + mode: str + export_data: str + + class DatabaseRecommendAppRetrieval(RecommendAppRetrievalBase): """ Retrieval recommended app from database """ - def get_recommended_apps_and_categories(self, language: str): + def get_recommended_apps_and_categories(self, language: str) -> RecommendedAppsResultDict: result = self.fetch_recommended_apps_from_db(language) return result - def get_recommend_app_detail(self, app_id: str): + def get_recommend_app_detail(self, app_id: str) -> RecommendedAppDetailDict | None: result = self.fetch_recommended_app_detail_from_db(app_id) return result @@ -25,7 +54,7 @@ class DatabaseRecommendAppRetrieval(RecommendAppRetrievalBase): return RecommendAppType.DATABASE @classmethod - def fetch_recommended_apps_from_db(cls, language: str): + def fetch_recommended_apps_from_db(cls, language: str) -> RecommendedAppsResultDict: """ Fetch recommended apps from db. :param language: language @@ -41,7 +70,7 @@ class DatabaseRecommendAppRetrieval(RecommendAppRetrievalBase): ).all() categories = set() - recommended_apps_result = [] + recommended_apps_result: list[RecommendedAppItemDict] = [] for recommended_app in recommended_apps: app = recommended_app.app if not app or not app.is_public: @@ -51,7 +80,7 @@ class DatabaseRecommendAppRetrieval(RecommendAppRetrievalBase): if not site: continue - recommended_app_result = { + recommended_app_result: RecommendedAppItemDict = { "id": recommended_app.id, "app": recommended_app.app, "app_id": recommended_app.app_id, @@ -67,10 +96,10 @@ class DatabaseRecommendAppRetrieval(RecommendAppRetrievalBase): categories.add(recommended_app.category) - return {"recommended_apps": recommended_apps_result, "categories": sorted(categories)} + return RecommendedAppsResultDict(recommended_apps=recommended_apps_result, categories=sorted(categories)) @classmethod - def fetch_recommended_app_detail_from_db(cls, app_id: str) -> dict | None: + def fetch_recommended_app_detail_from_db(cls, app_id: str) -> RecommendedAppDetailDict | None: """ Fetch recommended app detail from db. :param app_id: App ID @@ -89,11 +118,11 @@ class DatabaseRecommendAppRetrieval(RecommendAppRetrievalBase): if not app_model or not app_model.is_public: return None - return { - "id": app_model.id, - "name": app_model.name, - "icon": app_model.icon, - "icon_background": app_model.icon_background, - "mode": app_model.mode, - "export_data": AppDslService.export_dsl(app_model=app_model), - } + return RecommendedAppDetailDict( + id=app_model.id, + name=app_model.name, + icon=app_model.icon, + icon_background=app_model.icon_background, + mode=app_model.mode, + export_data=AppDslService.export_dsl(app_model=app_model), + ) diff --git a/api/services/recommend_app/remote/remote_retrieval.py b/api/services/recommend_app/remote/remote_retrieval.py index b217c9026a..5818be0480 100644 --- a/api/services/recommend_app/remote/remote_retrieval.py +++ b/api/services/recommend_app/remote/remote_retrieval.py @@ -1,4 +1,5 @@ import logging +from typing import Any import httpx @@ -35,7 +36,7 @@ class RemoteRecommendAppRetrieval(RecommendAppRetrievalBase): return RecommendAppType.REMOTE @classmethod - def fetch_recommended_app_detail_from_dify_official(cls, app_id: str) -> dict | None: + def fetch_recommended_app_detail_from_dify_official(cls, app_id: str) -> dict[str, Any] | None: """ Fetch recommended app detail from dify official. :param app_id: App ID @@ -46,7 +47,7 @@ class RemoteRecommendAppRetrieval(RecommendAppRetrievalBase): response = httpx.get(url, timeout=httpx.Timeout(10.0, connect=3.0)) if response.status_code != 200: return None - data: dict = response.json() + data: dict[str, Any] = response.json() return data @classmethod @@ -62,7 +63,7 @@ class RemoteRecommendAppRetrieval(RecommendAppRetrievalBase): if response.status_code != 200: raise ValueError(f"fetch recommended apps failed, status code: {response.status_code}") - result: dict = response.json() + result: dict[str, Any] = response.json() if "categories" in result: result["categories"] = sorted(result["categories"]) diff --git a/api/services/recommended_app_service.py b/api/services/recommended_app_service.py index 9819822103..134dd37a3e 100644 --- a/api/services/recommended_app_service.py +++ b/api/services/recommended_app_service.py @@ -1,3 +1,5 @@ +from typing import Any + from sqlalchemy import select from configs import dify_config @@ -37,7 +39,7 @@ class RecommendedAppService: return result @classmethod - def get_recommend_app_detail(cls, app_id: str) -> dict | None: + def get_recommend_app_detail(cls, app_id: str) -> dict[str, Any] | None: """ Get recommend app detail. :param app_id: app id @@ -45,7 +47,7 @@ class RecommendedAppService: """ mode = dify_config.HOSTED_FETCH_APP_TEMPLATES_MODE retrieval_instance = RecommendAppRetrievalFactory.get_recommend_app_factory(mode)() - result: dict = retrieval_instance.get_recommend_app_detail(app_id) + result: dict[str, Any] = retrieval_instance.get_recommend_app_detail(app_id) if FeatureService.get_system_features().enable_trial_app: app_id = result["id"] trial_app_model = db.session.scalar(select(TrialApp).where(TrialApp.app_id == app_id).limit(1)) diff --git a/api/services/retention/conversation/messages_clean_service.py b/api/services/retention/conversation/messages_clean_service.py index 0e0dbab2d1..1e9f0bf149 100644 --- a/api/services/retention/conversation/messages_clean_service.py +++ b/api/services/retention/conversation/messages_clean_service.py @@ -8,7 +8,7 @@ from typing import TYPE_CHECKING, TypedDict, cast import sqlalchemy as sa from sqlalchemy import delete, select, tuple_ from sqlalchemy.engine import CursorResult -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from extensions.ext_database import db @@ -369,7 +369,7 @@ class MessagesCleanService: batch_deleted_messages = 0 # Step 1: Fetch a batch of messages using cursor - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: fetch_messages_start = time.monotonic() msg_stmt = ( select(Message.id, Message.app_id, Message.created_at) @@ -477,7 +477,7 @@ class MessagesCleanService: # Step 4: Batch delete messages and their relations if not self._dry_run: - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: delete_relations_start = time.monotonic() # Delete related records first self._batch_delete_message_relations(session, message_ids_to_delete) @@ -489,9 +489,7 @@ class MessagesCleanService: delete_result = cast(CursorResult, session.execute(delete_stmt)) messages_deleted = delete_result.rowcount delete_messages_ms = int((time.monotonic() - delete_messages_start) * 1000) - commit_start = time.monotonic() - session.commit() - commit_ms = int((time.monotonic() - commit_start) * 1000) + commit_ms = 0 stats["total_deleted"] += messages_deleted batch_deleted_messages = messages_deleted diff --git a/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py b/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py index ab60986bfe..21be411bea 100644 --- a/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py +++ b/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py @@ -27,13 +27,13 @@ from dataclasses import dataclass, field from typing import Any, TypedDict import click -from graphon.enums import WorkflowType from sqlalchemy import inspect from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from enums.cloud_plan import CloudPlan from extensions.ext_database import db +from graphon.enums import WorkflowType from libs.archive_storage import ( ArchiveStorage, ArchiveStorageNotConfiguredError, diff --git a/api/services/summary_index_service.py b/api/services/summary_index_service.py index 8760d60de0..a91f49e9e6 100644 --- a/api/services/summary_index_service.py +++ b/api/services/summary_index_service.py @@ -6,8 +6,7 @@ import uuid from datetime import UTC, datetime from typing import TypedDict, cast -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.model_runtime.entities.model_entities import ModelType +from sqlalchemy import select from sqlalchemy.orm import Session from core.db.session_factory import session_factory @@ -17,6 +16,8 @@ from core.rag.index_processor.constant.doc_type import DocType from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict from core.rag.models.document import Document +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.model_runtime.entities.model_entities import ModelType from libs import helper from models.dataset import Dataset, DocumentSegment, DocumentSegmentSummary from models.dataset import Document as DatasetDocument @@ -109,8 +110,13 @@ class SummaryIndexService: """ with session_factory.create_session() as session: # Check if summary record already exists - existing_summary = ( - session.query(DocumentSegmentSummary).filter_by(chunk_id=segment.id, dataset_id=dataset.id).first() + existing_summary = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if existing_summary: @@ -309,8 +315,10 @@ class SummaryIndexService: summary_record_id, segment.id, ) - summary_record_in_session = ( - session.query(DocumentSegmentSummary).filter_by(id=summary_record_id).first() + summary_record_in_session = session.scalar( + select(DocumentSegmentSummary) + .where(DocumentSegmentSummary.id == summary_record_id) + .limit(1) ) if not summary_record_in_session: @@ -323,10 +331,13 @@ class SummaryIndexService: dataset.id, segment.id, ) - summary_record_in_session = ( - session.query(DocumentSegmentSummary) - .filter_by(chunk_id=segment.id, dataset_id=dataset.id) - .first() + summary_record_in_session = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if not summary_record_in_session: @@ -487,8 +498,10 @@ class SummaryIndexService: with session_factory.create_session() as error_session: # Try to find the record by id first # Note: Using assignment only (no type annotation) to avoid redeclaration error - summary_record_in_session = ( - error_session.query(DocumentSegmentSummary).filter_by(id=summary_record_id).first() + summary_record_in_session = error_session.scalar( + select(DocumentSegmentSummary) + .where(DocumentSegmentSummary.id == summary_record_id) + .limit(1) ) if not summary_record_in_session: # Try to find by chunk_id and dataset_id @@ -500,10 +513,13 @@ class SummaryIndexService: dataset.id, segment.id, ) - summary_record_in_session = ( - error_session.query(DocumentSegmentSummary) - .filter_by(chunk_id=segment.id, dataset_id=dataset.id) - .first() + summary_record_in_session = error_session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if summary_record_in_session: @@ -551,14 +567,12 @@ class SummaryIndexService: with session_factory.create_session() as session: # Query existing summary records - existing_summaries = ( - session.query(DocumentSegmentSummary) - .filter( + existing_summaries = session.scalars( + select(DocumentSegmentSummary).where( DocumentSegmentSummary.chunk_id.in_(segment_ids), DocumentSegmentSummary.dataset_id == dataset.id, ) - .all() - ) + ).all() existing_summary_map = {summary.chunk_id: summary for summary in existing_summaries} # Create or update records @@ -603,8 +617,13 @@ class SummaryIndexService: error: Error message """ with session_factory.create_session() as session: - summary_record = ( - session.query(DocumentSegmentSummary).filter_by(chunk_id=segment.id, dataset_id=dataset.id).first() + summary_record = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if summary_record: @@ -639,8 +658,13 @@ class SummaryIndexService: with session_factory.create_session() as session: try: # Get or refresh summary record in this session - summary_record_in_session = ( - session.query(DocumentSegmentSummary).filter_by(chunk_id=segment.id, dataset_id=dataset.id).first() + summary_record_in_session = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if not summary_record_in_session: @@ -710,8 +734,13 @@ class SummaryIndexService: except Exception as e: logger.exception("Failed to generate summary for segment %s", segment.id) # Update summary record with error status - summary_record_in_session = ( - session.query(DocumentSegmentSummary).filter_by(chunk_id=segment.id, dataset_id=dataset.id).first() + summary_record_in_session = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if summary_record_in_session: summary_record_in_session.status = SummaryStatus.ERROR @@ -769,17 +798,17 @@ class SummaryIndexService: with session_factory.create_session() as session: # Query segments (only enabled segments) - query = session.query(DocumentSegment).filter_by( - dataset_id=dataset.id, - document_id=document.id, - status="completed", - enabled=True, # Only generate summaries for enabled segments + stmt = select(DocumentSegment).where( + DocumentSegment.dataset_id == dataset.id, + DocumentSegment.document_id == document.id, + DocumentSegment.status == "completed", + DocumentSegment.enabled.is_(True), # Only generate summaries for enabled segments ) if segment_ids: - query = query.filter(DocumentSegment.id.in_(segment_ids)) + stmt = stmt.where(DocumentSegment.id.in_(segment_ids)) - segments = query.all() + segments = list(session.scalars(stmt).all()) if not segments: logger.info("No segments found for document %s", document.id) @@ -848,15 +877,15 @@ class SummaryIndexService: from libs.datetime_utils import naive_utc_now with session_factory.create_session() as session: - query = session.query(DocumentSegmentSummary).filter_by( - dataset_id=dataset.id, - enabled=True, # Only disable enabled summaries + stmt = select(DocumentSegmentSummary).where( + DocumentSegmentSummary.dataset_id == dataset.id, + DocumentSegmentSummary.enabled.is_(True), # Only disable enabled summaries ) if segment_ids: - query = query.filter(DocumentSegmentSummary.chunk_id.in_(segment_ids)) + stmt = stmt.where(DocumentSegmentSummary.chunk_id.in_(segment_ids)) - summaries = query.all() + summaries = session.scalars(stmt).all() if not summaries: return @@ -911,15 +940,15 @@ class SummaryIndexService: return with session_factory.create_session() as session: - query = session.query(DocumentSegmentSummary).filter_by( - dataset_id=dataset.id, - enabled=False, # Only enable disabled summaries + stmt = select(DocumentSegmentSummary).where( + DocumentSegmentSummary.dataset_id == dataset.id, + DocumentSegmentSummary.enabled.is_(False), # Only enable disabled summaries ) if segment_ids: - query = query.filter(DocumentSegmentSummary.chunk_id.in_(segment_ids)) + stmt = stmt.where(DocumentSegmentSummary.chunk_id.in_(segment_ids)) - summaries = query.all() + summaries = session.scalars(stmt).all() if not summaries: return @@ -935,13 +964,13 @@ class SummaryIndexService: enabled_count = 0 for summary in summaries: # Get the original segment - segment = ( - session.query(DocumentSegment) - .filter_by( - id=summary.chunk_id, - dataset_id=dataset.id, + segment = session.scalar( + select(DocumentSegment) + .where( + DocumentSegment.id == summary.chunk_id, + DocumentSegment.dataset_id == dataset.id, ) - .first() + .limit(1) ) # Summary.enabled stays in sync with chunk.enabled, @@ -988,12 +1017,12 @@ class SummaryIndexService: segment_ids: List of segment IDs to delete summaries for. If None, delete all. """ with session_factory.create_session() as session: - query = session.query(DocumentSegmentSummary).filter_by(dataset_id=dataset.id) + stmt = select(DocumentSegmentSummary).where(DocumentSegmentSummary.dataset_id == dataset.id) if segment_ids: - query = query.filter(DocumentSegmentSummary.chunk_id.in_(segment_ids)) + stmt = stmt.where(DocumentSegmentSummary.chunk_id.in_(segment_ids)) - summaries = query.all() + summaries = session.scalars(stmt).all() if not summaries: return @@ -1046,10 +1075,13 @@ class SummaryIndexService: # Check if summary_content is empty (whitespace-only strings are considered empty) if not summary_content or not summary_content.strip(): # If summary is empty, only delete existing summary vector and record - summary_record = ( - session.query(DocumentSegmentSummary) - .filter_by(chunk_id=segment.id, dataset_id=dataset.id) - .first() + summary_record = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if summary_record: @@ -1077,8 +1109,13 @@ class SummaryIndexService: return None # Find existing summary record - summary_record = ( - session.query(DocumentSegmentSummary).filter_by(chunk_id=segment.id, dataset_id=dataset.id).first() + summary_record = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if summary_record: @@ -1162,8 +1199,13 @@ class SummaryIndexService: except Exception as e: logger.exception("Failed to update summary for segment %s", segment.id) # Update summary record with error status if it exists - summary_record = ( - session.query(DocumentSegmentSummary).filter_by(chunk_id=segment.id, dataset_id=dataset.id).first() + summary_record = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset.id, + ) + .limit(1) ) if summary_record: summary_record.status = SummaryStatus.ERROR @@ -1185,14 +1227,14 @@ class SummaryIndexService: DocumentSegmentSummary instance if found, None otherwise """ with session_factory.create_session() as session: - return ( - session.query(DocumentSegmentSummary) + return session.scalar( + select(DocumentSegmentSummary) .where( DocumentSegmentSummary.chunk_id == segment_id, DocumentSegmentSummary.dataset_id == dataset_id, - DocumentSegmentSummary.enabled == True, # Only return enabled summaries + DocumentSegmentSummary.enabled.is_(True), # Only return enabled summaries ) - .first() + .limit(1) ) @staticmethod @@ -1211,15 +1253,13 @@ class SummaryIndexService: return {} with session_factory.create_session() as session: - summary_records = ( - session.query(DocumentSegmentSummary) - .where( + summary_records = session.scalars( + select(DocumentSegmentSummary).where( DocumentSegmentSummary.chunk_id.in_(segment_ids), DocumentSegmentSummary.dataset_id == dataset_id, - DocumentSegmentSummary.enabled == True, # Only return enabled summaries + DocumentSegmentSummary.enabled.is_(True), # Only return enabled summaries ) - .all() - ) + ).all() return {summary.chunk_id: summary for summary in summary_records} @@ -1239,16 +1279,16 @@ class SummaryIndexService: List of DocumentSegmentSummary instances (only enabled summaries) """ with session_factory.create_session() as session: - query = session.query(DocumentSegmentSummary).filter( + stmt = select(DocumentSegmentSummary).where( DocumentSegmentSummary.document_id == document_id, DocumentSegmentSummary.dataset_id == dataset_id, - DocumentSegmentSummary.enabled == True, # Only return enabled summaries + DocumentSegmentSummary.enabled.is_(True), # Only return enabled summaries ) if segment_ids: - query = query.filter(DocumentSegmentSummary.chunk_id.in_(segment_ids)) + stmt = stmt.where(DocumentSegmentSummary.chunk_id.in_(segment_ids)) - return query.all() + return list(session.scalars(stmt).all()) @staticmethod def get_document_summary_index_status(document_id: str, dataset_id: str, tenant_id: str) -> str | None: @@ -1265,16 +1305,15 @@ class SummaryIndexService: """ # Get all segments for this document (excluding qa_model and re_segment) with session_factory.create_session() as session: - segments = ( - session.query(DocumentSegment.id) - .where( - DocumentSegment.document_id == document_id, - DocumentSegment.status != "re_segment", - DocumentSegment.tenant_id == tenant_id, - ) - .all() + segment_ids = list( + session.scalars( + select(DocumentSegment.id).where( + DocumentSegment.document_id == document_id, + DocumentSegment.status != "re_segment", + DocumentSegment.tenant_id == tenant_id, + ) + ).all() ) - segment_ids = [seg.id for seg in segments] if not segment_ids: return None @@ -1312,15 +1351,13 @@ class SummaryIndexService: # Get all segments for these documents (excluding qa_model and re_segment) with session_factory.create_session() as session: - segments = ( - session.query(DocumentSegment.id, DocumentSegment.document_id) - .where( + segments = session.execute( + select(DocumentSegment.id, DocumentSegment.document_id).where( DocumentSegment.document_id.in_(document_ids), DocumentSegment.status != "re_segment", DocumentSegment.tenant_id == tenant_id, ) - .all() - ) + ).all() # Group segments by document_id document_segments_map: dict[str, list[str]] = {} diff --git a/api/services/tools/api_tools_manage_service.py b/api/services/tools/api_tools_manage_service.py index dfc0c2c63f..5ff2c21749 100644 --- a/api/services/tools/api_tools_manage_service.py +++ b/api/services/tools/api_tools_manage_service.py @@ -2,9 +2,9 @@ import json import logging from typing import Any, TypedDict, cast -from graphon.model_runtime.utils.encoders import jsonable_encoder from httpx import get from sqlalchemy import select +from sqlalchemy.orm import sessionmaker from core.entities.provider_entities import ProviderConfig from core.tools.__base.tool_runtime import ToolRuntime @@ -16,11 +16,13 @@ from core.tools.entities.tool_entities import ( ApiProviderAuthType, ApiProviderSchemaType, ) +from core.tools.errors import ApiToolProviderNotFoundError from core.tools.tool_label_manager import ToolLabelManager from core.tools.tool_manager import ToolManager from core.tools.utils.encryption import create_tool_provider_encrypter from core.tools.utils.parser import ApiBasedToolSchemaParser from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.tools import ApiToolProvider from services.tools.tools_transform_service import ToolTransformService @@ -92,7 +94,7 @@ class ApiToolManageService: @staticmethod def convert_schema_to_tool_bundles( - schema: str, extra_info: dict | None = None + schema: str, extra_info: dict[str, Any] | None = None ) -> tuple[list[ApiToolBundle], ApiProviderSchemaType]: """ convert schema to tool bundles @@ -109,78 +111,92 @@ class ApiToolManageService: user_id: str, tenant_id: str, provider_name: str, - icon: dict, - credentials: dict, + icon: dict[str, Any], + credentials: dict[str, Any], schema_type: ApiProviderSchemaType, schema: str, privacy_policy: str, custom_disclaimer: str, labels: list[str], - ): + ) -> dict[str, Any]: """ - create api tool provider + Create a new API tool provider. + + :param user_id: The ID of the user creating the provider. + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The name of the API tool provider. + :param icon: The icon configuration for the provider. + :param credentials: The credentials for the provider. + :param schema_type: The type of schema (e.g., OpenAPI). + :param schema: The raw schema string. + :param privacy_policy: The privacy policy URL or text. + :param custom_disclaimer: Custom disclaimer text. + :param labels: A list of labels for the provider. + :return: A dictionary indicating the result status. """ + provider_name = provider_name.strip() # check if the provider exists - provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + # Create new session with automatic transaction management + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider: ApiToolProvider | None = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) - if provider is not None: - raise ValueError(f"provider {provider_name} already exists") + if provider is not None: + raise ValueError(f"provider {provider_name} already exists") - # parse openapi to tool bundle - extra_info: dict[str, str] = {} - # extra info like description will be set here - tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) + # parse openapi to tool bundle + extra_info: dict[str, str] = {} + # extra info like description will be set here + tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) - if len(tool_bundles) > 100: - raise ValueError("the number of apis should be less than 100") + if len(tool_bundles) > 100: + raise ValueError("the number of apis should be less than 100") - # create db provider - db_provider = ApiToolProvider( - tenant_id=tenant_id, - user_id=user_id, - name=provider_name, - icon=json.dumps(icon), - schema=schema, - description=extra_info.get("description", ""), - schema_type_str=schema_type, - tools_str=json.dumps(jsonable_encoder(tool_bundles)), - credentials_str="{}", - privacy_policy=privacy_policy, - custom_disclaimer=custom_disclaimer, - ) + # create API tool provider + api_tool_provider = ApiToolProvider( + tenant_id=tenant_id, + user_id=user_id, + name=provider_name, + icon=json.dumps(icon), + schema=schema, + description=extra_info.get("description", ""), + schema_type_str=schema_type, + tools_str=json.dumps(jsonable_encoder(tool_bundles)), + credentials_str="{}", + privacy_policy=privacy_policy, + custom_disclaimer=custom_disclaimer, + ) - if "auth_type" not in credentials: - raise ValueError("auth_type is required") + if "auth_type" not in credentials: + raise ValueError("auth_type is required") - # get auth type, none or api key - auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) + # get auth type, none or api key + auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) - # create provider entity - provider_controller = ApiToolProviderController.from_db(db_provider, auth_type) - # load tools into provider entity - provider_controller.load_bundled_tools(tool_bundles) + # create provider entity + provider_controller = ApiToolProviderController.from_db(api_tool_provider, auth_type) + # load tools into provider entity + provider_controller.load_bundled_tools(tool_bundles) - # encrypt credentials - encrypter, _ = create_tool_provider_encrypter( - tenant_id=tenant_id, - controller=provider_controller, - ) - db_provider.credentials_str = json.dumps(encrypter.encrypt(credentials)) + # encrypt credentials + encrypter, _ = create_tool_provider_encrypter( + tenant_id=tenant_id, + controller=provider_controller, + ) + api_tool_provider.credentials_str = json.dumps(encrypter.encrypt(credentials)) - db.session.add(db_provider) - db.session.commit() + _session.add(api_tool_provider) - # update labels - ToolLabelManager.update_tool_labels(provider_controller, labels) + # update labels + ToolLabelManager.update_tool_labels(provider_controller, labels, _session) return {"result": "success"} @@ -212,16 +228,25 @@ class ApiToolManageService: @staticmethod def list_api_tool_provider_tools(user_id: str, tenant_id: str, provider_name: str) -> list[ToolApiEntity]: """ - list api tool provider tools + List tools provided by a specific API tool provider. + + :param user_id: The ID of the user requesting the list. + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The name of the API tool provider. + :return: A list of ToolApiEntity objects. """ - provider: ApiToolProvider | None = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + + # create new session with automatic transaction management + provider: ApiToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) if provider is None: raise ValueError(f"you have not added provider {provider_name}") @@ -244,110 +269,140 @@ class ApiToolManageService: tenant_id: str, provider_name: str, original_provider: str, - icon: dict, - credentials: dict, + icon: dict[str, Any], + credentials: dict[str, Any], _schema_type: ApiProviderSchemaType, schema: str, privacy_policy: str | None, custom_disclaimer: str, labels: list[str], - ): + ) -> dict[str, Any]: """ - update api tool provider + Update an existing API tool provider. + + :param user_id: The ID of the user updating the provider. + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The new name of the API tool provider. + :param original_provider: The original name of the API tool provider. + :param icon: The icon configuration for the provider. + :param credentials: The credentials for the provider. + :param _schema_type: The type of schema (e.g., OpenAPI). + :param schema: The raw schema string. + :param privacy_policy: The privacy policy URL or text. + :param custom_disclaimer: Custom disclaimer text. + :param labels: A list of labels for the provider. + :return: A dictionary indicating the result status. """ + provider_name = provider_name.strip() # check if the provider exists - provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == original_provider, + # create new session with automatic transaction management + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider: ApiToolProvider | None = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == original_provider, + ) + .limit(1) ) - .limit(1) - ) - if provider is None: - raise ValueError(f"api provider {provider_name} does not exists") - # parse openapi to tool bundle - extra_info: dict[str, str] = {} - # extra info like description will be set here - tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) + if provider is None: + raise ApiToolProviderNotFoundError(provider_name=original_provider, tenant_id=tenant_id) - # update db provider - provider.name = provider_name - provider.icon = json.dumps(icon) - provider.schema = schema - provider.description = extra_info.get("description", "") - provider.schema_type_str = schema_type - provider.tools_str = json.dumps(jsonable_encoder(tool_bundles)) - provider.privacy_policy = privacy_policy - provider.custom_disclaimer = custom_disclaimer + # parse openapi to tool bundle + extra_info: dict[str, str] = {} + # extra info like description will be set here + tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) - if "auth_type" not in credentials: - raise ValueError("auth_type is required") + # update db provider + provider.name = provider_name + provider.icon = json.dumps(icon) + provider.schema = schema + provider.description = extra_info.get("description", "") + provider.schema_type_str = schema_type + provider.tools_str = json.dumps(jsonable_encoder(tool_bundles)) + provider.privacy_policy = privacy_policy + provider.custom_disclaimer = custom_disclaimer - # get auth type, none or api key - auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) + if "auth_type" not in credentials: + raise ValueError("auth_type is required") - # create provider entity - provider_controller = ApiToolProviderController.from_db(provider, auth_type) - # load tools into provider entity - provider_controller.load_bundled_tools(tool_bundles) + # get auth type, none or api key + auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) - # get original credentials if exists - encrypter, cache = create_tool_provider_encrypter( - tenant_id=tenant_id, - controller=provider_controller, - ) + # create provider entity + provider_controller = ApiToolProviderController.from_db(provider, auth_type) + # load tools into provider entity + provider_controller.load_bundled_tools(tool_bundles) - original_credentials = encrypter.decrypt(provider.credentials) - masked_credentials = encrypter.mask_plugin_credentials(original_credentials) - # check if the credential has changed, save the original credential - for name, value in credentials.items(): - if name in masked_credentials and value == masked_credentials[name]: - credentials[name] = original_credentials[name] + # get original credentials if exists + encrypter, cache = create_tool_provider_encrypter( + tenant_id=tenant_id, + controller=provider_controller, + ) - credentials = dict(encrypter.encrypt(credentials)) - provider.credentials_str = json.dumps(credentials) + original_credentials = encrypter.decrypt(provider.credentials) + masked_credentials = encrypter.mask_plugin_credentials(original_credentials) - db.session.add(provider) - db.session.commit() + # check if the credential has changed, save the original credential + for name, value in credentials.items(): + if name in masked_credentials and value == masked_credentials[name]: + credentials[name] = original_credentials[name] + + credentials = dict(encrypter.encrypt(credentials)) + provider.credentials_str = json.dumps(credentials) + + _session.add(provider) + + # update labels + ToolLabelManager.update_tool_labels(provider_controller, labels, _session) # delete cache cache.delete() - # update labels - ToolLabelManager.update_tool_labels(provider_controller, labels) - return {"result": "success"} @staticmethod def delete_api_tool_provider(user_id: str, tenant_id: str, provider_name: str): """ - delete tool provider + Delete an API tool provider. + + :param user_id: The ID of the user performing the deletion operation. + :param tenant_id: The ID of the workspace/tenant where the provider belongs. + :param provider_name: The unique name of the API tool provider to be deleted. + :raises ValueError: If the specified provider does not exist in the tenant. + :return: A dictionary indicating the result status. """ - provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + + # create new session with automatic transaction management + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider: ApiToolProvider | None = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) - if provider is None: - raise ValueError(f"you have not added provider {provider_name}") + if provider is None: + raise ValueError(f"you have not added provider {provider_name}") - db.session.delete(provider) - db.session.commit() + _session.delete(provider) return {"result": "success"} @staticmethod - def get_api_tool_provider(user_id: str, tenant_id: str, provider: str): + def get_api_tool_provider(user_id: str, tenant_id: str, provider: str) -> dict[str, Any]: """ - get api tool provider + Get API tool provider details. + + :param user_id: The ID of the user requesting the provider. + :param tenant_id: The ID of the workspace/tenant. + :param provider: The name of the API tool provider. + :return: A dictionary containing the provider details. """ return ToolManager.user_get_api_provider(provider=provider, tenant_id=tenant_id) @@ -356,14 +411,24 @@ class ApiToolManageService: tenant_id: str, provider_name: str, tool_name: str, - credentials: dict, - parameters: dict, + credentials: dict[str, Any], + parameters: dict[str, Any], schema_type: ApiProviderSchemaType, schema: str, - ): + ) -> dict[str, Any]: """ - test api tool before adding api tool provider + Test an API tool before adding the API tool provider. + + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The name of the API tool provider. + :param tool_name: The name of the specific tool to test. + :param credentials: The credentials for the provider. + :param parameters: The parameters to pass to the tool. + :param schema_type: The type of schema (e.g., OpenAPI). + :param schema: The raw schema string. + :return: A dictionary containing the result or error message. """ + if schema_type not in [member.value for member in ApiProviderSchemaType]: raise ValueError(f"invalid schema type {schema_type}") @@ -377,18 +442,21 @@ class ApiToolManageService: if tool_bundle is None: raise ValueError(f"invalid tool name {tool_name}") - db_provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + # create new session with automatic transaction management to get the provider + provider: ApiToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) - if not db_provider: + if provider is None: # create a fake db provider - db_provider = ApiToolProvider( + provider = ApiToolProvider( tenant_id="", user_id="", name="", @@ -407,12 +475,12 @@ class ApiToolManageService: auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) # create provider entity - provider_controller = ApiToolProviderController.from_db(db_provider, auth_type) + provider_controller = ApiToolProviderController.from_db(provider, auth_type) # load tools into provider entity provider_controller.load_bundled_tools(tool_bundles) # decrypt credentials - if db_provider.id: + if provider.id: encrypter, _ = create_tool_provider_encrypter( tenant_id=tenant_id, controller=provider_controller, @@ -443,14 +511,21 @@ class ApiToolManageService: @staticmethod def list_api_tools(tenant_id: str) -> list[ToolProviderApiEntity]: """ - list api tools + List all API tools for a specific tenant. + + :param tenant_id: The ID of the workspace/tenant. + :return: A list of ToolProviderApiEntity objects. """ # get all api providers - db_providers = db.session.scalars(select(ApiToolProvider).where(ApiToolProvider.tenant_id == tenant_id)).all() + # create new session with automatic transaction management + providers: list[ApiToolProvider] = [] + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + providers = list( + _session.scalars(select(ApiToolProvider).where(ApiToolProvider.tenant_id == tenant_id)).all() + ) result: list[ToolProviderApiEntity] = [] - - for provider in db_providers: + for provider in providers: # convert provider controller to user provider provider_controller = ToolTransformService.api_provider_to_controller(db_provider=provider) labels = ToolLabelManager.get_tool_labels(provider_controller) diff --git a/api/services/tools/builtin_tools_manage_service.py b/api/services/tools/builtin_tools_manage_service.py index d529d2f065..7bd056b8a0 100644 --- a/api/services/tools/builtin_tools_manage_service.py +++ b/api/services/tools/builtin_tools_manage_service.py @@ -4,8 +4,8 @@ from collections.abc import Mapping from pathlib import Path from typing import Any -from sqlalchemy import exists, select -from sqlalchemy.orm import Session +from sqlalchemy import delete, exists, func, select, update +from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from constants import HIDDEN_VALUE, UNKNOWN_VALUE @@ -46,13 +46,16 @@ class BuiltinToolManageService: delete custom oauth client params """ tool_provider = ToolProviderID(provider) - with Session(db.engine) as session: - session.query(ToolOAuthTenantClient).filter_by( - tenant_id=tenant_id, - provider=tool_provider.provider_name, - plugin_id=tool_provider.plugin_id, - ).delete() - session.commit() + with sessionmaker(bind=db.engine).begin() as session: + session.execute( + delete(ToolOAuthTenantClient) + .where( + ToolOAuthTenantClient.tenant_id == tenant_id, + ToolOAuthTenantClient.provider == tool_provider.provider_name, + ToolOAuthTenantClient.plugin_id == tool_provider.plugin_id, + ) + .execution_options(synchronize_session=False) + ) return {"result": "success"} @staticmethod @@ -144,21 +147,21 @@ class BuiltinToolManageService: tenant_id: str, provider: str, credential_id: str, - credentials: dict | None = None, + credentials: dict[str, Any] | None = None, name: str | None = None, ): """ update builtin tool provider """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: # get if the provider exists - db_provider = ( - session.query(BuiltinToolProvider) + db_provider = session.scalar( + select(BuiltinToolProvider) .where( BuiltinToolProvider.tenant_id == tenant_id, BuiltinToolProvider.id == credential_id, ) - .first() + .limit(1) ) if db_provider is None: raise ValueError(f"you have not added provider {provider}") @@ -174,7 +177,7 @@ class BuiltinToolManageService: ) original_credentials = encrypter.decrypt(db_provider.credentials) - new_credentials: dict = { + new_credentials: dict[str, Any] = { key: value if value != HIDDEN_VALUE else original_credentials.get(key, UNKNOWN_VALUE) for key, value in credentials.items() } @@ -203,9 +206,7 @@ class BuiltinToolManageService: db_provider.name = name - session.commit() except Exception as e: - session.rollback() raise ValueError(str(e)) return {"result": "success"} @@ -215,14 +216,14 @@ class BuiltinToolManageService: api_type: CredentialType, tenant_id: str, provider: str, - credentials: dict, + credentials: dict[str, Any], expires_at: int = -1, name: str | None = None, ): """ add builtin tool provider """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: try: lock = f"builtin_tool_provider_create_lock:{tenant_id}_{provider}" with redis_client.lock(lock, timeout=20): @@ -231,7 +232,13 @@ class BuiltinToolManageService: raise ValueError(f"provider {provider} does not need credentials") provider_count = ( - session.query(BuiltinToolProvider).filter_by(tenant_id=tenant_id, provider=provider).count() + session.scalar( + select(func.count(BuiltinToolProvider.id)).where( + BuiltinToolProvider.tenant_id == tenant_id, + BuiltinToolProvider.provider == provider, + ) + ) + or 0 ) # check if the provider count is reached the limit @@ -281,9 +288,7 @@ class BuiltinToolManageService: ) session.add(db_provider) - session.commit() except Exception as e: - session.rollback() raise ValueError(str(e)) return {"result": "success"} @@ -309,16 +314,15 @@ class BuiltinToolManageService: def generate_builtin_tool_provider_name( session: Session, tenant_id: str, provider: str, credential_type: CredentialType ) -> str: - db_providers = ( - session.query(BuiltinToolProvider) - .filter_by( - tenant_id=tenant_id, - provider=provider, - credential_type=credential_type, + db_providers = session.scalars( + select(BuiltinToolProvider) + .where( + BuiltinToolProvider.tenant_id == tenant_id, + BuiltinToolProvider.provider == provider, + BuiltinToolProvider.credential_type == credential_type, ) .order_by(BuiltinToolProvider.created_at.desc()) - .all() - ) + ).all() return generate_incremental_name( [provider.name for provider in db_providers], f"{credential_type.get_name()}", @@ -379,21 +383,20 @@ class BuiltinToolManageService: """ delete tool provider """ - with Session(db.engine) as session: - db_provider = ( - session.query(BuiltinToolProvider) + with sessionmaker(bind=db.engine).begin() as session: + db_provider = session.scalar( + select(BuiltinToolProvider) .where( BuiltinToolProvider.tenant_id == tenant_id, BuiltinToolProvider.id == credential_id, ) - .first() + .limit(1) ) if db_provider is None: raise ValueError(f"you have not added provider {provider}") session.delete(db_provider) - session.commit() # delete cache provider_controller = ToolManager.get_builtin_provider(provider, tenant_id) @@ -409,20 +412,31 @@ class BuiltinToolManageService: """ set default provider """ - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: # get provider - target_provider = session.query(BuiltinToolProvider).filter_by(id=id, tenant_id=tenant_id).first() + target_provider = session.scalar( + select(BuiltinToolProvider) + .where(BuiltinToolProvider.id == id, BuiltinToolProvider.tenant_id == tenant_id) + .limit(1) + ) if target_provider is None: raise ValueError("provider not found") # clear default provider - session.query(BuiltinToolProvider).filter_by( - tenant_id=tenant_id, user_id=user_id, provider=provider, is_default=True - ).update({"is_default": False}) + session.execute( + update(BuiltinToolProvider) + .where( + BuiltinToolProvider.tenant_id == tenant_id, + BuiltinToolProvider.user_id == user_id, + BuiltinToolProvider.provider == provider, + BuiltinToolProvider.is_default.is_(True), + ) + .values(is_default=False) + .execution_options(synchronize_session=False) + ) # set new default provider target_provider.is_default = True - session.commit() return {"result": "success"} @@ -433,10 +447,13 @@ class BuiltinToolManageService: """ tool_provider = ToolProviderID(provider_name) with Session(db.engine, autoflush=False) as session: - system_client: ToolOAuthSystemClient | None = ( - session.query(ToolOAuthSystemClient) - .filter_by(plugin_id=tool_provider.plugin_id, provider=tool_provider.provider_name) - .first() + system_client = session.scalar( + select(ToolOAuthSystemClient) + .where( + ToolOAuthSystemClient.plugin_id == tool_provider.plugin_id, + ToolOAuthSystemClient.provider == tool_provider.provider_name, + ) + .limit(1) ) return system_client is not None @@ -447,15 +464,15 @@ class BuiltinToolManageService: """ tool_provider = ToolProviderID(provider) with Session(db.engine, autoflush=False) as session: - user_client: ToolOAuthTenantClient | None = ( - session.query(ToolOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - provider=tool_provider.provider_name, - plugin_id=tool_provider.plugin_id, - enabled=True, + user_client = session.scalar( + select(ToolOAuthTenantClient) + .where( + ToolOAuthTenantClient.tenant_id == tenant_id, + ToolOAuthTenantClient.provider == tool_provider.provider_name, + ToolOAuthTenantClient.plugin_id == tool_provider.plugin_id, + ToolOAuthTenantClient.enabled.is_(True), ) - .first() + .limit(1) ) return user_client is not None and user_client.enabled @@ -472,15 +489,15 @@ class BuiltinToolManageService: cache=NoOpProviderCredentialCache(), ) with Session(db.engine, autoflush=False) as session: - user_client: ToolOAuthTenantClient | None = ( - session.query(ToolOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - provider=tool_provider.provider_name, - plugin_id=tool_provider.plugin_id, - enabled=True, + user_client = session.scalar( + select(ToolOAuthTenantClient) + .where( + ToolOAuthTenantClient.tenant_id == tenant_id, + ToolOAuthTenantClient.provider == tool_provider.provider_name, + ToolOAuthTenantClient.plugin_id == tool_provider.plugin_id, + ToolOAuthTenantClient.enabled.is_(True), ) - .first() + .limit(1) ) oauth_params: Mapping[str, Any] | None = None if user_client: @@ -494,10 +511,13 @@ class BuiltinToolManageService: if not is_verified: return oauth_params - system_client: ToolOAuthSystemClient | None = ( - session.query(ToolOAuthSystemClient) - .filter_by(plugin_id=tool_provider.plugin_id, provider=tool_provider.provider_name) - .first() + system_client = session.scalar( + select(ToolOAuthSystemClient) + .where( + ToolOAuthSystemClient.plugin_id == tool_provider.plugin_id, + ToolOAuthSystemClient.provider == tool_provider.provider_name, + ) + .limit(1) ) if system_client: try: @@ -589,8 +609,8 @@ class BuiltinToolManageService: provider_name = provider_id_entity.provider_name if provider_id_entity.organization != "langgenius": - provider = ( - session.query(BuiltinToolProvider) + provider = session.scalar( + select(BuiltinToolProvider) .where( BuiltinToolProvider.tenant_id == tenant_id, BuiltinToolProvider.provider == full_provider_name, @@ -599,11 +619,11 @@ class BuiltinToolManageService: BuiltinToolProvider.is_default.desc(), # default=True first BuiltinToolProvider.created_at.asc(), # oldest first ) - .first() + .limit(1) ) else: - provider = ( - session.query(BuiltinToolProvider) + provider = session.scalar( + select(BuiltinToolProvider) .where( BuiltinToolProvider.tenant_id == tenant_id, (BuiltinToolProvider.provider == provider_name) @@ -613,7 +633,7 @@ class BuiltinToolManageService: BuiltinToolProvider.is_default.desc(), # default=True first BuiltinToolProvider.created_at.asc(), # oldest first ) - .first() + .limit(1) ) if provider is None: @@ -623,21 +643,21 @@ class BuiltinToolManageService: return provider except Exception: # it's an old provider without organization - return ( - session.query(BuiltinToolProvider) + return session.scalar( + select(BuiltinToolProvider) .where(BuiltinToolProvider.tenant_id == tenant_id, BuiltinToolProvider.provider == provider_name) .order_by( BuiltinToolProvider.is_default.desc(), # default=True first BuiltinToolProvider.created_at.asc(), # oldest first ) - .first() + .limit(1) ) @staticmethod def save_custom_oauth_client_params( tenant_id: str, provider: str, - client_params: dict | None = None, + client_params: dict[str, Any] | None = None, enable_oauth_custom_client: bool | None = None, ): """ @@ -654,15 +674,15 @@ class BuiltinToolManageService: if not isinstance(provider_controller, (BuiltinToolProviderController, PluginToolProviderController)): raise ValueError(f"Provider {provider} is not a builtin or plugin provider") - with Session(db.engine) as session: - custom_client_params = ( - session.query(ToolOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - plugin_id=tool_provider.plugin_id, - provider=tool_provider.provider_name, + with sessionmaker(bind=db.engine).begin() as session: + custom_client_params = session.scalar( + select(ToolOAuthTenantClient) + .where( + ToolOAuthTenantClient.tenant_id == tenant_id, + ToolOAuthTenantClient.plugin_id == tool_provider.plugin_id, + ToolOAuthTenantClient.provider == tool_provider.provider_name, ) - .first() + .limit(1) ) # if the record does not exist, create a basic record @@ -690,7 +710,6 @@ class BuiltinToolManageService: if enable_oauth_custom_client is not None: custom_client_params.enabled = enable_oauth_custom_client - session.commit() return {"result": "success"} @staticmethod @@ -700,14 +719,14 @@ class BuiltinToolManageService: """ with Session(db.engine) as session: tool_provider = ToolProviderID(provider) - custom_oauth_client_params: ToolOAuthTenantClient | None = ( - session.query(ToolOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - plugin_id=tool_provider.plugin_id, - provider=tool_provider.provider_name, + custom_oauth_client_params = session.scalar( + select(ToolOAuthTenantClient) + .where( + ToolOAuthTenantClient.tenant_id == tenant_id, + ToolOAuthTenantClient.plugin_id == tool_provider.plugin_id, + ToolOAuthTenantClient.provider == tool_provider.provider_name, ) - .first() + .limit(1) ) if custom_oauth_client_params is None: return {} diff --git a/api/services/tools/mcp_tools_manage_service.py b/api/services/tools/mcp_tools_manage_service.py index 690b06ea7d..89762d6772 100644 --- a/api/services/tools/mcp_tools_manage_service.py +++ b/api/services/tools/mcp_tools_manage_service.py @@ -17,6 +17,7 @@ from core.helper import encrypter from core.helper.provider_cache import NoOpProviderCredentialCache from core.mcp.auth.auth_flow import auth from core.mcp.auth_client import MCPClientWithAuthRetry +from core.mcp.entities import AuthActionType, AuthResult from core.mcp.error import MCPAuthError, MCPError from core.mcp.types import Tool as MCPTool from core.tools.entities.api_entities import ToolProviderApiEntity @@ -496,7 +497,13 @@ class MCPToolManageService: ) as mcp_client: return mcp_client.list_tools() - def execute_auth_actions(self, auth_result: Any) -> dict[str, str]: + _ACTION_TO_OAUTH: dict[AuthActionType, OAuthDataType] = { + AuthActionType.SAVE_CLIENT_INFO: OAuthDataType.CLIENT_INFO, + AuthActionType.SAVE_TOKENS: OAuthDataType.TOKENS, + AuthActionType.SAVE_CODE_VERIFIER: OAuthDataType.CODE_VERIFIER, + } + + def execute_auth_actions(self, auth_result: AuthResult) -> dict[str, str]: """ Execute the actions returned by the auth function. @@ -508,19 +515,13 @@ class MCPToolManageService: Returns: The response from the auth result """ - from core.mcp.entities import AuthAction, AuthActionType - - action: AuthAction for action in auth_result.actions: if action.provider_id is None or action.tenant_id is None: continue - if action.action_type == AuthActionType.SAVE_CLIENT_INFO: - self.save_oauth_data(action.provider_id, action.tenant_id, action.data, OAuthDataType.CLIENT_INFO) - elif action.action_type == AuthActionType.SAVE_TOKENS: - self.save_oauth_data(action.provider_id, action.tenant_id, action.data, OAuthDataType.TOKENS) - elif action.action_type == AuthActionType.SAVE_CODE_VERIFIER: - self.save_oauth_data(action.provider_id, action.tenant_id, action.data, OAuthDataType.CODE_VERIFIER) + oauth_type = self._ACTION_TO_OAUTH.get(action.action_type) + if oauth_type is not None: + self.save_oauth_data(action.provider_id, action.tenant_id, action.data, oauth_type) return auth_result.response diff --git a/api/services/tools/tools_transform_service.py b/api/services/tools/tools_transform_service.py index b24f001133..47aca9b0af 100644 --- a/api/services/tools/tools_transform_service.py +++ b/api/services/tools/tools_transform_service.py @@ -1,6 +1,6 @@ import logging from collections.abc import Mapping -from typing import Any, Union +from typing import Any from pydantic import TypeAdapter, ValidationError from yarl import URL @@ -48,24 +48,30 @@ class ToolTransformService: URL(dify_config.CONSOLE_API_URL or "/") / "console" / "api" / "workspaces" / "current" / "tool-provider" ) - if provider_type == ToolProviderType.BUILT_IN: - return str(url_prefix / "builtin" / provider_name / "icon") - elif provider_type in {ToolProviderType.API, ToolProviderType.WORKFLOW}: - try: - if isinstance(icon, str): - parsed = emoji_icon_adapter.validate_json(icon) - return {"background": parsed["background"], "content": parsed["content"]} - return {"background": icon["background"], "content": icon["content"]} - except (ValueError, ValidationError, KeyError): - return {"background": "#252525", "content": "\ud83d\ude01"} - elif provider_type == ToolProviderType.MCP: - if isinstance(icon, Mapping): - return {"background": icon.get("background", ""), "content": icon.get("content", "")} - return icon - return "" + match provider_type: + case ToolProviderType.BUILT_IN: + return str(url_prefix / "builtin" / provider_name / "icon") + case ToolProviderType.API | ToolProviderType.WORKFLOW: + try: + if isinstance(icon, str): + parsed = emoji_icon_adapter.validate_json(icon) + return {"background": parsed["background"], "content": parsed["content"]} + return {"background": icon["background"], "content": icon["content"]} + except (ValueError, ValidationError, KeyError): + return {"background": "#252525", "content": "\ud83d\ude01"} + case ToolProviderType.MCP: + if isinstance(icon, Mapping): + return {"background": icon.get("background", ""), "content": icon.get("content", "")} + return icon + case ToolProviderType.PLUGIN | ToolProviderType.APP | ToolProviderType.DATASET_RETRIEVAL: + return "" + case _: + return "" @staticmethod - def repack_provider(tenant_id: str, provider: Union[dict, ToolProviderApiEntity, PluginDatasourceProviderEntity]): + def repack_provider( + tenant_id: str, provider: dict[str, Any] | ToolProviderApiEntity | PluginDatasourceProviderEntity + ): """ repack provider @@ -422,7 +428,7 @@ class ToolTransformService: @staticmethod def convert_builtin_provider_to_credential_entity( - provider: BuiltinToolProvider, credentials: dict + provider: BuiltinToolProvider, credentials: dict[str, Any] ) -> ToolProviderCredentialApiEntity: return ToolProviderCredentialApiEntity( id=provider.id, diff --git a/api/services/tools/workflow_tools_manage_service.py b/api/services/tools/workflow_tools_manage_service.py index 8f5144c866..8f6600af03 100644 --- a/api/services/tools/workflow_tools_manage_service.py +++ b/api/services/tools/workflow_tools_manage_service.py @@ -1,10 +1,10 @@ import json import logging from datetime import datetime +from typing import Any -from graphon.model_runtime.utils.encoders import jsonable_encoder from sqlalchemy import delete, or_, select -from sqlalchemy.orm import Session +from sqlalchemy.orm import sessionmaker from core.tools.__base.tool_provider import ToolProviderController from core.tools.entities.api_entities import ToolApiEntity, ToolProviderApiEntity @@ -14,6 +14,7 @@ from core.tools.utils.workflow_configuration_sync import WorkflowToolConfigurati from core.tools.workflow_as_tool.provider import WorkflowToolProviderController from core.tools.workflow_as_tool.tool import WorkflowTool from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.model import App from models.tools import WorkflowToolProvider from models.workflow import Workflow @@ -35,39 +36,50 @@ class WorkflowToolManageService: workflow_app_id: str, name: str, label: str, - icon: dict, + icon: dict[str, Any], description: str, parameters: list[WorkflowToolParameterConfiguration], privacy_policy: str = "", labels: list[str] | None = None, ): # check if the name is unique - existing_workflow_tool_provider = db.session.scalar( - select(WorkflowToolProvider) - .where( - WorkflowToolProvider.tenant_id == tenant_id, - # name or app_id - or_(WorkflowToolProvider.name == name, WorkflowToolProvider.app_id == workflow_app_id), + existing_workflow_tool_provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + # query if the name or app_id exists + existing_workflow_tool_provider = _session.scalar( + select(WorkflowToolProvider) + .where( + WorkflowToolProvider.tenant_id == tenant_id, + # name or app_id + or_(WorkflowToolProvider.name == name, WorkflowToolProvider.app_id == workflow_app_id), + ) + .limit(1) ) - .limit(1) - ) + # if the name or app_id exists raise error if existing_workflow_tool_provider is not None: raise ValueError(f"Tool with name {name} or app_id {workflow_app_id} already exists") - app: App | None = db.session.scalar( - select(App).where(App.id == workflow_app_id, App.tenant_id == tenant_id).limit(1) - ) + # query the app + app: App | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + app = _session.scalar(select(App).where(App.id == workflow_app_id, App.tenant_id == tenant_id).limit(1)) + # if not found raise error if app is None: raise ValueError(f"App {workflow_app_id} not found") + # query the workflow workflow: Workflow | None = app.workflow + + # if not found raise error if workflow is None: raise ValueError(f"Workflow not found for app {workflow_app_id}") + # check if workflow configuration is synced WorkflowToolConfigurationUtils.ensure_no_human_input_nodes(workflow.graph_dict) + # create workflow tool provider workflow_tool_provider = WorkflowToolProvider( tenant_id=tenant_id, user_id=user_id, @@ -84,15 +96,18 @@ class WorkflowToolManageService: try: WorkflowToolProviderController.from_db(workflow_tool_provider) except Exception as e: + logger.warning(e, exc_info=True) raise ValueError(str(e)) - with Session(db.engine, expire_on_commit=False) as session, session.begin(): - session.add(workflow_tool_provider) + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + _session.add(workflow_tool_provider) + # keep the session open to make orm instances in the same session if labels is not None: ToolLabelManager.update_tool_labels( ToolTransformService.workflow_provider_to_controller(workflow_tool_provider), labels ) + return {"result": "success"} @classmethod @@ -103,7 +118,7 @@ class WorkflowToolManageService: workflow_tool_id: str, name: str, label: str, - icon: dict, + icon: dict[str, Any], description: str, parameters: list[WorkflowToolParameterConfiguration], privacy_policy: str = "", @@ -111,6 +126,7 @@ class WorkflowToolManageService: ): """ Update a workflow tool. + :param user_id: the user id :param tenant_id: the tenant id :param workflow_tool_id: workflow tool id @@ -123,62 +139,82 @@ class WorkflowToolManageService: :param labels: labels :return: the updated tool """ - # check if the name is unique - existing_workflow_tool_provider = db.session.scalar( - select(WorkflowToolProvider) - .where( - WorkflowToolProvider.tenant_id == tenant_id, - WorkflowToolProvider.name == name, - WorkflowToolProvider.id != workflow_tool_id, - ) - .limit(1) - ) + existing_workflow_tool_provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + # query if the name exists for other tools + existing_workflow_tool_provider = _session.scalar( + select(WorkflowToolProvider) + .where( + WorkflowToolProvider.tenant_id == tenant_id, + WorkflowToolProvider.name == name, + WorkflowToolProvider.id != workflow_tool_id, + ) + .limit(1) + ) + + # if the name exists raise error if existing_workflow_tool_provider is not None: raise ValueError(f"Tool with name {name} already exists") - workflow_tool_provider: WorkflowToolProvider | None = db.session.scalar( - select(WorkflowToolProvider) - .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) - .limit(1) - ) + # query the workflow tool provider + workflow_tool_provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + workflow_tool_provider = _session.scalar( + select(WorkflowToolProvider) + .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) + .limit(1) + ) + # if not found raise error if workflow_tool_provider is None: raise ValueError(f"Tool {workflow_tool_id} not found") - app: App | None = db.session.scalar( - select(App).where(App.id == workflow_tool_provider.app_id, App.tenant_id == tenant_id).limit(1) - ) + # query the app + app: App | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + app = _session.scalar( + select(App).where(App.id == workflow_tool_provider.app_id, App.tenant_id == tenant_id).limit(1) + ) + # if not found raise error if app is None: raise ValueError(f"App {workflow_tool_provider.app_id} not found") + # query the workflow workflow: Workflow | None = app.workflow + + # if not found raise error if workflow is None: raise ValueError(f"Workflow not found for app {workflow_tool_provider.app_id}") + # check if workflow configuration is synced WorkflowToolConfigurationUtils.ensure_no_human_input_nodes(workflow.graph_dict) - workflow_tool_provider.name = name - workflow_tool_provider.label = label - workflow_tool_provider.icon = json.dumps(icon) - workflow_tool_provider.description = description - workflow_tool_provider.parameter_configuration = json.dumps([p.model_dump() for p in parameters]) - workflow_tool_provider.privacy_policy = privacy_policy - workflow_tool_provider.version = workflow.version - workflow_tool_provider.updated_at = datetime.now() + with sessionmaker(db.engine).begin() as _session: + _session.add(workflow_tool_provider) - try: - WorkflowToolProviderController.from_db(workflow_tool_provider) - except Exception as e: - raise ValueError(str(e)) + # update workflow tool provider + workflow_tool_provider.name = name + workflow_tool_provider.label = label + workflow_tool_provider.icon = json.dumps(icon) + workflow_tool_provider.description = description + workflow_tool_provider.parameter_configuration = json.dumps([p.model_dump() for p in parameters]) + workflow_tool_provider.privacy_policy = privacy_policy + workflow_tool_provider.version = workflow.version + workflow_tool_provider.updated_at = datetime.now() - db.session.commit() + try: + WorkflowToolProviderController.from_db(workflow_tool_provider) + except Exception as e: + raise ValueError(str(e)) - if labels is not None: - ToolLabelManager.update_tool_labels( - ToolTransformService.workflow_provider_to_controller(workflow_tool_provider), labels - ) + if labels is not None: + ToolLabelManager.update_tool_labels( + ToolTransformService.workflow_provider_to_controller(workflow_tool_provider), + labels, + session=_session, + ) return {"result": "success"} @@ -186,28 +222,32 @@ class WorkflowToolManageService: def list_tenant_workflow_tools(cls, user_id: str, tenant_id: str) -> list[ToolProviderApiEntity]: """ List workflow tools. + :param user_id: the user id :param tenant_id: the tenant id :return: the list of tools """ - db_tools = db.session.scalars( - select(WorkflowToolProvider).where(WorkflowToolProvider.tenant_id == tenant_id) - ).all() + + providers: list[WorkflowToolProvider] = [] + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + providers = list( + _session.scalars(select(WorkflowToolProvider).where(WorkflowToolProvider.tenant_id == tenant_id)).all() + ) # Create a mapping from provider_id to app_id - provider_id_to_app_id = {provider.id: provider.app_id for provider in db_tools} + provider_id_to_app_id = {provider.id: provider.app_id for provider in providers} tools: list[WorkflowToolProviderController] = [] - for provider in db_tools: + for provider in providers: try: tools.append(ToolTransformService.workflow_provider_to_controller(provider)) except Exception: # skip deleted tools logger.exception("Failed to load workflow tool provider %s", provider.id) - labels = ToolLabelManager.get_tools_labels([t for t in tools if isinstance(t, ToolProviderController)]) + labels = ToolLabelManager.get_tools_labels([tool for tool in tools if isinstance(tool, ToolProviderController)]) - result = [] + result: list[ToolProviderApiEntity] = [] for tool in tools: workflow_app_id = provider_id_to_app_id.get(tool.provider_id) @@ -232,17 +272,18 @@ class WorkflowToolManageService: def delete_workflow_tool(cls, user_id: str, tenant_id: str, workflow_tool_id: str): """ Delete a workflow tool. + :param user_id: the user id :param tenant_id: the tenant id :param workflow_tool_id: the workflow tool id """ - db.session.execute( - delete(WorkflowToolProvider).where( - WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id - ) - ) - db.session.commit() + with sessionmaker(db.engine).begin() as _session: + _ = _session.execute( + delete(WorkflowToolProvider).where( + WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id + ) + ) return {"result": "success"} @@ -250,47 +291,59 @@ class WorkflowToolManageService: def get_workflow_tool_by_tool_id(cls, user_id: str, tenant_id: str, workflow_tool_id: str): """ Get a workflow tool. + :param user_id: the user id :param tenant_id: the tenant id :param workflow_tool_id: the workflow tool id :return: the tool """ - db_tool: WorkflowToolProvider | None = db.session.scalar( - select(WorkflowToolProvider) - .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) - .limit(1) - ) - return cls._get_workflow_tool(tenant_id, db_tool) + + tool_provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + tool_provider = _session.scalar( + select(WorkflowToolProvider) + .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) + .limit(1) + ) + + return cls._get_workflow_tool(tenant_id, tool_provider) @classmethod def get_workflow_tool_by_app_id(cls, user_id: str, tenant_id: str, workflow_app_id: str): """ Get a workflow tool. + :param user_id: the user id :param tenant_id: the tenant id :param workflow_app_id: the workflow app id :return: the tool """ - db_tool: WorkflowToolProvider | None = db.session.scalar( - select(WorkflowToolProvider) - .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.app_id == workflow_app_id) - .limit(1) - ) - return cls._get_workflow_tool(tenant_id, db_tool) + + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + tool_provider: WorkflowToolProvider | None = _session.scalar( + select(WorkflowToolProvider) + .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.app_id == workflow_app_id) + .limit(1) + ) + + return cls._get_workflow_tool(tenant_id, tool_provider) @classmethod def _get_workflow_tool(cls, tenant_id: str, db_tool: WorkflowToolProvider | None): """ Get a workflow tool. + :db_tool: the database tool :return: the tool """ if db_tool is None: raise ValueError("Tool not found") - workflow_app: App | None = db.session.scalar( - select(App).where(App.id == db_tool.app_id, App.tenant_id == db_tool.tenant_id).limit(1) - ) + workflow_app: App | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + workflow_app = _session.scalar( + select(App).where(App.id == db_tool.app_id, App.tenant_id == db_tool.tenant_id).limit(1) + ) if workflow_app is None: raise ValueError(f"App {db_tool.app_id} not found") @@ -330,28 +383,32 @@ class WorkflowToolManageService: def list_single_workflow_tools(cls, user_id: str, tenant_id: str, workflow_tool_id: str) -> list[ToolApiEntity]: """ List workflow tool provider tools. + :param user_id: the user id :param tenant_id: the tenant id :param workflow_tool_id: the workflow tool id :return: the list of tools """ - db_tool: WorkflowToolProvider | None = db.session.scalar( - select(WorkflowToolProvider) - .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) - .limit(1) - ) - if db_tool is None: + provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider = _session.scalar( + select(WorkflowToolProvider) + .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) + .limit(1) + ) + + if provider is None: raise ValueError(f"Tool {workflow_tool_id} not found") - tool = ToolTransformService.workflow_provider_to_controller(db_tool) + tool = ToolTransformService.workflow_provider_to_controller(provider) workflow_tools: list[WorkflowTool] = tool.get_tools(tenant_id) if len(workflow_tools) == 0: raise ValueError(f"Tool {workflow_tool_id} not found") return [ ToolTransformService.convert_tool_entity_to_api_entity( - tool=tool.get_tools(db_tool.tenant_id)[0], + tool=tool.get_tools(provider.tenant_id)[0], labels=ToolLabelManager.get_tool_labels(tool), tenant_id=tenant_id, ) diff --git a/api/services/trigger/app_trigger_service.py b/api/services/trigger/app_trigger_service.py index 6d5a719f63..723d29e947 100644 --- a/api/services/trigger/app_trigger_service.py +++ b/api/services/trigger/app_trigger_service.py @@ -8,7 +8,7 @@ This service centralizes all AppTrigger-related business logic. import logging from sqlalchemy import update -from sqlalchemy.orm import Session +from sqlalchemy.orm import sessionmaker from extensions.ext_database import db from models.enums import AppTriggerStatus @@ -34,13 +34,12 @@ class AppTriggerService: """ try: - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: session.execute( update(AppTrigger) .where(AppTrigger.tenant_id == tenant_id, AppTrigger.status == AppTriggerStatus.ENABLED) .values(status=AppTriggerStatus.RATE_LIMITED) ) - session.commit() logger.info("Marked all enabled triggers as rate limited for tenant %s", tenant_id) except Exception: logger.exception("Failed to mark all enabled triggers as rate limited for tenant %s", tenant_id) diff --git a/api/services/trigger/schedule_service.py b/api/services/trigger/schedule_service.py index 25e80770b8..a827222c1d 100644 --- a/api/services/trigger/schedule_service.py +++ b/api/services/trigger/schedule_service.py @@ -2,7 +2,6 @@ import json import logging from datetime import datetime -from graphon.entities.graph_config import NodeConfigDict from sqlalchemy import select from sqlalchemy.orm import Session @@ -14,6 +13,7 @@ from core.workflow.nodes.trigger_schedule.entities import ( VisualConfig, ) from core.workflow.nodes.trigger_schedule.exc import ScheduleConfigError, ScheduleNotFoundError +from graphon.entities.graph_config import NodeConfigDict from libs.schedule_utils import calculate_next_run_at, convert_12h_to_24h from models.account import Account, TenantAccountJoin from models.trigger import WorkflowSchedulePlan diff --git a/api/services/trigger/trigger_provider_service.py b/api/services/trigger/trigger_provider_service.py index 008d8bdb8a..6e14d996ea 100644 --- a/api/services/trigger/trigger_provider_service.py +++ b/api/services/trigger/trigger_provider_service.py @@ -3,10 +3,10 @@ import logging import time as _time import uuid from collections.abc import Mapping -from typing import Any +from typing import Any, TypedDict -from sqlalchemy import desc, func -from sqlalchemy.orm import Session +from sqlalchemy import delete, desc, func, select +from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from constants import HIDDEN_VALUE, UNKNOWN_VALUE @@ -42,6 +42,10 @@ from services.plugin.plugin_service import PluginService logger = logging.getLogger(__name__) +class VerifyCredentialsResult(TypedDict): + verified: bool + + class TriggerProviderService: """Service for managing trigger providers and credentials""" @@ -69,27 +73,28 @@ class TriggerProviderService: workflows_in_use_map: dict[str, int] = {} with Session(db.engine, expire_on_commit=False) as session: # Get all subscriptions - subscriptions_db = ( - session.query(TriggerSubscription) - .filter_by(tenant_id=tenant_id, provider_id=str(provider_id)) + subscriptions_db = session.scalars( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.provider_id == str(provider_id), + ) .order_by(desc(TriggerSubscription.created_at)) - .all() - ) + ).all() subscriptions = [subscription.to_api_entity() for subscription in subscriptions_db] if not subscriptions: return [] - usage_counts = ( - session.query( + usage_counts = session.execute( + select( WorkflowPluginTrigger.subscription_id, func.count(func.distinct(WorkflowPluginTrigger.app_id)).label("app_count"), ) - .filter( + .where( WorkflowPluginTrigger.tenant_id == tenant_id, WorkflowPluginTrigger.subscription_id.in_([s.id for s in subscriptions]), ) .group_by(WorkflowPluginTrigger.subscription_id) - .all() - ) + ).all() workflows_in_use_map = {str(row.subscription_id): int(row.app_count) for row in usage_counts} provider_controller = TriggerManager.get_trigger_provider(tenant_id, provider_id) @@ -146,15 +151,19 @@ class TriggerProviderService: """ try: provider_controller = TriggerManager.get_trigger_provider(tenant_id, provider_id) - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: # Use distributed lock to prevent race conditions lock_key = f"trigger_provider_create_lock:{tenant_id}_{provider_id}" with redis_client.lock(lock_key, timeout=20): # Check provider count limit provider_count = ( - session.query(TriggerSubscription) - .filter_by(tenant_id=tenant_id, provider_id=str(provider_id)) - .count() + session.scalar( + select(func.count(TriggerSubscription.id)).where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.provider_id == str(provider_id), + ) + ) + or 0 ) if provider_count >= cls.__MAX_TRIGGER_PROVIDER_COUNT__: @@ -164,10 +173,14 @@ class TriggerProviderService: ) # Check if name already exists - existing = ( - session.query(TriggerSubscription) - .filter_by(tenant_id=tenant_id, provider_id=str(provider_id), name=name) - .first() + existing = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.provider_id == str(provider_id), + TriggerSubscription.name == name, + ) + .limit(1) ) if existing: raise ValueError(f"Credential name '{name}' already exists for this provider") @@ -205,7 +218,6 @@ class TriggerProviderService: subscription.id = subscription_id or str(uuid.uuid4()) session.add(subscription) - session.commit() return { "result": "success", @@ -241,12 +253,17 @@ class TriggerProviderService: :param expires_at: Optional new expiration timestamp :return: Success response with updated subscription info """ - with Session(db.engine, expire_on_commit=False) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: # Use distributed lock to prevent race conditions on the same subscription lock_key = f"trigger_subscription_update_lock:{tenant_id}_{subscription_id}" with redis_client.lock(lock_key, timeout=20): - subscription: TriggerSubscription | None = ( - session.query(TriggerSubscription).filter_by(tenant_id=tenant_id, id=subscription_id).first() + subscription = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.id == subscription_id, + ) + .limit(1) ) if not subscription: raise ValueError(f"Trigger subscription {subscription_id} not found") @@ -256,10 +273,14 @@ class TriggerProviderService: # Check for name uniqueness if name is being updated if name is not None and name != subscription.name: - existing = ( - session.query(TriggerSubscription) - .filter_by(tenant_id=tenant_id, provider_id=str(provider_id), name=name) - .first() + existing = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.provider_id == str(provider_id), + TriggerSubscription.name == name, + ) + .limit(1) ) if existing: raise ValueError(f"Subscription name '{name}' already exists for this provider") @@ -302,8 +323,6 @@ class TriggerProviderService: if expires_at is not None: subscription.expires_at = expires_at - session.commit() - # Clear subscription cache delete_cache_for_subscription( tenant_id=tenant_id, @@ -319,11 +338,18 @@ class TriggerProviderService: with Session(db.engine, expire_on_commit=False) as session: subscription: TriggerSubscription | None = None if subscription_id: - subscription = ( - session.query(TriggerSubscription).filter_by(tenant_id=tenant_id, id=subscription_id).first() + subscription = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.id == subscription_id, + ) + .limit(1) ) else: - subscription = session.query(TriggerSubscription).filter_by(tenant_id=tenant_id).first() + subscription = session.scalar( + select(TriggerSubscription).where(TriggerSubscription.tenant_id == tenant_id).limit(1) + ) if subscription: provider_controller = TriggerManager.get_trigger_provider( tenant_id, TriggerProviderID(subscription.provider_id) @@ -352,8 +378,13 @@ class TriggerProviderService: :param subscription_id: Subscription instance ID :return: Success response """ - subscription: TriggerSubscription | None = ( - session.query(TriggerSubscription).filter_by(tenant_id=tenant_id, id=subscription_id).first() + subscription = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.id == subscription_id, + ) + .limit(1) ) if not subscription: raise ValueError(f"Trigger provider subscription {subscription_id} not found") @@ -404,8 +435,15 @@ class TriggerProviderService: :param subscription_id: Subscription instance ID :return: New token info """ - with Session(db.engine) as session: - subscription = session.query(TriggerSubscription).filter_by(tenant_id=tenant_id, id=subscription_id).first() + with sessionmaker(bind=db.engine).begin() as session: + subscription = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.id == subscription_id, + ) + .limit(1) + ) if not subscription: raise ValueError(f"Trigger provider subscription {subscription_id} not found") @@ -448,7 +486,6 @@ class TriggerProviderService: # Update credentials subscription.credentials = dict(encrypter.encrypt(dict(refreshed_credentials.credentials))) subscription.credential_expires_at = refreshed_credentials.expires_at - session.commit() # Clear cache cache.delete() @@ -478,9 +515,14 @@ class TriggerProviderService: """ now_ts: int = int(now if now is not None else _time.time()) - with Session(db.engine) as session: - subscription: TriggerSubscription | None = ( - session.query(TriggerSubscription).filter_by(tenant_id=tenant_id, id=subscription_id).first() + with sessionmaker(bind=db.engine).begin() as session: + subscription = session.scalar( + select(TriggerSubscription) + .where( + TriggerSubscription.tenant_id == tenant_id, + TriggerSubscription.id == subscription_id, + ) + .limit(1) ) if subscription is None: raise ValueError(f"Trigger provider subscription {subscription_id} not found") @@ -531,7 +573,6 @@ class TriggerProviderService: # Persist refreshed properties and expires_at subscription.properties = dict(properties_encrypter.encrypt(dict(refreshed.properties))) subscription.expires_at = int(refreshed.expires_at) - session.commit() properties_cache.delete() logger.info( @@ -557,15 +598,15 @@ class TriggerProviderService: tenant_id=tenant_id, provider_id=provider_id ) with Session(db.engine, expire_on_commit=False) as session: - tenant_client: TriggerOAuthTenantClient | None = ( - session.query(TriggerOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, - enabled=True, + tenant_client = session.scalar( + select(TriggerOAuthTenantClient) + .where( + TriggerOAuthTenantClient.tenant_id == tenant_id, + TriggerOAuthTenantClient.provider == provider_id.provider_name, + TriggerOAuthTenantClient.plugin_id == provider_id.plugin_id, + TriggerOAuthTenantClient.enabled.is_(True), ) - .first() + .limit(1) ) oauth_params: Mapping[str, Any] | None = None @@ -583,10 +624,13 @@ class TriggerProviderService: return None # Check for system-level OAuth client - system_client: TriggerOAuthSystemClient | None = ( - session.query(TriggerOAuthSystemClient) - .filter_by(plugin_id=provider_id.plugin_id, provider=provider_id.provider_name) - .first() + system_client = session.scalar( + select(TriggerOAuthSystemClient) + .where( + TriggerOAuthSystemClient.plugin_id == provider_id.plugin_id, + TriggerOAuthSystemClient.provider == provider_id.provider_name, + ) + .limit(1) ) if system_client: @@ -607,10 +651,13 @@ class TriggerProviderService: if not is_verified: return False with Session(db.engine, expire_on_commit=False) as session: - system_client: TriggerOAuthSystemClient | None = ( - session.query(TriggerOAuthSystemClient) - .filter_by(plugin_id=provider_id.plugin_id, provider=provider_id.provider_name) - .first() + system_client = session.scalar( + select(TriggerOAuthSystemClient) + .where( + TriggerOAuthSystemClient.plugin_id == provider_id.plugin_id, + TriggerOAuthSystemClient.provider == provider_id.provider_name, + ) + .limit(1) ) return system_client is not None @@ -639,16 +686,16 @@ class TriggerProviderService: tenant_id=tenant_id, provider_id=provider_id ) - with Session(db.engine) as session: + with sessionmaker(bind=db.engine).begin() as session: # Find existing custom client params - custom_client = ( - session.query(TriggerOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - plugin_id=provider_id.plugin_id, - provider=provider_id.provider_name, + custom_client = session.scalar( + select(TriggerOAuthTenantClient) + .where( + TriggerOAuthTenantClient.tenant_id == tenant_id, + TriggerOAuthTenantClient.plugin_id == provider_id.plugin_id, + TriggerOAuthTenantClient.provider == provider_id.provider_name, ) - .first() + .limit(1) ) # Create new record if doesn't exist @@ -683,8 +730,6 @@ class TriggerProviderService: if enabled is not None: custom_client.enabled = enabled - session.commit() - return {"result": "success"} @classmethod @@ -697,14 +742,14 @@ class TriggerProviderService: :return: Masked OAuth client parameters """ with Session(db.engine) as session: - custom_client = ( - session.query(TriggerOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - plugin_id=provider_id.plugin_id, - provider=provider_id.provider_name, + custom_client = session.scalar( + select(TriggerOAuthTenantClient) + .where( + TriggerOAuthTenantClient.tenant_id == tenant_id, + TriggerOAuthTenantClient.plugin_id == provider_id.plugin_id, + TriggerOAuthTenantClient.provider == provider_id.provider_name, ) - .first() + .limit(1) ) if custom_client is None: @@ -733,13 +778,16 @@ class TriggerProviderService: :param provider_id: Provider identifier :return: Success response """ - with Session(db.engine) as session: - session.query(TriggerOAuthTenantClient).filter_by( - tenant_id=tenant_id, - provider=provider_id.provider_name, - plugin_id=provider_id.plugin_id, - ).delete() - session.commit() + with sessionmaker(bind=db.engine).begin() as session: + session.execute( + delete(TriggerOAuthTenantClient) + .where( + TriggerOAuthTenantClient.tenant_id == tenant_id, + TriggerOAuthTenantClient.provider == provider_id.provider_name, + TriggerOAuthTenantClient.plugin_id == provider_id.plugin_id, + ) + .execution_options(synchronize_session=False) + ) return {"result": "success"} @@ -753,15 +801,15 @@ class TriggerProviderService: :return: True if enabled, False otherwise """ with Session(db.engine, expire_on_commit=False) as session: - custom_client = ( - session.query(TriggerOAuthTenantClient) - .filter_by( - tenant_id=tenant_id, - plugin_id=provider_id.plugin_id, - provider=provider_id.provider_name, - enabled=True, + custom_client = session.scalar( + select(TriggerOAuthTenantClient) + .where( + TriggerOAuthTenantClient.tenant_id == tenant_id, + TriggerOAuthTenantClient.plugin_id == provider_id.plugin_id, + TriggerOAuthTenantClient.provider == provider_id.provider_name, + TriggerOAuthTenantClient.enabled.is_(True), ) - .first() + .limit(1) ) return custom_client is not None @@ -771,7 +819,9 @@ class TriggerProviderService: Get a trigger subscription by the endpoint ID. """ with Session(db.engine, expire_on_commit=False) as session: - subscription = session.query(TriggerSubscription).filter_by(endpoint_id=endpoint_id).first() + subscription = session.scalar( + select(TriggerSubscription).where(TriggerSubscription.endpoint_id == endpoint_id).limit(1) + ) if not subscription: return None provider_controller: PluginTriggerProviderController = TriggerManager.get_trigger_provider( @@ -800,7 +850,7 @@ class TriggerProviderService: provider_id: TriggerProviderID, subscription_id: str, credentials: dict[str, Any], - ) -> dict[str, Any]: + ) -> VerifyCredentialsResult: """ Verify credentials for an existing subscription without updating it. diff --git a/api/services/trigger/trigger_service.py b/api/services/trigger/trigger_service.py index d72c041609..911331e357 100644 --- a/api/services/trigger/trigger_service.py +++ b/api/services/trigger/trigger_service.py @@ -5,10 +5,9 @@ from collections.abc import Mapping from typing import Any from flask import Request, Response -from graphon.entities.graph_config import NodeConfigDict from pydantic import BaseModel from sqlalchemy import select -from sqlalchemy.orm import Session +from sqlalchemy.orm import sessionmaker from core.plugin.entities.plugin_daemon import CredentialType from core.plugin.entities.request import TriggerDispatchResponse, TriggerInvokeEventResponse @@ -21,6 +20,7 @@ from core.trigger.utils.encryption import create_trigger_provider_encrypter_for_ from core.workflow.nodes.trigger_plugin.entities import TriggerEventNodeData from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.entities.graph_config import NodeConfigDict from models.model import App from models.provider_ids import TriggerProviderID from models.trigger import TriggerSubscription, WorkflowPluginTrigger @@ -215,7 +215,7 @@ class TriggerService: not_found_in_cache.append(node_info) continue - with Session(db.engine) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: try: # lock the concurrent plugin trigger creation redis_client.lock(f"{cls.__PLUGIN_TRIGGER_NODE_CACHE_KEY__}:apps:{app.id}:lock", timeout=10) @@ -260,7 +260,6 @@ class TriggerService: cache.model_dump_json(), ex=60 * 60, ) - session.commit() # Update existing records if subscription_id changed for node_info in nodes_in_graph: @@ -290,14 +289,12 @@ class TriggerService: cache.model_dump_json(), ex=60 * 60, ) - session.commit() # delete the nodes not found in the graph for node_id in nodes_id_in_db: if node_id not in nodes_id_in_graph: session.delete(nodes_id_in_db[node_id]) redis_client.delete(f"{cls.__PLUGIN_TRIGGER_NODE_CACHE_KEY__}:{app.id}:{node_id}") - session.commit() except Exception: logger.exception("Failed to sync plugin trigger relationships for app %s", app.id) raise diff --git a/api/services/trigger/webhook_service.py b/api/services/trigger/webhook_service.py index c624a22e41..d562220fa7 100644 --- a/api/services/trigger/webhook_service.py +++ b/api/services/trigger/webhook_service.py @@ -7,12 +7,9 @@ from typing import Any, NotRequired, TypedDict import orjson from flask import request -from graphon.entities.graph_config import NodeConfigDict -from graphon.file import FileTransferMethod -from graphon.variables.types import ArrayValidation, SegmentType from pydantic import BaseModel from sqlalchemy import select -from sqlalchemy.orm import Session +from sqlalchemy.orm import Session, sessionmaker from werkzeug.datastructures import FileStorage from werkzeug.exceptions import RequestEntityTooLarge @@ -31,6 +28,9 @@ from enums.quota_type import QuotaType from extensions.ext_database import db from extensions.ext_redis import redis_client from factories import file_factory +from graphon.entities.graph_config import NodeConfigDict +from graphon.file import FileTransferMethod +from graphon.variables.types import ArrayValidation, SegmentType from models.enums import AppTriggerStatus, AppTriggerType from models.model import App from models.trigger import AppTrigger, WorkflowWebhookTrigger @@ -105,32 +105,32 @@ class WebhookService: """ with Session(db.engine) as session: # Get webhook trigger - webhook_trigger = ( - session.query(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.webhook_id == webhook_id).first() + webhook_trigger = session.scalar( + select(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.webhook_id == webhook_id).limit(1) ) if not webhook_trigger: raise ValueError(f"Webhook not found: {webhook_id}") if is_debug: - workflow = ( - session.query(Workflow) - .filter( + workflow = session.scalar( + select(Workflow) + .where( Workflow.app_id == webhook_trigger.app_id, Workflow.version == Workflow.VERSION_DRAFT, ) .order_by(Workflow.created_at.desc()) - .first() + .limit(1) ) else: # Check if the corresponding AppTrigger exists - app_trigger = ( - session.query(AppTrigger) - .filter( + app_trigger = session.scalar( + select(AppTrigger) + .where( AppTrigger.app_id == webhook_trigger.app_id, AppTrigger.node_id == webhook_trigger.node_id, AppTrigger.trigger_type == AppTriggerType.TRIGGER_WEBHOOK, ) - .first() + .limit(1) ) if not app_trigger: @@ -147,14 +147,14 @@ class WebhookService: raise ValueError(f"Webhook trigger is disabled for webhook {webhook_id}") # Get workflow - workflow = ( - session.query(Workflow) - .filter( + workflow = session.scalar( + select(Workflow) + .where( Workflow.app_id == webhook_trigger.app_id, Workflow.version != Workflow.VERSION_DRAFT, ) .order_by(Workflow.created_at.desc()) - .first() + .limit(1) ) if not workflow: raise ValueError(f"Workflow not found for app {webhook_trigger.app_id}") @@ -598,21 +598,38 @@ class WebhookService: Raises: ValueError: If the value cannot be converted to the specified type """ - if param_type == SegmentType.STRING: - return value - elif param_type == SegmentType.NUMBER: - if not cls._can_convert_to_number(value): - raise ValueError(f"Cannot convert '{value}' to number") - numeric_value = float(value) - return int(numeric_value) if numeric_value.is_integer() else numeric_value - elif param_type == SegmentType.BOOLEAN: - lower_value = value.lower() - bool_map = {"true": True, "false": False, "1": True, "0": False, "yes": True, "no": False} - if lower_value not in bool_map: - raise ValueError(f"Cannot convert '{value}' to boolean") - return bool_map[lower_value] - else: - raise ValueError(f"Unsupported type '{param_type}' for form data parameter '{param_name}'") + match param_type: + case SegmentType.STRING: + return value + case SegmentType.NUMBER: + if not cls._can_convert_to_number(value): + raise ValueError(f"Cannot convert '{value}' to number") + numeric_value = float(value) + return int(numeric_value) if numeric_value.is_integer() else numeric_value + case SegmentType.BOOLEAN: + lower_value = value.lower() + bool_map = {"true": True, "false": False, "1": True, "0": False, "yes": True, "no": False} + if lower_value not in bool_map: + raise ValueError(f"Cannot convert '{value}' to boolean") + return bool_map[lower_value] + case ( + SegmentType.OBJECT + | SegmentType.FILE + | SegmentType.ARRAY_ANY + | SegmentType.ARRAY_STRING + | SegmentType.ARRAY_NUMBER + | SegmentType.ARRAY_OBJECT + | SegmentType.ARRAY_FILE + | SegmentType.ARRAY_BOOLEAN + | SegmentType.SECRET + | SegmentType.INTEGER + | SegmentType.FLOAT + | SegmentType.NONE + | SegmentType.GROUP + ): + raise ValueError(f"Unsupported type '{param_type}' for form data parameter '{param_name}'") + case _: + raise ValueError(f"Unsupported type '{param_type}' for form data parameter '{param_name}'") @classmethod def _validate_json_value(cls, param_name: str, value: Any, param_type: SegmentType | str) -> Any: @@ -918,7 +935,7 @@ class WebhookService: logger.warning("Failed to acquire lock for webhook sync, app %s", app.id) raise RuntimeError("Failed to acquire lock for webhook trigger synchronization") - with Session(db.engine) as session: + with sessionmaker(bind=db.engine, expire_on_commit=False).begin() as session: # fetch the non-cached nodes from DB all_records = session.scalars( select(WorkflowWebhookTrigger).where( @@ -947,14 +964,12 @@ class WebhookService: redis_client.set( f"{cls.__WEBHOOK_NODE_CACHE_KEY__}:{app.id}:{node_id}", cache.model_dump_json(), ex=60 * 60 ) - session.commit() # delete the nodes not found in the graph for node_id in nodes_id_in_db: if node_id not in nodes_id_in_graph: session.delete(nodes_id_in_db[node_id]) redis_client.delete(f"{cls.__WEBHOOK_NODE_CACHE_KEY__}:{app.id}:{node_id}") - session.commit() except Exception: logger.exception("Failed to sync webhook relationships for app %s", app.id) raise diff --git a/api/services/variable_truncator.py b/api/services/variable_truncator.py index 4d58a9cf12..c96050ce13 100644 --- a/api/services/variable_truncator.py +++ b/api/services/variable_truncator.py @@ -5,6 +5,7 @@ from abc import ABC, abstractmethod from collections.abc import Mapping from typing import Any, overload +from configs import dify_config from graphon.file import File from graphon.nodes.variable_assigner.common.helpers import UpdatedVariable from graphon.variables.segments import ( @@ -21,8 +22,6 @@ from graphon.variables.segments import ( ) from graphon.variables.utils import dumps_with_segments -from configs import dify_config - _MAX_DEPTH = 100 diff --git a/api/services/vector_service.py b/api/services/vector_service.py index 9827c8dfbc..58193d75a9 100644 --- a/api/services/vector_service.py +++ b/api/services/vector_service.py @@ -1,6 +1,5 @@ import logging -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import delete, select from core.model_manager import ModelInstance, ModelManager @@ -13,6 +12,7 @@ from core.rag.index_processor.index_processor_base import BaseIndexProcessor from core.rag.index_processor.index_processor_factory import IndexProcessorFactory from core.rag.models.document import AttachmentDocument, Document from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models import UploadFile from models.dataset import ChildChunk, Dataset, DatasetProcessRule, DocumentSegment, SegmentAttachmentBinding from models.dataset import Document as DatasetDocument diff --git a/api/services/website_service.py b/api/services/website_service.py index 2471c2cee8..ea584088bb 100644 --- a/api/services/website_service.py +++ b/api/services/website_service.py @@ -91,7 +91,7 @@ class WebsiteCrawlApiRequest: return CrawlRequest(url=self.url, provider=self.provider, options=options) @classmethod - def from_args(cls, args: dict) -> WebsiteCrawlApiRequest: + def from_args(cls, args: dict[str, Any]) -> WebsiteCrawlApiRequest: """Create from Flask-RESTful parsed arguments.""" provider = args.get("provider") url = args.get("url") @@ -115,7 +115,7 @@ class WebsiteCrawlStatusApiRequest: job_id: str @classmethod - def from_args(cls, args: dict, job_id: str) -> WebsiteCrawlStatusApiRequest: + def from_args(cls, args: dict[str, Any], job_id: str) -> WebsiteCrawlStatusApiRequest: """Create from Flask-RESTful parsed arguments.""" provider = args.get("provider") if not provider: @@ -163,7 +163,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _get_decrypted_api_key(cls, tenant_id: str, config: dict) -> str: + def _get_decrypted_api_key(cls, tenant_id: str, config: dict[str, Any]) -> str: """Decrypt and return the API key from config.""" api_key = config.get("api_key") if not api_key: @@ -171,7 +171,7 @@ class WebsiteService: return encrypter.decrypt_token(tenant_id=tenant_id, token=api_key) @classmethod - def document_create_args_validate(cls, args: dict): + def document_create_args_validate(cls, args: dict[str, Any]): """Validate arguments for document creation.""" try: WebsiteCrawlApiRequest.from_args(args) @@ -195,7 +195,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _crawl_with_firecrawl(cls, request: CrawlRequest, api_key: str, config: dict) -> dict[str, Any]: + def _crawl_with_firecrawl(cls, request: CrawlRequest, api_key: str, config: dict[str, Any]) -> dict[str, Any]: firecrawl_app = FirecrawlApp(api_key=api_key, base_url=config.get("base_url")) params: dict[str, Any] @@ -225,7 +225,7 @@ class WebsiteService: return {"status": "active", "job_id": job_id} @classmethod - def _crawl_with_watercrawl(cls, request: CrawlRequest, api_key: str, config: dict) -> dict[str, Any]: + def _crawl_with_watercrawl(cls, request: CrawlRequest, api_key: str, config: dict[str, Any]) -> dict[str, Any]: # Convert CrawlOptions back to dict format for WaterCrawlProvider options = { "limit": request.options.limit, @@ -290,7 +290,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _get_firecrawl_status(cls, job_id: str, api_key: str, config: dict) -> CrawlStatusDict: + def _get_firecrawl_status(cls, job_id: str, api_key: str, config: dict[str, Any]) -> CrawlStatusDict: firecrawl_app = FirecrawlApp(api_key=api_key, base_url=config.get("base_url")) result: CrawlStatusResponse = firecrawl_app.check_crawl_status(job_id) crawl_status_data: CrawlStatusDict = { @@ -364,7 +364,9 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _get_firecrawl_url_data(cls, job_id: str, url: str, api_key: str, config: dict) -> dict[str, Any] | None: + def _get_firecrawl_url_data( + cls, job_id: str, url: str, api_key: str, config: dict[str, Any] + ) -> dict[str, Any] | None: crawl_data: list[FirecrawlDocumentData] | None = None file_key = "website_files/" + job_id + ".txt" if storage.exists(file_key): @@ -438,7 +440,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _scrape_with_firecrawl(cls, request: ScrapeRequest, api_key: str, config: dict) -> dict[str, Any]: + def _scrape_with_firecrawl(cls, request: ScrapeRequest, api_key: str, config: dict[str, Any]) -> dict[str, Any]: firecrawl_app = FirecrawlApp(api_key=api_key, base_url=config.get("base_url")) params = {"onlyMainContent": request.only_main_content} return dict(firecrawl_app.scrape_url(url=request.url, params=params)) diff --git a/api/services/workflow/workflow_converter.py b/api/services/workflow/workflow_converter.py index 1582bcd46c..5dedb9e372 100644 --- a/api/services/workflow/workflow_converter.py +++ b/api/services/workflow/workflow_converter.py @@ -1,11 +1,6 @@ import json from typing import Any, TypedDict -from graphon.file import FileUploadConfig -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes import BuiltinNodeTypes -from graphon.variables.input_entities import VariableEntity from sqlalchemy import select from core.app.app_config.entities import ( @@ -24,6 +19,11 @@ from core.prompt.simple_prompt_transform import SimplePromptTransform from core.prompt.utils.prompt_template_parser import PromptTemplateParser from events.app_event import app_was_created from extensions.ext_database import db +from graphon.file import FileUploadConfig +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes import BuiltinNodeTypes +from graphon.variables.input_entities import VariableEntity from models import Account from models.api_based_extension import APIBasedExtension, APIBasedExtensionPoint from models.model import App, AppMode, AppModelConfig, IconType diff --git a/api/services/workflow_app_service.py b/api/services/workflow_app_service.py index b5ab176ad2..59e02ec9b9 100644 --- a/api/services/workflow_app_service.py +++ b/api/services/workflow_app_service.py @@ -3,10 +3,10 @@ import uuid from datetime import datetime from typing import Any, TypedDict -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import and_, func, or_, select from sqlalchemy.orm import Session +from graphon.enums import WorkflowExecutionStatus from models import Account, App, EndUser, TenantAccountJoin, WorkflowAppLog, WorkflowArchiveLog, WorkflowRun from models.enums import AppTriggerType, CreatorUserRole from models.trigger import WorkflowTriggerLog diff --git a/api/services/workflow_collaboration_service.py b/api/services/workflow_collaboration_service.py new file mode 100644 index 0000000000..cf2f509052 --- /dev/null +++ b/api/services/workflow_collaboration_service.py @@ -0,0 +1,295 @@ +from __future__ import annotations + +import logging +import time +from collections.abc import Mapping + +from sqlalchemy import select + +from core.db.session_factory import session_factory +from models.account import Account +from models.model import App +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository, WorkflowSessionInfo + +logger = logging.getLogger(__name__) + + +class WorkflowCollaborationService: + def __init__(self, repository: WorkflowCollaborationRepository, socketio) -> None: + self._repository = repository + self._socketio = socketio + + def __repr__(self) -> str: + return f"{self.__class__.__name__}(repository={self._repository})" + + def save_socket_identity(self, sid: str, user: Account) -> None: + """Persist the authenticated console user on the raw socket session.""" + self._socketio.save_session( + sid, + { + "user_id": user.id, + "username": user.name, + "avatar": user.avatar, + "tenant_id": user.current_tenant_id, + }, + ) + + def authorize_and_join_workflow_room(self, workflow_id: str, sid: str) -> tuple[str, bool] | None: + """ + Join a collaboration room only after validating the socket session and tenant-scoped app access. + + The Socket.IO payload still calls the room key `workflow_id`, but the identifier is the workflow app's + `App.id`. Returning `None` lets the controller reject the join before any Redis or room state is created. + """ + session = self._socketio.get_session(sid) + user_id = session.get("user_id") + tenant_id = session.get("tenant_id") + if not user_id or not tenant_id: + return None + + if not self._can_access_workflow(workflow_id, str(tenant_id)): + logger.warning( + "Workflow collaboration join rejected: workflow_id=%s tenant_id=%s user_id=%s sid=%s", + workflow_id, + tenant_id, + user_id, + sid, + ) + return None + + session_info: WorkflowSessionInfo = { + "user_id": str(user_id), + "username": str(session.get("username", "Unknown")), + "avatar": session.get("avatar"), + "sid": sid, + "connected_at": int(time.time()), + } + + self._repository.set_session_info(workflow_id, session_info) + + leader_sid = self.get_or_set_leader(workflow_id, sid) + is_leader = leader_sid == sid + + self._socketio.enter_room(sid, workflow_id) + self.broadcast_online_users(workflow_id) + + self._socketio.emit("status", {"isLeader": is_leader}, room=sid) + + return str(user_id), is_leader + + def _can_access_workflow(self, workflow_id: str, tenant_id: str) -> bool: + """Check room access without relying on Flask's app-context-bound scoped session.""" + with session_factory.create_session() as session: + app_id = session.scalar(select(App.id).where(App.id == workflow_id, App.tenant_id == tenant_id).limit(1)) + return app_id is not None + + def disconnect_session(self, sid: str) -> None: + mapping = self._repository.get_sid_mapping(sid) + if not mapping: + return + + workflow_id = mapping["workflow_id"] + self._repository.delete_session(workflow_id, sid) + + self.handle_leader_disconnect(workflow_id, sid) + self.broadcast_online_users(workflow_id) + + def relay_collaboration_event(self, sid: str, data: Mapping[str, object]) -> tuple[dict[str, str], int]: + mapping = self._repository.get_sid_mapping(sid) + if not mapping: + return {"msg": "unauthorized"}, 401 + + workflow_id = mapping["workflow_id"] + user_id = mapping["user_id"] + self.refresh_session_state(workflow_id, sid) + + event_type = data.get("type") + event_data = data.get("data") + timestamp = data.get("timestamp", int(time.time())) + + if not event_type: + return {"msg": "invalid event type"}, 400 + + if event_type == "sync_request": + leader_sid = self._repository.get_current_leader(workflow_id) + target_sid: str | None + if leader_sid and self.is_session_active(workflow_id, leader_sid): + target_sid = leader_sid + else: + if leader_sid: + self._repository.delete_leader(workflow_id) + target_sid = self._select_graph_leader(workflow_id, preferred_sid=sid) + if target_sid: + self._repository.set_leader(workflow_id, target_sid) + self.broadcast_leader_change(workflow_id, target_sid) + + if not target_sid: + return {"msg": "no_active_leader"}, 200 + + self._socketio.emit( + "collaboration_update", + {"type": event_type, "userId": user_id, "data": event_data, "timestamp": timestamp}, + room=target_sid, + ) + + return {"msg": "sync_request_forwarded"}, 200 + + self._socketio.emit( + "collaboration_update", + {"type": event_type, "userId": user_id, "data": event_data, "timestamp": timestamp}, + room=workflow_id, + skip_sid=sid, + ) + + return {"msg": "event_broadcasted"}, 200 + + def relay_graph_event(self, sid: str, data: object) -> tuple[dict[str, str], int]: + mapping = self._repository.get_sid_mapping(sid) + if not mapping: + return {"msg": "unauthorized"}, 401 + + workflow_id = mapping["workflow_id"] + self.refresh_session_state(workflow_id, sid) + + self._socketio.emit("graph_update", data, room=workflow_id, skip_sid=sid) + + return {"msg": "graph_update_broadcasted"}, 200 + + def get_or_set_leader(self, workflow_id: str, sid: str) -> str: + current_leader = self._repository.get_current_leader(workflow_id) + + if current_leader: + if self.is_session_active(workflow_id, current_leader): + return current_leader + self._repository.delete_session(workflow_id, current_leader) + self._repository.delete_leader(workflow_id) + + was_set = self._repository.set_leader_if_absent(workflow_id, sid) + + if was_set: + if current_leader: + self.broadcast_leader_change(workflow_id, sid) + return sid + + current_leader = self._repository.get_current_leader(workflow_id) + if current_leader: + return current_leader + + return sid + + def handle_leader_disconnect(self, workflow_id: str, disconnected_sid: str) -> None: + current_leader = self._repository.get_current_leader(workflow_id) + if not current_leader: + return + + if current_leader != disconnected_sid: + return + + new_leader_sid = self._select_graph_leader(workflow_id) + if new_leader_sid: + self._repository.set_leader(workflow_id, new_leader_sid) + self.broadcast_leader_change(workflow_id, new_leader_sid) + else: + self._repository.delete_leader(workflow_id) + + def broadcast_leader_change(self, workflow_id: str, new_leader_sid: str | None) -> None: + for sid in self._repository.get_session_sids(workflow_id): + try: + is_leader = new_leader_sid is not None and sid == new_leader_sid + self._socketio.emit("status", {"isLeader": is_leader}, room=sid) + except Exception: + logging.exception("Failed to emit leader status to session %s", sid) + + def get_current_leader(self, workflow_id: str) -> str | None: + return self._repository.get_current_leader(workflow_id) + + def _prune_inactive_sessions(self, workflow_id: str) -> list[WorkflowSessionInfo]: + """Remove inactive sessions from storage and return active sessions only.""" + sessions = self._repository.list_sessions(workflow_id) + if not sessions: + return [] + + active_sessions: list[WorkflowSessionInfo] = [] + stale_sids: list[str] = [] + for session in sessions: + sid = session["sid"] + if self.is_session_active(workflow_id, sid): + active_sessions.append(session) + else: + stale_sids.append(sid) + + for sid in stale_sids: + self._repository.delete_session(workflow_id, sid) + + return active_sessions + + def broadcast_online_users(self, workflow_id: str) -> None: + users = self._prune_inactive_sessions(workflow_id) + users.sort(key=lambda x: x.get("connected_at") or 0) + + leader_sid = self.get_current_leader(workflow_id) + previous_leader = leader_sid + active_sids = {user["sid"] for user in users} + if leader_sid and leader_sid not in active_sids: + self._repository.delete_leader(workflow_id) + leader_sid = None + + if not leader_sid and users: + leader_sid = self._select_graph_leader(workflow_id) + if leader_sid: + self._repository.set_leader(workflow_id, leader_sid) + + if leader_sid != previous_leader: + self.broadcast_leader_change(workflow_id, leader_sid) + + self._socketio.emit( + "online_users", + {"workflow_id": workflow_id, "users": users, "leader": leader_sid}, + room=workflow_id, + ) + + def refresh_session_state(self, workflow_id: str, sid: str) -> None: + self._repository.refresh_session_state(workflow_id, sid) + self._ensure_leader(workflow_id, sid) + + def _ensure_leader(self, workflow_id: str, sid: str) -> None: + current_leader = self._repository.get_current_leader(workflow_id) + if current_leader and self.is_session_active(workflow_id, current_leader): + self._repository.expire_leader(workflow_id) + return + + if current_leader: + self._repository.delete_leader(workflow_id) + + self._repository.set_leader(workflow_id, sid) + self.broadcast_leader_change(workflow_id, sid) + + def _select_graph_leader(self, workflow_id: str, preferred_sid: str | None = None) -> str | None: + session_sids = [ + session["sid"] + for session in self._repository.list_sessions(workflow_id) + if session.get("graph_active", True) and self.is_session_active(workflow_id, session["sid"]) + ] + if not session_sids: + return None + if preferred_sid and preferred_sid in session_sids: + return preferred_sid + return session_sids[0] + + def is_session_active(self, workflow_id: str, sid: str) -> bool: + if not sid: + return False + + try: + if not self._socketio.manager.is_connected(sid, "/"): + return False + except AttributeError: + return False + + if not self._repository.session_exists(workflow_id, sid): + return False + + if not self._repository.sid_mapping_exists(sid): + return False + + return True diff --git a/api/services/workflow_comment_service.py b/api/services/workflow_comment_service.py new file mode 100644 index 0000000000..ff47e4f253 --- /dev/null +++ b/api/services/workflow_comment_service.py @@ -0,0 +1,564 @@ +import logging +from collections.abc import Sequence + +from sqlalchemy import desc, select +from sqlalchemy.orm import Session, selectinload +from werkzeug.exceptions import Forbidden, NotFound + +from configs import dify_config +from extensions.ext_database import db +from libs.datetime_utils import naive_utc_now +from libs.helper import uuid_value +from models import App, TenantAccountJoin, WorkflowComment, WorkflowCommentMention, WorkflowCommentReply +from models.account import Account +from tasks.mail_workflow_comment_task import send_workflow_comment_mention_email_task + +logger = logging.getLogger(__name__) + + +class WorkflowCommentService: + """Service for managing workflow comments.""" + + @staticmethod + def _validate_content(content: str) -> None: + if len(content.strip()) == 0: + raise ValueError("Comment content cannot be empty") + + if len(content) > 1000: + raise ValueError("Comment content cannot exceed 1000 characters") + + @staticmethod + def _filter_valid_mentioned_user_ids( + mentioned_user_ids: Sequence[str], *, session: Session, tenant_id: str + ) -> list[str]: + """Return deduplicated UUID user IDs that belong to the tenant, preserving input order.""" + unique_user_ids: list[str] = [] + seen: set[str] = set() + for user_id in mentioned_user_ids: + if not isinstance(user_id, str): + continue + if not uuid_value(user_id): + continue + if user_id in seen: + continue + seen.add(user_id) + unique_user_ids.append(user_id) + if not unique_user_ids: + return [] + + tenant_member_ids = { + str(account_id) + for account_id in session.scalars( + select(TenantAccountJoin.account_id).where( + TenantAccountJoin.tenant_id == tenant_id, + TenantAccountJoin.account_id.in_(unique_user_ids), + ) + ).all() + } + + return [user_id for user_id in unique_user_ids if user_id in tenant_member_ids] + + @staticmethod + def _format_comment_excerpt(content: str, max_length: int = 200) -> str: + """Trim comment content for email display.""" + trimmed = content.strip() + if len(trimmed) <= max_length: + return trimmed + if max_length <= 3: + return trimmed[:max_length] + return f"{trimmed[: max_length - 3].rstrip()}..." + + @staticmethod + def _build_mention_email_payloads( + session: Session, + tenant_id: str, + app_id: str, + mentioner_id: str, + mentioned_user_ids: Sequence[str], + content: str, + ) -> list[dict[str, str]]: + """Prepare email payloads for mentioned users, including workflow app link.""" + if not mentioned_user_ids: + return [] + + candidate_user_ids = [user_id for user_id in mentioned_user_ids if user_id != mentioner_id] + if not candidate_user_ids: + return [] + + app_name_value = session.scalar(select(App.name).where(App.id == app_id, App.tenant_id == tenant_id)) + app_name = app_name_value if isinstance(app_name_value, str) and app_name_value else "Dify app" + commenter_name_value = session.scalar(select(Account.name).where(Account.id == mentioner_id)) + commenter_name = ( + commenter_name_value if isinstance(commenter_name_value, str) and commenter_name_value else "Dify user" + ) + comment_excerpt = WorkflowCommentService._format_comment_excerpt(content) + base_url = dify_config.CONSOLE_WEB_URL.rstrip("/") + app_url = f"{base_url}/app/{app_id}/workflow" + + accounts = session.scalars( + select(Account) + .join(TenantAccountJoin, TenantAccountJoin.account_id == Account.id) + .where(TenantAccountJoin.tenant_id == tenant_id, Account.id.in_(candidate_user_ids)) + ).all() + + payloads: list[dict[str, str]] = [] + for account in accounts: + email = account.email + if not isinstance(email, str) or not email: + continue + mentioned_name = account.name if isinstance(account.name, str) and account.name else email + language = ( + account.interface_language + if isinstance(account.interface_language, str) and account.interface_language + else "en-US" + ) + payloads.append( + { + "language": language, + "to": email, + "mentioned_name": mentioned_name, + "commenter_name": commenter_name, + "app_name": app_name, + "comment_content": comment_excerpt, + "app_url": app_url, + } + ) + return payloads + + @staticmethod + def _dispatch_mention_emails(payloads: Sequence[dict[str, str]]) -> None: + """Enqueue mention notification emails.""" + for payload in payloads: + send_workflow_comment_mention_email_task.delay(**payload) + + @staticmethod + def get_comments(tenant_id: str, app_id: str) -> Sequence[WorkflowComment]: + """Get all comments for a workflow.""" + with Session(db.engine) as session: + # Get all comments with eager loading + stmt = ( + select(WorkflowComment) + .options(selectinload(WorkflowComment.replies), selectinload(WorkflowComment.mentions)) + .where(WorkflowComment.tenant_id == tenant_id, WorkflowComment.app_id == app_id) + .order_by(desc(WorkflowComment.created_at)) + ) + + comments = session.scalars(stmt).all() + + # Batch preload all Account objects to avoid N+1 queries + WorkflowCommentService._preload_accounts(session, comments) + + return comments + + @staticmethod + def _preload_accounts(session: Session, comments: Sequence[WorkflowComment]) -> None: + """Batch preload Account objects for comments, replies, and mentions.""" + # Collect all user IDs + user_ids: set[str] = set() + for comment in comments: + user_ids.add(comment.created_by) + if comment.resolved_by: + user_ids.add(comment.resolved_by) + user_ids.update(reply.created_by for reply in comment.replies) + user_ids.update(mention.mentioned_user_id for mention in comment.mentions) + + if not user_ids: + return + + # Batch query all accounts + accounts = session.scalars(select(Account).where(Account.id.in_(user_ids))).all() + account_map = {str(account.id): account for account in accounts} + + # Cache accounts on objects + for comment in comments: + comment.cache_created_by_account(account_map.get(comment.created_by)) + comment.cache_resolved_by_account(account_map.get(comment.resolved_by) if comment.resolved_by else None) + for reply in comment.replies: + reply.cache_created_by_account(account_map.get(reply.created_by)) + for mention in comment.mentions: + mention.cache_mentioned_user_account(account_map.get(mention.mentioned_user_id)) + + @staticmethod + def get_comment(tenant_id: str, app_id: str, comment_id: str, session: Session | None = None) -> WorkflowComment: + """Get a specific comment.""" + + def _get_comment(session: Session) -> WorkflowComment: + stmt = ( + select(WorkflowComment) + .options(selectinload(WorkflowComment.replies), selectinload(WorkflowComment.mentions)) + .where( + WorkflowComment.id == comment_id, + WorkflowComment.tenant_id == tenant_id, + WorkflowComment.app_id == app_id, + ) + ) + comment = session.scalar(stmt) + + if not comment: + raise NotFound("Comment not found") + + # Preload accounts to avoid N+1 queries + WorkflowCommentService._preload_accounts(session, [comment]) + + return comment + + if session is not None: + return _get_comment(session) + else: + with Session(db.engine, expire_on_commit=False) as session: + return _get_comment(session) + + @staticmethod + def create_comment( + tenant_id: str, + app_id: str, + created_by: str, + content: str, + position_x: float, + position_y: float, + mentioned_user_ids: list[str] | None = None, + ) -> dict: + """Create a new workflow comment and send mention notification emails.""" + WorkflowCommentService._validate_content(content) + + with Session(db.engine) as session: + comment = WorkflowComment( + tenant_id=tenant_id, + app_id=app_id, + position_x=position_x, + position_y=position_y, + content=content, + created_by=created_by, + ) + + session.add(comment) + session.flush() # Get the comment ID for mentions + + # Create mentions if specified + mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + mentioned_user_ids or [], + session=session, + tenant_id=tenant_id, + ) + for user_id in mentioned_user_ids: + mention = WorkflowCommentMention( + comment_id=comment.id, + reply_id=None, # This is a comment mention, not reply mention + mentioned_user_id=user_id, + ) + session.add(mention) + + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=tenant_id, + app_id=app_id, + mentioner_id=created_by, + mentioned_user_ids=mentioned_user_ids, + content=content, + ) + + session.commit() + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + # Return only what we need - id and created_at + return {"id": comment.id, "created_at": comment.created_at} + + @staticmethod + def update_comment( + tenant_id: str, + app_id: str, + comment_id: str, + user_id: str, + content: str, + position_x: float | None = None, + position_y: float | None = None, + mentioned_user_ids: list[str] | None = None, + ) -> dict: + """Update a workflow comment and notify newly mentioned users. + + `mentioned_user_ids=None` means "leave mentions unchanged". + Passing an explicit list replaces the existing comment mentions, including clearing them with `[]`. + """ + WorkflowCommentService._validate_content(content) + + with Session(db.engine, expire_on_commit=False) as session: + # Get comment with validation + stmt = select(WorkflowComment).where( + WorkflowComment.id == comment_id, + WorkflowComment.tenant_id == tenant_id, + WorkflowComment.app_id == app_id, + ) + comment = session.scalar(stmt) + + if not comment: + raise NotFound("Comment not found") + + # Only the creator can update the comment + if comment.created_by != user_id: + raise Forbidden("Only the comment creator can update it") + + # Update comment fields + comment.content = content + if position_x is not None: + comment.position_x = position_x + if position_y is not None: + comment.position_y = position_y + + mention_email_payloads: list[dict[str, str]] = [] + if mentioned_user_ids is not None: + # Replace comment mentions only when the client explicitly sends the mention list. + existing_mentions = session.scalars( + select(WorkflowCommentMention).where( + WorkflowCommentMention.comment_id == comment.id, + WorkflowCommentMention.reply_id.is_(None), # Only comment mentions, not reply mentions + ) + ).all() + existing_mentioned_user_ids = {mention.mentioned_user_id for mention in existing_mentions} + for mention in existing_mentions: + session.delete(mention) + + filtered_mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + mentioned_user_ids, + session=session, + tenant_id=tenant_id, + ) + new_mentioned_user_ids = [ + mentioned_user_id + for mentioned_user_id in filtered_mentioned_user_ids + if mentioned_user_id not in existing_mentioned_user_ids + ] + for mentioned_user_id in filtered_mentioned_user_ids: + mention = WorkflowCommentMention( + comment_id=comment.id, + reply_id=None, # This is a comment mention + mentioned_user_id=mentioned_user_id, + ) + session.add(mention) + + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=tenant_id, + app_id=app_id, + mentioner_id=user_id, + mentioned_user_ids=new_mentioned_user_ids, + content=content, + ) + + session.commit() + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + return {"id": comment.id, "updated_at": comment.updated_at} + + @staticmethod + def delete_comment(tenant_id: str, app_id: str, comment_id: str, user_id: str) -> None: + """Delete a workflow comment.""" + with Session(db.engine, expire_on_commit=False) as session: + comment = WorkflowCommentService.get_comment(tenant_id, app_id, comment_id, session) + + # Only the creator can delete the comment + if comment.created_by != user_id: + raise Forbidden("Only the comment creator can delete it") + + # Delete associated mentions (both comment and reply mentions) + mentions = session.scalars( + select(WorkflowCommentMention).where(WorkflowCommentMention.comment_id == comment_id) + ).all() + for mention in mentions: + session.delete(mention) + + # Delete associated replies + replies = session.scalars( + select(WorkflowCommentReply).where(WorkflowCommentReply.comment_id == comment_id) + ).all() + for reply in replies: + session.delete(reply) + + session.delete(comment) + session.commit() + + @staticmethod + def resolve_comment(tenant_id: str, app_id: str, comment_id: str, user_id: str) -> WorkflowComment: + """Resolve a workflow comment.""" + with Session(db.engine, expire_on_commit=False) as session: + comment = WorkflowCommentService.get_comment(tenant_id, app_id, comment_id, session) + if comment.resolved: + return comment + + comment.resolved = True + comment.resolved_at = naive_utc_now() + comment.resolved_by = user_id + session.commit() + + return comment + + @staticmethod + def create_reply( + comment_id: str, content: str, created_by: str, mentioned_user_ids: list[str] | None = None + ) -> dict: + """Add a reply to a workflow comment and notify mentioned users.""" + WorkflowCommentService._validate_content(content) + + with Session(db.engine, expire_on_commit=False) as session: + # Check if comment exists + comment = session.get(WorkflowComment, comment_id) + if not comment: + raise NotFound("Comment not found") + + reply = WorkflowCommentReply(comment_id=comment_id, content=content, created_by=created_by) + + session.add(reply) + session.flush() # Get the reply ID for mentions + + # Create mentions if specified + mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + mentioned_user_ids or [], + session=session, + tenant_id=comment.tenant_id, + ) + for user_id in mentioned_user_ids: + # Create mention linking to specific reply + mention = WorkflowCommentMention(comment_id=comment_id, reply_id=reply.id, mentioned_user_id=user_id) + session.add(mention) + + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=comment.tenant_id, + app_id=comment.app_id, + mentioner_id=created_by, + mentioned_user_ids=mentioned_user_ids, + content=content, + ) + + session.commit() + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + return {"id": reply.id, "created_at": reply.created_at} + + @staticmethod + def _get_reply_in_comment_scope( + *, + session: Session, + tenant_id: str, + app_id: str, + comment_id: str, + reply_id: str, + ) -> WorkflowCommentReply: + """Get a reply scoped to tenant/app/comment to prevent cross-thread mutations.""" + stmt = ( + select(WorkflowCommentReply) + .join(WorkflowComment, WorkflowComment.id == WorkflowCommentReply.comment_id) + .where( + WorkflowCommentReply.id == reply_id, + WorkflowCommentReply.comment_id == comment_id, + WorkflowComment.tenant_id == tenant_id, + WorkflowComment.app_id == app_id, + ) + .limit(1) + ) + reply = session.scalar(stmt) + if not reply: + raise NotFound("Reply not found") + return reply + + @staticmethod + def update_reply( + tenant_id: str, + app_id: str, + comment_id: str, + reply_id: str, + user_id: str, + content: str, + mentioned_user_ids: list[str] | None = None, + ) -> dict: + """Update a comment reply and notify newly mentioned users.""" + WorkflowCommentService._validate_content(content) + + with Session(db.engine, expire_on_commit=False) as session: + reply = WorkflowCommentService._get_reply_in_comment_scope( + session=session, + tenant_id=tenant_id, + app_id=app_id, + comment_id=comment_id, + reply_id=reply_id, + ) + + # Only the creator can update the reply + if reply.created_by != user_id: + raise Forbidden("Only the reply creator can update it") + + reply.content = content + + # Update mentions - first remove existing mentions for this reply + existing_mentions = session.scalars( + select(WorkflowCommentMention).where(WorkflowCommentMention.reply_id == reply.id) + ).all() + existing_mentioned_user_ids = {mention.mentioned_user_id for mention in existing_mentions} + for mention in existing_mentions: + session.delete(mention) + + # Add mentions + raw_mentioned_user_ids = mentioned_user_ids or [] + comment = session.get(WorkflowComment, reply.comment_id) + mentioned_user_ids = [] + if comment: + mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + raw_mentioned_user_ids, + session=session, + tenant_id=comment.tenant_id, + ) + new_mentioned_user_ids = [ + user_id for user_id in mentioned_user_ids if user_id not in existing_mentioned_user_ids + ] + for user_id_str in mentioned_user_ids: + mention = WorkflowCommentMention( + comment_id=reply.comment_id, reply_id=reply.id, mentioned_user_id=user_id_str + ) + session.add(mention) + + mention_email_payloads: list[dict[str, str]] = [] + if comment: + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=comment.tenant_id, + app_id=comment.app_id, + mentioner_id=user_id, + mentioned_user_ids=new_mentioned_user_ids, + content=content, + ) + + session.commit() + session.refresh(reply) # Refresh to get updated timestamp + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + return {"id": reply.id, "updated_at": reply.updated_at} + + @staticmethod + def delete_reply(tenant_id: str, app_id: str, comment_id: str, reply_id: str, user_id: str) -> None: + """Delete a comment reply.""" + with Session(db.engine, expire_on_commit=False) as session: + reply = WorkflowCommentService._get_reply_in_comment_scope( + session=session, + tenant_id=tenant_id, + app_id=app_id, + comment_id=comment_id, + reply_id=reply_id, + ) + + # Only the creator can delete the reply + if reply.created_by != user_id: + raise Forbidden("Only the reply creator can delete it") + + # Delete associated mentions first + mentions = session.scalars( + select(WorkflowCommentMention).where(WorkflowCommentMention.reply_id == reply_id) + ).all() + for mention in mentions: + session.delete(mention) + + session.delete(reply) + session.commit() + + @staticmethod + def validate_comment_access(comment_id: str, tenant_id: str, app_id: str) -> WorkflowComment: + """Validate that a comment belongs to the specified tenant and app.""" + return WorkflowCommentService.get_comment(tenant_id, app_id, comment_id) diff --git a/api/services/workflow_draft_variable_service.py b/api/services/workflow_draft_variable_service.py index 9ed60bf86b..5ec00ee336 100644 --- a/api/services/workflow_draft_variable_service.py +++ b/api/services/workflow_draft_variable_service.py @@ -3,23 +3,11 @@ import json import logging from collections.abc import Mapping, Sequence from concurrent.futures import ThreadPoolExecutor +from datetime import datetime from enum import StrEnum -from typing import Any, ClassVar +from typing import Any, ClassVar, NotRequired, TypedDict -from graphon.enums import NodeType -from graphon.file import File -from graphon.nodes import BuiltinNodeTypes -from graphon.nodes.variable_assigner.common.helpers import get_updated_variables -from graphon.variable_loader import VariableLoader -from graphon.variables import Segment, StringSegment, VariableBase -from graphon.variables.consts import SELECTORS_LENGTH -from graphon.variables.segments import ( - ArrayFileSegment, - FileSegment, -) -from graphon.variables.types import SegmentType -from graphon.variables.utils import dumps_with_segments -from sqlalchemy import Engine, orm, select +from sqlalchemy import Engine, delete, orm, select from sqlalchemy.dialects.mysql import insert as mysql_insert from sqlalchemy.dialects.postgresql import insert as pg_insert from sqlalchemy.orm import Session, sessionmaker @@ -39,6 +27,19 @@ from core.workflow.variable_prefixes import ( from extensions.ext_storage import storage from factories.file_factory import StorageKeyLoader from factories.variable_factory import build_segment, segment_to_variable +from graphon.enums import NodeType +from graphon.file import File +from graphon.nodes import BuiltinNodeTypes +from graphon.nodes.variable_assigner.common.helpers import get_updated_variables +from graphon.variable_loader import VariableLoader +from graphon.variables import Segment, StringSegment, VariableBase +from graphon.variables.consts import SELECTORS_LENGTH +from graphon.variables.segments import ( + ArrayFileSegment, + FileSegment, +) +from graphon.variables.types import SegmentType +from graphon.variables.utils import dumps_with_segments from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 from models import Account, App, Conversation @@ -222,11 +223,10 @@ class WorkflowDraftVariableService: ) def get_variable(self, variable_id: str) -> WorkflowDraftVariable | None: - return ( - self._session.query(WorkflowDraftVariable) + return self._session.scalar( + select(WorkflowDraftVariable) .options(orm.selectinload(WorkflowDraftVariable.variable_file)) .where(WorkflowDraftVariable.id == variable_id) - .first() ) def get_draft_variables_by_selectors( @@ -254,20 +254,21 @@ class WorkflowDraftVariableService: # Alternatively, a `SELECT` statement could be constructed for each selector and # combined using `UNION` to fetch all rows. # Benchmarking indicates that both approaches yield comparable performance. - query = ( - self._session.query(WorkflowDraftVariable) - .options( - orm.selectinload(WorkflowDraftVariable.variable_file).selectinload( - WorkflowDraftVariableFile.upload_file + return list( + self._session.scalars( + select(WorkflowDraftVariable) + .options( + orm.selectinload(WorkflowDraftVariable.variable_file).selectinload( + WorkflowDraftVariableFile.upload_file + ) + ) + .where( + WorkflowDraftVariable.app_id == app_id, + WorkflowDraftVariable.user_id == user_id, + or_(*ors), ) ) - .where( - WorkflowDraftVariable.app_id == app_id, - WorkflowDraftVariable.user_id == user_id, - or_(*ors), - ) ) - return query.all() def list_variables_without_values( self, app_id: str, page: int, limit: int, user_id: str @@ -277,18 +278,21 @@ class WorkflowDraftVariableService: WorkflowDraftVariable.user_id == user_id, ] total = None - query = self._session.query(WorkflowDraftVariable).where(*criteria) + base_stmt = select(WorkflowDraftVariable).where(*criteria) if page == 1: - total = query.count() - variables = ( - # Do not load the `value` field - query.options( - orm.defer(WorkflowDraftVariable.value, raiseload=True), + from sqlalchemy import func as sa_func + + total = self._session.scalar(select(sa_func.count()).select_from(base_stmt.subquery())) + variables = list( + self._session.scalars( + # Do not load the `value` field + base_stmt.options( + orm.defer(WorkflowDraftVariable.value, raiseload=True), + ) + .order_by(WorkflowDraftVariable.created_at.desc()) + .limit(limit) + .offset((page - 1) * limit) ) - .order_by(WorkflowDraftVariable.created_at.desc()) - .limit(limit) - .offset((page - 1) * limit) - .all() ) return WorkflowDraftVariableList(variables=variables, total=total) @@ -299,11 +303,13 @@ class WorkflowDraftVariableService: WorkflowDraftVariable.node_id == node_id, WorkflowDraftVariable.user_id == user_id, ] - query = self._session.query(WorkflowDraftVariable).where(*criteria) - variables = ( - query.options(orm.selectinload(WorkflowDraftVariable.variable_file)) - .order_by(WorkflowDraftVariable.created_at.desc()) - .all() + variables = list( + self._session.scalars( + select(WorkflowDraftVariable) + .options(orm.selectinload(WorkflowDraftVariable.variable_file)) + .where(*criteria) + .order_by(WorkflowDraftVariable.created_at.desc()) + ) ) return WorkflowDraftVariableList(variables=variables) @@ -326,8 +332,8 @@ class WorkflowDraftVariableService: return self._get_variable(app_id, node_id, name, user_id=user_id) def _get_variable(self, app_id: str, node_id: str, name: str, user_id: str) -> WorkflowDraftVariable | None: - return ( - self._session.query(WorkflowDraftVariable) + return self._session.scalar( + select(WorkflowDraftVariable) .options(orm.selectinload(WorkflowDraftVariable.variable_file)) .where( WorkflowDraftVariable.app_id == app_id, @@ -335,7 +341,6 @@ class WorkflowDraftVariableService: WorkflowDraftVariable.name == name, WorkflowDraftVariable.user_id == user_id, ) - .first() ) def update_variable( @@ -488,20 +493,20 @@ class WorkflowDraftVariableService: self._session.delete(variable) def delete_user_workflow_variables(self, app_id: str, user_id: str): - ( - self._session.query(WorkflowDraftVariable) + self._session.execute( + delete(WorkflowDraftVariable) .where( WorkflowDraftVariable.app_id == app_id, WorkflowDraftVariable.user_id == user_id, ) - .delete(synchronize_session=False) + .execution_options(synchronize_session=False) ) def delete_app_workflow_variables(self, app_id: str): - ( - self._session.query(WorkflowDraftVariable) + self._session.execute( + delete(WorkflowDraftVariable) .where(WorkflowDraftVariable.app_id == app_id) - .delete(synchronize_session=False) + .execution_options(synchronize_session=False) ) def delete_workflow_draft_variable_file(self, deletions: list[DraftVarFileDeletion]): @@ -540,14 +545,14 @@ class WorkflowDraftVariableService: return self._delete_node_variables(app_id, node_id, user_id=user_id) def _delete_node_variables(self, app_id: str, node_id: str, user_id: str): - ( - self._session.query(WorkflowDraftVariable) + self._session.execute( + delete(WorkflowDraftVariable) .where( WorkflowDraftVariable.app_id == app_id, WorkflowDraftVariable.node_id == node_id, WorkflowDraftVariable.user_id == user_id, ) - .delete(synchronize_session=False) + .execution_options(synchronize_session=False) ) def _get_conversation_id_from_draft_variable(self, app_id: str, user_id: str) -> str | None: @@ -588,13 +593,11 @@ class WorkflowDraftVariableService: conv_id = self._get_conversation_id_from_draft_variable(workflow.app_id, account_id) if conv_id is not None: - conversation = ( - self._session.query(Conversation) - .where( + conversation = self._session.scalar( + select(Conversation).where( Conversation.id == conv_id, Conversation.app_id == workflow.app_id, ) - .first() ) # Only return the conversation ID if it exists and is valid (has a correspond conversation record in DB). if conversation is not None: @@ -723,8 +726,27 @@ def _batch_upsert_draft_variable( session.execute(stmt) -def _model_to_insertion_dict(model: WorkflowDraftVariable) -> dict[str, Any]: - d: dict[str, Any] = { +class _InsertionDict(TypedDict): + id: str + app_id: str + user_id: str | None + last_edited_at: datetime | None + node_id: str + name: str + selector: str + value_type: SegmentType + value: str + node_execution_id: str | None + file_id: str | None + visible: NotRequired[bool] + editable: NotRequired[bool] + created_at: NotRequired[datetime] + updated_at: NotRequired[datetime] + description: NotRequired[str] + + +def _model_to_insertion_dict(model: WorkflowDraftVariable) -> _InsertionDict: + d: _InsertionDict = { "id": model.id, "app_id": model.app_id, "user_id": model.user_id, @@ -1075,9 +1097,8 @@ class DraftVariableSaver: ) engine = bind = self._session.get_bind() assert isinstance(engine, Engine) - with Session(bind=engine, expire_on_commit=False) as session: + with sessionmaker(bind=engine, expire_on_commit=False).begin() as session: session.add(variable_file) - session.commit() return truncation_result.result, variable_file diff --git a/api/services/workflow_event_snapshot_service.py b/api/services/workflow_event_snapshot_service.py index 601e9261fc..5fca444723 100644 --- a/api/services/workflow_event_snapshot_service.py +++ b/api/services/workflow_event_snapshot_service.py @@ -9,10 +9,6 @@ from collections.abc import Generator, Mapping, Sequence from dataclasses import dataclass from typing import Any -from graphon.entities import WorkflowStartReason -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import desc, select from sqlalchemy.orm import Session, sessionmaker @@ -26,6 +22,10 @@ from core.app.entities.task_entities import ( WorkflowStartStreamResponse, ) from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext +from graphon.entities import WorkflowStartReason +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from models.model import AppMode, Message from models.workflow import WorkflowNodeExecutionTriggeredFrom, WorkflowRun from repositories.api_workflow_node_execution_repository import WorkflowNodeExecutionSnapshot diff --git a/api/services/workflow_run_service.py b/api/services/workflow_run_service.py index b903d8df5f..29b9e72a00 100644 --- a/api/services/workflow_run_service.py +++ b/api/services/workflow_run_service.py @@ -1,5 +1,6 @@ import threading from collections.abc import Sequence +from typing import TypedDict from sqlalchemy import Engine from sqlalchemy.orm import sessionmaker @@ -19,6 +20,14 @@ from repositories.api_workflow_run_repository import APIWorkflowRunRepository from repositories.factory import DifyAPIRepositoryFactory +class WorkflowRunListArgs(TypedDict, total=False): + """Expected shape of the args dict passed to workflow run pagination methods.""" + + limit: int + last_id: str + status: str + + class WorkflowRunService: _session_factory: sessionmaker _workflow_run_repo: APIWorkflowRunRepository @@ -37,7 +46,10 @@ class WorkflowRunService: self._workflow_run_repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(self._session_factory) def get_paginate_advanced_chat_workflow_runs( - self, app_model: App, args: dict, triggered_from: WorkflowRunTriggeredFrom = WorkflowRunTriggeredFrom.DEBUGGING + self, + app_model: App, + args: WorkflowRunListArgs, + triggered_from: WorkflowRunTriggeredFrom = WorkflowRunTriggeredFrom.DEBUGGING, ) -> InfiniteScrollPagination: """ Get advanced chat app workflow run list @@ -73,7 +85,10 @@ class WorkflowRunService: return pagination def get_paginate_workflow_runs( - self, app_model: App, args: dict, triggered_from: WorkflowRunTriggeredFrom = WorkflowRunTriggeredFrom.DEBUGGING + self, + app_model: App, + args: WorkflowRunListArgs, + triggered_from: WorkflowRunTriggeredFrom = WorkflowRunTriggeredFrom.DEBUGGING, ) -> InfiniteScrollPagination: """ Get workflow run list diff --git a/api/services/workflow_service.py b/api/services/workflow_service.py index eaffb60c63..d71223314e 100644 --- a/api/services/workflow_service.py +++ b/api/services/workflow_service.py @@ -5,7 +5,41 @@ import uuid from collections.abc import Callable, Generator, Mapping, Sequence from typing import Any, cast -from graphon.entities import GraphInitParams, WorkflowNodeExecution +from sqlalchemy import exists, select +from sqlalchemy.orm import Session, sessionmaker + +from configs import dify_config +from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager +from core.app.apps.workflow.app_config_manager import WorkflowAppConfigManager +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context +from core.app.file_access import DatabaseFileAccessController +from core.entities import PluginCredentialType +from core.plugin.impl.model_runtime_factory import create_plugin_model_assembly, create_plugin_provider_manager +from core.repositories import DifyCoreRepositoryFactory +from core.repositories.human_input_repository import FormCreateParams, HumanInputFormRepositoryImpl +from core.trigger.constants import is_trigger_node_type +from core.workflow.human_input_compat import ( + DeliveryChannelConfig, + normalize_human_input_node_data_for_graph, + parse_human_input_delivery_methods, +) +from core.workflow.node_factory import ( + LATEST_VERSION, + DifyGraphInitContext, + get_node_type_classes_mapping, + is_start_node_type, +) +from core.workflow.node_runtime import DifyHumanInputNodeRuntime, apply_dify_debug_email_recipient +from core.workflow.system_variables import build_bootstrap_variables, build_system_variables, default_system_variables +from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool +from core.workflow.workflow_entry import WorkflowEntry +from enterprise.telemetry.draft_trace import enqueue_draft_node_execution_trace +from enums.cloud_plan import CloudPlan +from events.app_event import app_draft_workflow_was_synced, app_published_workflow_was_updated +from extensions.ext_database import db +from extensions.ext_storage import storage +from factories.file_factory import build_from_mapping, build_from_mappings +from graphon.entities import WorkflowNodeExecution from graphon.entities.graph_config import NodeConfigDict from graphon.entities.pause_reason import HumanInputRequired from graphon.enums import ( @@ -30,35 +64,6 @@ from graphon.variable_loader import load_into_variable_pool from graphon.variables import VariableBase from graphon.variables.input_entities import VariableEntityType from graphon.variables.variables import Variable -from sqlalchemy import exists, select -from sqlalchemy.orm import Session, sessionmaker - -from configs import dify_config -from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager -from core.app.apps.workflow.app_config_manager import WorkflowAppConfigManager -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context -from core.app.file_access import DatabaseFileAccessController -from core.entities import PluginCredentialType -from core.plugin.impl.model_runtime_factory import create_plugin_model_assembly, create_plugin_provider_manager -from core.repositories import DifyCoreRepositoryFactory -from core.repositories.human_input_repository import FormCreateParams, HumanInputFormRepositoryImpl -from core.trigger.constants import is_trigger_node_type -from core.workflow.human_input_compat import ( - DeliveryChannelConfig, - normalize_human_input_node_data_for_graph, - parse_human_input_delivery_methods, -) -from core.workflow.node_factory import LATEST_VERSION, get_node_type_classes_mapping, is_start_node_type -from core.workflow.node_runtime import DifyHumanInputNodeRuntime, apply_dify_debug_email_recipient -from core.workflow.system_variables import build_bootstrap_variables, build_system_variables, default_system_variables -from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool -from core.workflow.workflow_entry import WorkflowEntry -from enterprise.telemetry.draft_trace import enqueue_draft_node_execution_trace -from enums.cloud_plan import CloudPlan -from events.app_event import app_draft_workflow_was_synced, app_published_workflow_was_updated -from extensions.ext_database import db -from extensions.ext_storage import storage -from factories.file_factory import build_from_mapping, build_from_mappings from libs.datetime_utils import naive_utc_now from models import Account from models.human_input import HumanInputFormRecipient, RecipientType @@ -194,6 +199,16 @@ class WorkflowService: return workflow + def get_accessible_app_ids(self, app_ids: Sequence[str], tenant_id: str) -> set[str]: + """ + Return app IDs that belong to the given tenant. + """ + if not app_ids: + return set() + + stmt = select(App.id).where(App.id.in_(app_ids), App.tenant_id == tenant_id) + return {str(app_id) for app_id in db.session.scalars(stmt).all()} + def get_all_published_workflow( self, *, @@ -236,8 +251,8 @@ class WorkflowService: self, *, app_model: App, - graph: dict, - features: dict, + graph: dict[str, Any], + features: dict[str, Any], unique_hash: str | None, account: Account, environment_variables: Sequence[VariableBase], @@ -291,6 +306,78 @@ class WorkflowService: # return draft workflow return workflow + def update_draft_workflow_environment_variables( + self, + *, + app_model: App, + environment_variables: Sequence[VariableBase], + account: Account, + ): + """ + Update draft workflow environment variables + """ + # fetch draft workflow by app_model + workflow = self.get_draft_workflow(app_model=app_model) + + if not workflow: + raise ValueError("No draft workflow found.") + + workflow.environment_variables = environment_variables + workflow.updated_by = account.id + workflow.updated_at = naive_utc_now() + + # commit db session changes + db.session.commit() + + def update_draft_workflow_conversation_variables( + self, + *, + app_model: App, + conversation_variables: Sequence[VariableBase], + account: Account, + ): + """ + Update draft workflow conversation variables + """ + # fetch draft workflow by app_model + workflow = self.get_draft_workflow(app_model=app_model) + + if not workflow: + raise ValueError("No draft workflow found.") + + workflow.conversation_variables = conversation_variables + workflow.updated_by = account.id + workflow.updated_at = naive_utc_now() + + # commit db session changes + db.session.commit() + + def update_draft_workflow_features( + self, + *, + app_model: App, + features: dict, + account: Account, + ): + """ + Update draft workflow features + """ + # fetch draft workflow by app_model + workflow = self.get_draft_workflow(app_model=app_model) + + if not workflow: + raise ValueError("No draft workflow found.") + + # validate features structure + self.validate_features_structure(app_model=app_model, features=features) + + workflow.features = json.dumps(features) + workflow.updated_by = account.id + workflow.updated_at = naive_utc_now() + + # commit db session changes + db.session.commit() + def restore_published_workflow_to_draft( self, *, @@ -571,7 +658,7 @@ class WorkflowService: except Exception as e: raise ValueError(f"Failed to validate default credential for tool provider {provider}: {str(e)}") - def _validate_load_balancing_credentials(self, workflow: Workflow, node_data: dict, node_id: str) -> None: + def _validate_load_balancing_credentials(self, workflow: Workflow, node_data: dict[str, Any], node_id: str) -> None: """ Validate load balancing credentials for a workflow node. @@ -837,7 +924,7 @@ class WorkflowService: with sessionmaker(db.engine).begin() as session: outputs = workflow_node_execution.load_full_outputs(session, storage) - with Session(bind=db.engine) as session, session.begin(): + with sessionmaker(bind=db.engine).begin() as session: draft_var_saver = DraftVariableSaver( session=session, app_id=app_model.id, @@ -848,7 +935,6 @@ class WorkflowService: user=account, ) draft_var_saver.save(process_data=node_execution.process_data, outputs=outputs) - session.commit() enqueue_draft_node_execution_trace( execution=workflow_node_execution, @@ -977,7 +1063,7 @@ class WorkflowService: enclosing_node_type_and_id = draft_workflow.get_enclosing_node_type_and_id(node_config) enclosing_node_id = enclosing_node_type_and_id[1] if enclosing_node_type_and_id else None - with Session(bind=db.engine) as session, session.begin(): + with sessionmaker(bind=db.engine).begin() as session: draft_var_saver = DraftVariableSaver( session=session, app_id=app_model.id, @@ -988,7 +1074,6 @@ class WorkflowService: enclosing_node_id=enclosing_node_id, ) draft_var_saver.save(outputs=outputs, process_data={}) - session.commit() return outputs @@ -1134,18 +1219,20 @@ class WorkflowService: node_config: NodeConfigDict, variable_pool: VariablePool, ) -> HumanInputNode: - graph_init_params = GraphInitParams( + run_context = build_dify_run_context( + tenant_id=workflow.tenant_id, + app_id=workflow.app_id, + user_id=account.id, + user_from=UserFrom.ACCOUNT, + invoke_from=InvokeFrom.DEBUGGER, + ) + graph_init_context = DifyGraphInitContext( workflow_id=workflow.id, graph_config=workflow.graph_dict, - run_context=build_dify_run_context( - tenant_id=workflow.tenant_id, - app_id=workflow.app_id, - user_id=account.id, - user_from=UserFrom.ACCOUNT, - invoke_from=InvokeFrom.DEBUGGER, - ), + run_context=run_context, call_depth=0, ) + graph_init_params = graph_init_context.to_graph_init_params() graph_runtime_state = GraphRuntimeState( variable_pool=variable_pool, start_at=time.perf_counter(), @@ -1155,7 +1242,7 @@ class WorkflowService: config=node_config, graph_init_params=graph_init_params, graph_runtime_state=graph_runtime_state, - runtime=DifyHumanInputNodeRuntime(graph_init_params.run_context), + runtime=DifyHumanInputNodeRuntime(run_context), ) return node @@ -1209,7 +1296,7 @@ class WorkflowService: return variable_pool def run_free_workflow_node( - self, node_data: dict, tenant_id: str, user_id: str, node_id: str, user_inputs: dict[str, Any] + self, node_data: dict[str, Any], tenant_id: str, user_id: str, node_id: str, user_inputs: dict[str, Any] ) -> WorkflowNodeExecution: """ Run free workflow node @@ -1356,7 +1443,7 @@ class WorkflowService: node_execution.status = WorkflowNodeExecutionStatus.FAILED node_execution.error = error - def convert_to_workflow(self, app_model: App, account: Account, args: dict) -> App: + def convert_to_workflow(self, app_model: App, account: Account, args: dict[str, Any]) -> App: """ Basic mode of chatbot app(expert mode) to workflow Completion App to Workflow App @@ -1416,7 +1503,7 @@ class WorkflowService: if node_type == BuiltinNodeTypes.HUMAN_INPUT: self._validate_human_input_node_data(node_data) - def validate_features_structure(self, app_model: App, features: dict): + def validate_features_structure(self, app_model: App, features: dict[str, Any]): match app_model.mode: case AppMode.ADVANCED_CHAT: return AdvancedChatAppConfigManager.config_validate( @@ -1429,7 +1516,7 @@ class WorkflowService: case _: raise ValueError(f"Invalid app mode: {app_model.mode}") - def _validate_human_input_node_data(self, node_data: dict) -> None: + def _validate_human_input_node_data(self, node_data: dict[str, Any]) -> None: """ Validate HumanInput node data format. @@ -1447,7 +1534,7 @@ class WorkflowService: raise ValueError(f"Invalid HumanInput node data: {str(e)}") def update_workflow( - self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict + self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict[str, Any] ) -> Workflow | None: """ Update workflow attributes @@ -1507,14 +1594,12 @@ class WorkflowService: # Don't use workflow.tool_published as it's not accurate for specific workflow versions # Check if there's a tool provider using this specific workflow version - tool_provider = ( - session.query(WorkflowToolProvider) - .where( + tool_provider = session.scalar( + select(WorkflowToolProvider).where( WorkflowToolProvider.tenant_id == workflow.tenant_id, WorkflowToolProvider.app_id == workflow.app_id, WorkflowToolProvider.version == workflow.version, ) - .first() ) if tool_provider: diff --git a/api/tasks/add_document_to_index_task.py b/api/tasks/add_document_to_index_task.py index ae55c9ee03..c9d4673c0a 100644 --- a/api/tasks/add_document_to_index_task.py +++ b/api/tasks/add_document_to_index_task.py @@ -3,6 +3,7 @@ import time import click from celery import shared_task +from sqlalchemy import delete, select, update from core.db.session_factory import session_factory from core.rag.index_processor.constant.doc_type import DocType @@ -30,7 +31,9 @@ def add_document_to_index_task(dataset_document_id: str): start_at = time.perf_counter() with session_factory.create_session() as session: - dataset_document = session.query(DatasetDocument).where(DatasetDocument.id == dataset_document_id).first() + dataset_document = session.scalar( + select(DatasetDocument).where(DatasetDocument.id == dataset_document_id).limit(1) + ) if not dataset_document: logger.info(click.style(f"Document not found: {dataset_document_id}", fg="red")) return @@ -45,15 +48,14 @@ def add_document_to_index_task(dataset_document_id: str): if not dataset: raise Exception(f"Document {dataset_document.id} dataset {dataset_document.dataset_id} doesn't exist.") - segments = ( - session.query(DocumentSegment) + segments = session.scalars( + select(DocumentSegment) .where( DocumentSegment.document_id == dataset_document.id, DocumentSegment.status == SegmentStatus.COMPLETED, ) .order_by(DocumentSegment.position.asc()) - .all() - ) + ).all() documents = [] multimodal_documents = [] @@ -104,18 +106,15 @@ def add_document_to_index_task(dataset_document_id: str): index_processor.load(dataset, documents, multimodal_documents=multimodal_documents) # delete auto disable log - session.query(DatasetAutoDisableLog).where( - DatasetAutoDisableLog.document_id == dataset_document.id - ).delete() + session.execute( + delete(DatasetAutoDisableLog).where(DatasetAutoDisableLog.document_id == dataset_document.id) + ) # update segment to enable - session.query(DocumentSegment).where(DocumentSegment.document_id == dataset_document.id).update( - { - DocumentSegment.enabled: True, - DocumentSegment.disabled_at: None, - DocumentSegment.disabled_by: None, - DocumentSegment.updated_at: naive_utc_now(), - } + session.execute( + update(DocumentSegment) + .where(DocumentSegment.document_id == dataset_document.id) + .values(enabled=True, disabled_at=None, disabled_by=None, updated_at=naive_utc_now()) ) session.commit() diff --git a/api/tasks/app_generate/workflow_execute_task.py b/api/tasks/app_generate/workflow_execute_task.py index 8f2f5f261e..c22e7e9918 100644 --- a/api/tasks/app_generate/workflow_execute_task.py +++ b/api/tasks/app_generate/workflow_execute_task.py @@ -7,7 +7,6 @@ from typing import Annotated, Any from celery import shared_task from flask import current_app, json -from graphon.runtime import GraphRuntimeState from pydantic import BaseModel, Discriminator, Field, Tag from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker @@ -23,6 +22,7 @@ from core.app.entities.app_invoke_entities import ( from core.app.layers.pause_state_persist_layer import PauseStateLayerConfig, WorkflowResumptionContext from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.runtime import GraphRuntimeState from libs.flask_utils import set_login_user from models.account import Account from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom diff --git a/api/tasks/async_workflow_tasks.py b/api/tasks/async_workflow_tasks.py index 0a73c91279..5809268992 100644 --- a/api/tasks/async_workflow_tasks.py +++ b/api/tasks/async_workflow_tasks.py @@ -7,15 +7,15 @@ with appropriate retry policies and error handling. import logging from datetime import UTC, datetime -from typing import Any +from typing import Any, NotRequired from celery import shared_task -from graphon.runtime import GraphRuntimeState from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker +from typing_extensions import TypedDict from configs import dify_config -from core.app.apps.workflow.app_generator import SKIP_PREPARE_USER_INPUTS_KEY, WorkflowAppGenerator +from core.app.apps.workflow.app_generator import WorkflowAppGenerator from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.app.layers.pause_state_persist_layer import PauseStateLayerConfig, WorkflowResumptionContext from core.app.layers.timeslice_layer import TimeSliceLayer @@ -23,6 +23,7 @@ from core.app.layers.trigger_post_layer import TriggerPostLayer from core.db.session_factory import session_factory from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.runtime import GraphRuntimeState from models.account import Account from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom, WorkflowTriggerStatus from models.model import App, EndUser, Tenant @@ -42,6 +43,13 @@ from tasks.workflow_cfs_scheduler.entities import AsyncWorkflowQueue, AsyncWorkf logger = logging.getLogger(__name__) +class WorkflowGeneratorArgsDict(TypedDict): + inputs: dict[str, Any] + files: list[Any] + _skip_prepare_user_inputs: bool + workflow_id: NotRequired[str] + + @shared_task(queue=AsyncWorkflowQueue.PROFESSIONAL_QUEUE) def execute_workflow_professional(task_data_dict: dict[str, Any]): """Execute workflow for professional tier with highest priority""" @@ -90,15 +98,13 @@ def execute_workflow_sandbox(task_data_dict: dict[str, Any]): ) -def _build_generator_args(trigger_data: TriggerData) -> dict[str, Any]: +def _build_generator_args(trigger_data: TriggerData) -> WorkflowGeneratorArgsDict: """Build args passed into WorkflowAppGenerator.generate for Celery executions.""" - - args: dict[str, Any] = { + return { "inputs": dict(trigger_data.inputs), "files": list(trigger_data.files), - SKIP_PREPARE_USER_INPUTS_KEY: True, + "_skip_prepare_user_inputs": True, } - return args def _execute_workflow_common( @@ -156,7 +162,12 @@ def _execute_workflow_common( state_owner_user_id=workflow.created_by, ) - # Execute the workflow with the trigger type + # NOTE (hj24) + # Release the transaction before the blocking generate() call, + # otherwise the connection stays "idle in transaction" for hours. + session.commit() + # NOTE END + generator.generate( app_model=app_model, workflow=workflow, diff --git a/api/tasks/batch_clean_document_task.py b/api/tasks/batch_clean_document_task.py index 66aafc30b9..56c371fcc1 100644 --- a/api/tasks/batch_clean_document_task.py +++ b/api/tasks/batch_clean_document_task.py @@ -1,9 +1,11 @@ import logging import time +from typing import cast import click from celery import shared_task from sqlalchemy import delete, select +from sqlalchemy.engine import CursorResult from core.db.session_factory import session_factory from core.rag.index_processor.index_processor_factory import IndexProcessorFactory @@ -92,14 +94,16 @@ def batch_clean_document_task(document_ids: list[str], dataset_id: str, doc_form # ============ Step 3: Delete metadata binding (separate short transaction) ============ try: with session_factory.create_session() as session: - deleted_count = int( - session.query(DatasetMetadataBinding) - .where( - DatasetMetadataBinding.dataset_id == dataset_id, - DatasetMetadataBinding.document_id.in_(document_ids), - ) - .delete(synchronize_session=False) + result = cast( + CursorResult, + session.execute( + delete(DatasetMetadataBinding).where( + DatasetMetadataBinding.dataset_id == dataset_id, + DatasetMetadataBinding.document_id.in_(document_ids), + ) + ), ) + deleted_count = result.rowcount session.commit() logger.debug("Deleted %d metadata bindings for dataset_id: %s", deleted_count, dataset_id) except Exception: diff --git a/api/tasks/batch_create_segment_to_index_task.py b/api/tasks/batch_create_segment_to_index_task.py index 77feea47a2..beb23d8354 100644 --- a/api/tasks/batch_create_segment_to_index_task.py +++ b/api/tasks/batch_create_segment_to_index_task.py @@ -3,11 +3,11 @@ import tempfile import time import uuid from pathlib import Path +from typing import Any import click import pandas as pd from celery import shared_task -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import func, select from core.db.session_factory import session_factory @@ -15,6 +15,7 @@ from core.model_manager import ModelManager from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from extensions.ext_redis import redis_client from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType from libs import helper from libs.datetime_utils import naive_utc_now from models.dataset import Dataset, Document, DocumentSegment @@ -51,8 +52,8 @@ def batch_create_segment_to_index_task( # Initialize variables with default values upload_file_key: str | None = None - dataset_config: dict | None = None - document_config: dict | None = None + dataset_config: dict[str, Any] | None = None + document_config: dict[str, Any] | None = None with session_factory.create_session() as session: try: diff --git a/api/tasks/clean_dataset_task.py b/api/tasks/clean_dataset_task.py index 0d51a743ad..377d0e5cc7 100644 --- a/api/tasks/clean_dataset_task.py +++ b/api/tasks/clean_dataset_task.py @@ -112,7 +112,9 @@ def clean_dataset_task( segment_ids = [segment.id for segment in segments] for segment in segments: image_upload_file_ids = get_image_upload_file_ids(segment.content) - image_files = session.query(UploadFile).where(UploadFile.id.in_(image_upload_file_ids)).all() + image_files = session.scalars( + select(UploadFile).where(UploadFile.id.in_(image_upload_file_ids)) + ).all() for image_file in image_files: if image_file is None: continue @@ -150,20 +152,22 @@ def clean_dataset_task( ) session.execute(binding_delete_stmt) - session.query(DatasetProcessRule).where(DatasetProcessRule.dataset_id == dataset_id).delete() - session.query(DatasetQuery).where(DatasetQuery.dataset_id == dataset_id).delete() - session.query(AppDatasetJoin).where(AppDatasetJoin.dataset_id == dataset_id).delete() + session.execute(delete(DatasetProcessRule).where(DatasetProcessRule.dataset_id == dataset_id)) + session.execute(delete(DatasetQuery).where(DatasetQuery.dataset_id == dataset_id)) + session.execute(delete(AppDatasetJoin).where(AppDatasetJoin.dataset_id == dataset_id)) # delete dataset metadata - session.query(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset_id).delete() - session.query(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset_id).delete() + session.execute(delete(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset_id)) + session.execute(delete(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset_id)) # delete pipeline and workflow if pipeline_id: - session.query(Pipeline).where(Pipeline.id == pipeline_id).delete() - session.query(Workflow).where( - Workflow.tenant_id == tenant_id, - Workflow.app_id == pipeline_id, - Workflow.type == WorkflowType.RAG_PIPELINE, - ).delete() + session.execute(delete(Pipeline).where(Pipeline.id == pipeline_id)) + session.execute( + delete(Workflow).where( + Workflow.tenant_id == tenant_id, + Workflow.app_id == pipeline_id, + Workflow.type == WorkflowType.RAG_PIPELINE, + ) + ) # delete files if documents: file_ids = [] @@ -174,7 +178,7 @@ def clean_dataset_task( if data_source_info and "upload_file_id" in data_source_info: file_id = data_source_info["upload_file_id"] file_ids.append(file_id) - files = session.query(UploadFile).where(UploadFile.id.in_(file_ids)).all() + files = session.scalars(select(UploadFile).where(UploadFile.id.in_(file_ids))).all() for file in files: storage.delete(file.key) diff --git a/api/tasks/clean_document_task.py b/api/tasks/clean_document_task.py index a017e9114b..a657cd553a 100644 --- a/api/tasks/clean_document_task.py +++ b/api/tasks/clean_document_task.py @@ -32,7 +32,7 @@ def clean_document_task(document_id: str, dataset_id: str, doc_form: str, file_i with session_factory.create_session() as session: try: - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: raise Exception("Document has no dataset") @@ -63,7 +63,7 @@ def clean_document_task(document_id: str, dataset_id: str, doc_form: str, file_i if index_node_ids: index_processor = IndexProcessorFactory(doc_form).init_index_processor() with session_factory.create_session() as session: - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if dataset: index_processor.clean( dataset, index_node_ids, with_keywords=True, delete_child_chunks=True, delete_summaries=True @@ -94,7 +94,7 @@ def clean_document_task(document_id: str, dataset_id: str, doc_form: str, file_i with session_factory.create_session() as session, session.begin(): if file_id: - file = session.query(UploadFile).where(UploadFile.id == file_id).first() + file = session.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) if file: try: storage.delete(file.key) @@ -124,10 +124,12 @@ def clean_document_task(document_id: str, dataset_id: str, doc_form: str, file_i with session_factory.create_session() as session, session.begin(): # delete dataset metadata binding - session.query(DatasetMetadataBinding).where( - DatasetMetadataBinding.dataset_id == dataset_id, - DatasetMetadataBinding.document_id == document_id, - ).delete() + session.execute( + delete(DatasetMetadataBinding).where( + DatasetMetadataBinding.dataset_id == dataset_id, + DatasetMetadataBinding.document_id == document_id, + ) + ) end_at = time.perf_counter() logger.info( diff --git a/api/tasks/deal_dataset_index_update_task.py b/api/tasks/deal_dataset_index_update_task.py index fa844a8647..c9b5121a08 100644 --- a/api/tasks/deal_dataset_index_update_task.py +++ b/api/tasks/deal_dataset_index_update_task.py @@ -3,6 +3,7 @@ import time import click from celery import shared_task # type: ignore +from sqlalchemy import select, update from core.db.session_factory import session_factory from core.rag.index_processor.constant.doc_type import DocType @@ -26,43 +27,42 @@ def deal_dataset_index_update_task(dataset_id: str, action: str): with session_factory.create_session() as session: try: - dataset = session.query(Dataset).filter_by(id=dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: raise Exception("Dataset not found") index_type = dataset.doc_form or IndexStructureType.PARAGRAPH_INDEX index_processor = IndexProcessorFactory(index_type).init_index_processor() if action == "upgrade": - dataset_documents = ( - session.query(DatasetDocument) - .where( + dataset_documents = session.scalars( + select(DatasetDocument).where( DatasetDocument.dataset_id == dataset_id, DatasetDocument.indexing_status == "completed", DatasetDocument.enabled == True, DatasetDocument.archived == False, ) - .all() - ) + ).all() if dataset_documents: dataset_documents_ids = [doc.id for doc in dataset_documents] - session.query(DatasetDocument).where(DatasetDocument.id.in_(dataset_documents_ids)).update( - {"indexing_status": "indexing"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id.in_(dataset_documents_ids)) + .values(indexing_status="indexing") ) session.commit() for dataset_document in dataset_documents: try: # add from vector index - segments = ( - session.query(DocumentSegment) + segments = session.scalars( + select(DocumentSegment) .where( DocumentSegment.document_id == dataset_document.id, DocumentSegment.enabled == True, ) .order_by(DocumentSegment.position.asc()) - .all() - ) + ).all() if segments: documents = [] for segment in segments: @@ -81,32 +81,36 @@ def deal_dataset_index_update_task(dataset_id: str, action: str): # clean keywords index_processor.clean(dataset, None, with_keywords=True, delete_child_chunks=False) index_processor.load(dataset, documents, with_keywords=False) - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "completed"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="completed") ) session.commit() except Exception as e: - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "error", "error": str(e)}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="error", error=str(e)) ) session.commit() elif action == "update": - dataset_documents = ( - session.query(DatasetDocument) - .where( + dataset_documents = session.scalars( + select(DatasetDocument).where( DatasetDocument.dataset_id == dataset_id, DatasetDocument.indexing_status == "completed", DatasetDocument.enabled == True, DatasetDocument.archived == False, ) - .all() - ) + ).all() # add new index if dataset_documents: # update document status dataset_documents_ids = [doc.id for doc in dataset_documents] - session.query(DatasetDocument).where(DatasetDocument.id.in_(dataset_documents_ids)).update( - {"indexing_status": "indexing"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id.in_(dataset_documents_ids)) + .values(indexing_status="indexing") ) session.commit() @@ -116,15 +120,14 @@ def deal_dataset_index_update_task(dataset_id: str, action: str): for dataset_document in dataset_documents: # update from vector index try: - segments = ( - session.query(DocumentSegment) + segments = session.scalars( + select(DocumentSegment) .where( DocumentSegment.document_id == dataset_document.id, DocumentSegment.enabled == True, ) .order_by(DocumentSegment.position.asc()) - .all() - ) + ).all() if segments: documents = [] multimodal_documents = [] @@ -173,13 +176,17 @@ def deal_dataset_index_update_task(dataset_id: str, action: str): index_processor.load( dataset, documents, multimodal_documents=multimodal_documents, with_keywords=False ) - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "completed"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="completed") ) session.commit() except Exception as e: - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "error", "error": str(e)}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="error", error=str(e)) ) session.commit() else: diff --git a/api/tasks/deal_dataset_vector_index_task.py b/api/tasks/deal_dataset_vector_index_task.py index 0047e04a17..36605359dc 100644 --- a/api/tasks/deal_dataset_vector_index_task.py +++ b/api/tasks/deal_dataset_vector_index_task.py @@ -3,7 +3,7 @@ import time import click from celery import shared_task -from sqlalchemy import select +from sqlalchemy import select, update from core.db.session_factory import session_factory from core.rag.index_processor.constant.doc_type import DocType @@ -29,7 +29,7 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str): with session_factory.create_session() as session: try: - dataset = session.query(Dataset).filter_by(id=dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: raise Exception("Dataset not found") @@ -49,23 +49,24 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str): if dataset_documents: dataset_documents_ids = [doc.id for doc in dataset_documents] - session.query(DatasetDocument).where(DatasetDocument.id.in_(dataset_documents_ids)).update( - {"indexing_status": "indexing"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id.in_(dataset_documents_ids)) + .values(indexing_status="indexing") ) session.commit() for dataset_document in dataset_documents: try: # add from vector index - segments = ( - session.query(DocumentSegment) + segments = session.scalars( + select(DocumentSegment) .where( DocumentSegment.document_id == dataset_document.id, DocumentSegment.enabled == True, ) .order_by(DocumentSegment.position.asc()) - .all() - ) + ).all() if segments: documents = [] for segment in segments: @@ -82,13 +83,17 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str): documents.append(document) # save vector index index_processor.load(dataset, documents, with_keywords=False) - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "completed"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="completed") ) session.commit() except Exception as e: - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "error", "error": str(e)}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="error", error=str(e)) ) session.commit() elif action == "update": @@ -104,8 +109,10 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str): if dataset_documents: # update document status dataset_documents_ids = [doc.id for doc in dataset_documents] - session.query(DatasetDocument).where(DatasetDocument.id.in_(dataset_documents_ids)).update( - {"indexing_status": "indexing"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id.in_(dataset_documents_ids)) + .values(indexing_status="indexing") ) session.commit() @@ -115,15 +122,14 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str): for dataset_document in dataset_documents: # update from vector index try: - segments = ( - session.query(DocumentSegment) + segments = session.scalars( + select(DocumentSegment) .where( DocumentSegment.document_id == dataset_document.id, DocumentSegment.enabled == True, ) .order_by(DocumentSegment.position.asc()) - .all() - ) + ).all() if segments: documents = [] multimodal_documents = [] @@ -172,13 +178,17 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str): index_processor.load( dataset, documents, multimodal_documents=multimodal_documents, with_keywords=False ) - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "completed"}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="completed") ) session.commit() except Exception as e: - session.query(DatasetDocument).where(DatasetDocument.id == dataset_document.id).update( - {"indexing_status": "error", "error": str(e)}, synchronize_session=False + session.execute( + update(DatasetDocument) + .where(DatasetDocument.id == dataset_document.id) + .values(indexing_status="error", error=str(e)) ) session.commit() else: diff --git a/api/tasks/delete_conversation_task.py b/api/tasks/delete_conversation_task.py index 9664b8ac73..0b392f6096 100644 --- a/api/tasks/delete_conversation_task.py +++ b/api/tasks/delete_conversation_task.py @@ -3,6 +3,7 @@ import time import click from celery import shared_task +from sqlalchemy import delete from core.db.session_factory import session_factory from models import ConversationVariable @@ -29,29 +30,21 @@ def delete_conversation_related_data(conversation_id: str): with session_factory.create_session() as session: try: - session.query(MessageAnnotation).where(MessageAnnotation.conversation_id == conversation_id).delete( - synchronize_session=False + session.execute(delete(MessageAnnotation).where(MessageAnnotation.conversation_id == conversation_id)) + + session.execute(delete(MessageFeedback).where(MessageFeedback.conversation_id == conversation_id)) + + session.execute( + delete(ToolConversationVariables).where(ToolConversationVariables.conversation_id == conversation_id) ) - session.query(MessageFeedback).where(MessageFeedback.conversation_id == conversation_id).delete( - synchronize_session=False - ) + session.execute(delete(ToolFile).where(ToolFile.conversation_id == conversation_id)) - session.query(ToolConversationVariables).where( - ToolConversationVariables.conversation_id == conversation_id - ).delete(synchronize_session=False) + session.execute(delete(ConversationVariable).where(ConversationVariable.conversation_id == conversation_id)) - session.query(ToolFile).where(ToolFile.conversation_id == conversation_id).delete(synchronize_session=False) + session.execute(delete(Message).where(Message.conversation_id == conversation_id)) - session.query(ConversationVariable).where(ConversationVariable.conversation_id == conversation_id).delete( - synchronize_session=False - ) - - session.query(Message).where(Message.conversation_id == conversation_id).delete(synchronize_session=False) - - session.query(PinnedConversation).where(PinnedConversation.conversation_id == conversation_id).delete( - synchronize_session=False - ) + session.execute(delete(PinnedConversation).where(PinnedConversation.conversation_id == conversation_id)) session.commit() diff --git a/api/tasks/delete_segment_from_index_task.py b/api/tasks/delete_segment_from_index_task.py index a6a2dcebc8..306a23aeda 100644 --- a/api/tasks/delete_segment_from_index_task.py +++ b/api/tasks/delete_segment_from_index_task.py @@ -3,7 +3,7 @@ import time import click from celery import shared_task -from sqlalchemy import delete +from sqlalchemy import delete, select from core.db.session_factory import session_factory from core.rag.index_processor.index_processor_factory import IndexProcessorFactory @@ -29,12 +29,12 @@ def delete_segment_from_index_task( start_at = time.perf_counter() with session_factory.create_session() as session: try: - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: logging.warning("Dataset %s not found, skipping index cleanup", dataset_id) return - dataset_document = session.query(Document).where(Document.id == document_id).first() + dataset_document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) if not dataset_document: return @@ -60,11 +60,9 @@ def delete_segment_from_index_task( ) if dataset.is_multimodal: # delete segment attachment binding - segment_attachment_bindings = ( - session.query(SegmentAttachmentBinding) - .where(SegmentAttachmentBinding.segment_id.in_(segment_ids)) - .all() - ) + segment_attachment_bindings = session.scalars( + select(SegmentAttachmentBinding).where(SegmentAttachmentBinding.segment_id.in_(segment_ids)) + ).all() if segment_attachment_bindings: attachment_ids = [binding.attachment_id for binding in segment_attachment_bindings] index_processor.clean(dataset=dataset, node_ids=attachment_ids, with_keywords=False) @@ -77,7 +75,7 @@ def delete_segment_from_index_task( session.execute(segment_attachment_bind_delete_stmt) # delete upload file - session.query(UploadFile).where(UploadFile.id.in_(attachment_ids)).delete(synchronize_session=False) + session.execute(delete(UploadFile).where(UploadFile.id.in_(attachment_ids))) session.commit() end_at = time.perf_counter() diff --git a/api/tasks/disable_segments_from_index_task.py b/api/tasks/disable_segments_from_index_task.py index 3cc267e821..86e96ea3f0 100644 --- a/api/tasks/disable_segments_from_index_task.py +++ b/api/tasks/disable_segments_from_index_task.py @@ -3,7 +3,7 @@ import time import click from celery import shared_task -from sqlalchemy import select +from sqlalchemy import select, update from core.db.session_factory import session_factory from core.rag.index_processor.index_processor_factory import IndexProcessorFactory @@ -27,12 +27,12 @@ def disable_segments_from_index_task(segment_ids: list, dataset_id: str, documen start_at = time.perf_counter() with session_factory.create_session() as session: - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: logger.info(click.style(f"Dataset {dataset_id} not found, pass.", fg="cyan")) return - dataset_document = session.query(DatasetDocument).where(DatasetDocument.id == document_id).first() + dataset_document = session.scalar(select(DatasetDocument).where(DatasetDocument.id == document_id).limit(1)) if not dataset_document: logger.info(click.style(f"Document {document_id} not found, pass.", fg="cyan")) @@ -58,11 +58,9 @@ def disable_segments_from_index_task(segment_ids: list, dataset_id: str, documen index_node_ids = [segment.index_node_id for segment in segments] if dataset.is_multimodal: segment_ids = [segment.id for segment in segments] - segment_attachment_bindings = ( - session.query(SegmentAttachmentBinding) - .where(SegmentAttachmentBinding.segment_id.in_(segment_ids)) - .all() - ) + segment_attachment_bindings = session.scalars( + select(SegmentAttachmentBinding).where(SegmentAttachmentBinding.segment_id.in_(segment_ids)) + ).all() if segment_attachment_bindings: attachment_ids = [binding.attachment_id for binding in segment_attachment_bindings] index_node_ids.extend(attachment_ids) @@ -87,16 +85,14 @@ def disable_segments_from_index_task(segment_ids: list, dataset_id: str, documen logger.info(click.style(f"Segments removed from index latency: {end_at - start_at}", fg="green")) except Exception: # update segment error msg - session.query(DocumentSegment).where( - DocumentSegment.id.in_(segment_ids), - DocumentSegment.dataset_id == dataset_id, - DocumentSegment.document_id == document_id, - ).update( - { - "disabled_at": None, - "disabled_by": None, - "enabled": True, - } + session.execute( + update(DocumentSegment) + .where( + DocumentSegment.id.in_(segment_ids), + DocumentSegment.dataset_id == dataset_id, + DocumentSegment.document_id == document_id, + ) + .values(disabled_at=None, disabled_by=None, enabled=True) ) session.commit() finally: diff --git a/api/tasks/document_indexing_sync_task.py b/api/tasks/document_indexing_sync_task.py index f99e90062f..90c80be3a1 100644 --- a/api/tasks/document_indexing_sync_task.py +++ b/api/tasks/document_indexing_sync_task.py @@ -32,7 +32,9 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): tenant_id = None with session_factory.create_session() as session, session.begin(): - document = session.query(Document).where(Document.id == document_id, Document.dataset_id == dataset_id).first() + document = session.scalar( + select(Document).where(Document.id == document_id, Document.dataset_id == dataset_id).limit(1) + ) if not document: logger.info(click.style(f"Document not found: {document_id}", fg="red")) @@ -42,7 +44,7 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): logger.info(click.style(f"Document {document_id} is already being processed, skipping", fg="yellow")) return - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: raise Exception("Dataset not found") @@ -87,7 +89,7 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): ) with session_factory.create_session() as session, session.begin(): - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) if document: document.indexing_status = IndexingStatus.ERROR document.error = "Datasource credential not found. Please reconnect your Notion workspace." @@ -112,7 +114,7 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): try: index_processor = IndexProcessorFactory(index_type).init_index_processor() with session_factory.create_session() as session: - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if dataset: index_processor.clean(dataset, index_node_ids, with_keywords=True, delete_child_chunks=True) logger.info(click.style(f"Cleaned vector index for document {document_id}", fg="green")) @@ -120,7 +122,7 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): logger.exception("Failed to clean vector index for document %s", document_id) with session_factory.create_session() as session, session.begin(): - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) if not document: logger.warning(click.style(f"Document {document_id} not found during sync", fg="yellow")) return @@ -140,7 +142,7 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): try: indexing_runner = IndexingRunner() with session_factory.create_session() as session: - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) if document: indexing_runner.run([document]) end_at = time.perf_counter() @@ -150,7 +152,7 @@ def document_indexing_sync_task(dataset_id: str, document_id: str): except Exception as e: logger.exception("document_indexing_sync_task failed for document_id: %s", document_id) with session_factory.create_session() as session, session.begin(): - document = session.query(Document).filter_by(id=document_id).first() + document = session.scalar(select(Document).where(Document.id == document_id).limit(1)) if document: document.indexing_status = IndexingStatus.ERROR document.error = str(e) diff --git a/api/tasks/document_indexing_task.py b/api/tasks/document_indexing_task.py index 23a80fa106..31dad7937c 100644 --- a/api/tasks/document_indexing_task.py +++ b/api/tasks/document_indexing_task.py @@ -5,6 +5,7 @@ from typing import Any, Protocol import click from celery import current_app, shared_task +from sqlalchemy import select from configs import dify_config from core.db.session_factory import session_factory @@ -53,11 +54,10 @@ def _document_indexing(dataset_id: str, document_ids: Sequence[str]): Usage: _document_indexing(dataset_id, document_ids) """ - documents = [] start_at = time.perf_counter() with session_factory.create_session() as session: - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: logger.info(click.style(f"Dataset is not found: {dataset_id}", fg="yellow")) return @@ -79,8 +79,8 @@ def _document_indexing(dataset_id: str, document_ids: Sequence[str]): ) except Exception as e: for document_id in document_ids: - document = ( - session.query(Document).where(Document.id == document_id, Document.dataset_id == dataset_id).first() + document = session.scalar( + select(Document).where(Document.id == document_id, Document.dataset_id == dataset_id).limit(1) ) if document: document.indexing_status = IndexingStatus.ERROR @@ -92,8 +92,10 @@ def _document_indexing(dataset_id: str, document_ids: Sequence[str]): # Phase 1: Update status to parsing (short transaction) with session_factory.create_session() as session, session.begin(): - documents = ( - session.query(Document).where(Document.id.in_(document_ids), Document.dataset_id == dataset_id).all() + documents: list[Document] = list( + session.scalars( + select(Document).where(Document.id.in_(document_ids), Document.dataset_id == dataset_id) + ).all() ) for document in documents: @@ -122,7 +124,7 @@ def _document_indexing(dataset_id: str, document_ids: Sequence[str]): # Trigger summary index generation for completed documents if enabled # Only generate for high_quality indexing technique and when summary_index_setting is enabled # Re-query dataset to get latest summary_index_setting (in case it was updated) - dataset = session.query(Dataset).where(Dataset.id == dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: logger.warning("Dataset %s not found after indexing", dataset_id) return @@ -134,10 +136,10 @@ def _document_indexing(dataset_id: str, document_ids: Sequence[str]): session.expire_all() # Check each document's indexing status and trigger summary generation if completed - documents = ( - session.query(Document) - .where(Document.id.in_(document_ids), Document.dataset_id == dataset_id) - .all() + documents = list( + session.scalars( + select(Document).where(Document.id.in_(document_ids), Document.dataset_id == dataset_id) + ).all() ) for document in documents: diff --git a/api/tasks/human_input_timeout_tasks.py b/api/tasks/human_input_timeout_tasks.py index ca73b4d374..fd743205a1 100644 --- a/api/tasks/human_input_timeout_tasks.py +++ b/api/tasks/human_input_timeout_tasks.py @@ -2,8 +2,6 @@ import logging from datetime import timedelta from celery import shared_task -from graphon.enums import WorkflowExecutionStatus -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from sqlalchemy import or_, select from sqlalchemy.orm import sessionmaker @@ -11,6 +9,8 @@ from configs import dify_config from core.repositories.human_input_repository import HumanInputFormSubmissionRepository from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.enums import WorkflowExecutionStatus +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import ensure_naive_utc, naive_utc_now from models.human_input import HumanInputForm from models.workflow import WorkflowPause, WorkflowRun diff --git a/api/tasks/mail_human_input_delivery_task.py b/api/tasks/mail_human_input_delivery_task.py index a316eec7b9..f8ae3f4b6e 100644 --- a/api/tasks/mail_human_input_delivery_task.py +++ b/api/tasks/mail_human_input_delivery_task.py @@ -6,7 +6,6 @@ from typing import Any import click from celery import shared_task -from graphon.runtime import GraphRuntimeState, VariablePool from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -15,6 +14,7 @@ from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext from core.workflow.human_input_compat import EmailDeliveryConfig, EmailDeliveryMethod from extensions.ext_database import db from extensions.ext_mail import mail +from graphon.runtime import GraphRuntimeState, VariablePool from models.human_input import ( DeliveryMethodType, HumanInputDelivery, diff --git a/api/tasks/mail_workflow_comment_task.py b/api/tasks/mail_workflow_comment_task.py new file mode 100644 index 0000000000..36d51f0514 --- /dev/null +++ b/api/tasks/mail_workflow_comment_task.py @@ -0,0 +1,65 @@ +import logging +import time + +import click +from celery import shared_task + +from extensions.ext_mail import mail +from libs.email_i18n import EmailType, get_email_i18n_service + +logger = logging.getLogger(__name__) + + +@shared_task(queue="mail") +def send_workflow_comment_mention_email_task( + language: str, + to: str, + mentioned_name: str, + commenter_name: str, + app_name: str, + comment_content: str, + app_url: str, +): + """ + Send workflow comment mention email with internationalization support. + + Args: + language: Language code for email localization + to: Recipient email address + mentioned_name: Name of the mentioned user + commenter_name: Name of the comment author + app_name: Name of the app where the comment was made + comment_content: Comment content excerpt + app_url: Link to the app workflow page + """ + if not mail.is_inited(): + return + + logger.info(click.style(f"Start workflow comment mention mail to {to}", fg="green")) + start_at = time.perf_counter() + + try: + email_service = get_email_i18n_service() + email_service.send_email( + email_type=EmailType.WORKFLOW_COMMENT_MENTION, + language_code=language, + to=to, + template_context={ + "to": to, + "mentioned_name": mentioned_name, + "commenter_name": commenter_name, + "app_name": app_name, + "comment_content": comment_content, + "app_url": app_url, + }, + ) + + end_at = time.perf_counter() + logger.info( + click.style( + f"Send workflow comment mention mail to {to} succeeded: latency: {end_at - start_at}", + fg="green", + ) + ) + except Exception: + logger.exception("workflow comment mention email to %s failed", to) diff --git a/api/tasks/regenerate_summary_index_task.py b/api/tasks/regenerate_summary_index_task.py index 6f490ab7ea..e794195c92 100644 --- a/api/tasks/regenerate_summary_index_task.py +++ b/api/tasks/regenerate_summary_index_task.py @@ -47,7 +47,7 @@ def regenerate_summary_index_task( try: with session_factory.create_session() as session: - dataset = session.query(Dataset).filter_by(id=dataset_id).first() + dataset = session.scalar(select(Dataset).where(Dataset.id == dataset_id).limit(1)) if not dataset: logger.error(click.style(f"Dataset not found: {dataset_id}", fg="red")) return @@ -84,8 +84,8 @@ def regenerate_summary_index_task( # For embedding_model change: directly query all segments with existing summaries # Don't require document indexing_status == "completed" # Include summaries with status "completed" or "error" (if they have content) - segments_with_summaries = ( - session.query(DocumentSegment, DocumentSegmentSummary) + segments_with_summaries = session.execute( + select(DocumentSegment, DocumentSegmentSummary) .join( DocumentSegmentSummary, DocumentSegment.id == DocumentSegmentSummary.chunk_id, @@ -110,8 +110,7 @@ def regenerate_summary_index_task( DatasetDocument.doc_form != IndexStructureType.QA_INDEX, # Skip qa_model documents ) .order_by(DocumentSegment.document_id.asc(), DocumentSegment.position.asc()) - .all() - ) + ).all() if not segments_with_summaries: logger.info( @@ -215,8 +214,8 @@ def regenerate_summary_index_task( try: # Get all segments with existing summaries - segments = ( - session.query(DocumentSegment) + segments = session.scalars( + select(DocumentSegment) .join( DocumentSegmentSummary, DocumentSegment.id == DocumentSegmentSummary.chunk_id, @@ -229,8 +228,7 @@ def regenerate_summary_index_task( DocumentSegmentSummary.dataset_id == dataset_id, ) .order_by(DocumentSegment.position.asc()) - .all() - ) + ).all() if not segments: continue @@ -245,13 +243,13 @@ def regenerate_summary_index_task( summary_record = None try: # Get existing summary record - summary_record = ( - session.query(DocumentSegmentSummary) - .filter_by( - chunk_id=segment.id, - dataset_id=dataset_id, + summary_record = session.scalar( + select(DocumentSegmentSummary) + .where( + DocumentSegmentSummary.chunk_id == segment.id, + DocumentSegmentSummary.dataset_id == dataset_id, ) - .first() + .limit(1) ) if not summary_record: diff --git a/api/tasks/remove_app_and_related_data_task.py b/api/tasks/remove_app_and_related_data_task.py index b1840662ff..5f1f0952af 100644 --- a/api/tasks/remove_app_and_related_data_task.py +++ b/api/tasks/remove_app_and_related_data_task.py @@ -6,7 +6,7 @@ from typing import Any, cast import click import sqlalchemy as sa from celery import shared_task -from sqlalchemy import delete +from sqlalchemy import delete, select from sqlalchemy.engine import CursorResult from sqlalchemy.exc import SQLAlchemyError from sqlalchemy.orm import sessionmaker @@ -99,7 +99,11 @@ def remove_app_and_related_data_task(self, tenant_id: str, app_id: str): def _delete_app_model_configs(tenant_id: str, app_id: str): def del_model_config(session, model_config_id: str): - session.query(AppModelConfig).where(AppModelConfig.id == model_config_id).delete(synchronize_session=False) + session.execute( + delete(AppModelConfig) + .where(AppModelConfig.id == model_config_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from app_model_configs where app_id=:app_id limit 1000""", @@ -111,7 +115,7 @@ def _delete_app_model_configs(tenant_id: str, app_id: str): def _delete_app_site(tenant_id: str, app_id: str): def del_site(session, site_id: str): - session.query(Site).where(Site.id == site_id).delete(synchronize_session=False) + session.execute(delete(Site).where(Site.id == site_id).execution_options(synchronize_session=False)) _delete_records( """select id from sites where app_id=:app_id limit 1000""", @@ -123,7 +127,9 @@ def _delete_app_site(tenant_id: str, app_id: str): def _delete_app_mcp_servers(tenant_id: str, app_id: str): def del_mcp_server(session, mcp_server_id: str): - session.query(AppMCPServer).where(AppMCPServer.id == mcp_server_id).delete(synchronize_session=False) + session.execute( + delete(AppMCPServer).where(AppMCPServer.id == mcp_server_id).execution_options(synchronize_session=False) + ) _delete_records( """select id from app_mcp_servers where app_id=:app_id limit 1000""", @@ -136,12 +142,14 @@ def _delete_app_mcp_servers(tenant_id: str, app_id: str): def _delete_app_api_tokens(tenant_id: str, app_id: str): def del_api_token(session, api_token_id: str): # Fetch token details for cache invalidation - token_obj = session.query(ApiToken).where(ApiToken.id == api_token_id).first() + token_obj = session.scalar(select(ApiToken).where(ApiToken.id == api_token_id).limit(1)) if token_obj: # Invalidate cache before deletion ApiTokenCache.delete(token_obj.token, token_obj.type) - session.query(ApiToken).where(ApiToken.id == api_token_id).delete(synchronize_session=False) + session.execute( + delete(ApiToken).where(ApiToken.id == api_token_id).execution_options(synchronize_session=False) + ) _delete_records( """select id from api_tokens where app_id=:app_id limit 1000""", @@ -153,7 +161,9 @@ def _delete_app_api_tokens(tenant_id: str, app_id: str): def _delete_installed_apps(tenant_id: str, app_id: str): def del_installed_app(session, installed_app_id: str): - session.query(InstalledApp).where(InstalledApp.id == installed_app_id).delete(synchronize_session=False) + session.execute( + delete(InstalledApp).where(InstalledApp.id == installed_app_id).execution_options(synchronize_session=False) + ) _delete_records( """select id from installed_apps where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -165,7 +175,11 @@ def _delete_installed_apps(tenant_id: str, app_id: str): def _delete_recommended_apps(tenant_id: str, app_id: str): def del_recommended_app(session, recommended_app_id: str): - session.query(RecommendedApp).where(RecommendedApp.id == recommended_app_id).delete(synchronize_session=False) + session.execute( + delete(RecommendedApp) + .where(RecommendedApp.id == recommended_app_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from recommended_apps where app_id=:app_id limit 1000""", @@ -177,8 +191,10 @@ def _delete_recommended_apps(tenant_id: str, app_id: str): def _delete_app_annotation_data(tenant_id: str, app_id: str): def del_annotation_hit_history(session, annotation_hit_history_id: str): - session.query(AppAnnotationHitHistory).where(AppAnnotationHitHistory.id == annotation_hit_history_id).delete( - synchronize_session=False + session.execute( + delete(AppAnnotationHitHistory) + .where(AppAnnotationHitHistory.id == annotation_hit_history_id) + .execution_options(synchronize_session=False) ) _delete_records( @@ -189,8 +205,10 @@ def _delete_app_annotation_data(tenant_id: str, app_id: str): ) def del_annotation_setting(session, annotation_setting_id: str): - session.query(AppAnnotationSetting).where(AppAnnotationSetting.id == annotation_setting_id).delete( - synchronize_session=False + session.execute( + delete(AppAnnotationSetting) + .where(AppAnnotationSetting.id == annotation_setting_id) + .execution_options(synchronize_session=False) ) _delete_records( @@ -203,7 +221,11 @@ def _delete_app_annotation_data(tenant_id: str, app_id: str): def _delete_app_dataset_joins(tenant_id: str, app_id: str): def del_dataset_join(session, dataset_join_id: str): - session.query(AppDatasetJoin).where(AppDatasetJoin.id == dataset_join_id).delete(synchronize_session=False) + session.execute( + delete(AppDatasetJoin) + .where(AppDatasetJoin.id == dataset_join_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from app_dataset_joins where app_id=:app_id limit 1000""", @@ -215,7 +237,7 @@ def _delete_app_dataset_joins(tenant_id: str, app_id: str): def _delete_app_workflows(tenant_id: str, app_id: str): def del_workflow(session, workflow_id: str): - session.query(Workflow).where(Workflow.id == workflow_id).delete(synchronize_session=False) + session.execute(delete(Workflow).where(Workflow.id == workflow_id).execution_options(synchronize_session=False)) _delete_records( """select id from workflows where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -255,7 +277,11 @@ def _delete_app_workflow_node_executions(tenant_id: str, app_id: str): def _delete_app_workflow_app_logs(tenant_id: str, app_id: str): def del_workflow_app_log(session, workflow_app_log_id: str): - session.query(WorkflowAppLog).where(WorkflowAppLog.id == workflow_app_log_id).delete(synchronize_session=False) + session.execute( + delete(WorkflowAppLog) + .where(WorkflowAppLog.id == workflow_app_log_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from workflow_app_logs where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -267,8 +293,10 @@ def _delete_app_workflow_app_logs(tenant_id: str, app_id: str): def _delete_app_workflow_archive_logs(tenant_id: str, app_id: str): def del_workflow_archive_log(session, workflow_archive_log_id: str): - session.query(WorkflowArchiveLog).where(WorkflowArchiveLog.id == workflow_archive_log_id).delete( - synchronize_session=False + session.execute( + delete(WorkflowArchiveLog) + .where(WorkflowArchiveLog.id == workflow_archive_log_id) + .execution_options(synchronize_session=False) ) _delete_records( @@ -306,10 +334,14 @@ def _delete_archived_workflow_run_files(tenant_id: str, app_id: str): def _delete_app_conversations(tenant_id: str, app_id: str): def del_conversation(session, conversation_id: str): - session.query(PinnedConversation).where(PinnedConversation.conversation_id == conversation_id).delete( - synchronize_session=False + session.execute( + delete(PinnedConversation) + .where(PinnedConversation.conversation_id == conversation_id) + .execution_options(synchronize_session=False) + ) + session.execute( + delete(Conversation).where(Conversation.id == conversation_id).execution_options(synchronize_session=False) ) - session.query(Conversation).where(Conversation.id == conversation_id).delete(synchronize_session=False) _delete_records( """select id from conversations where app_id=:app_id limit 1000""", @@ -329,17 +361,35 @@ def _delete_conversation_variables(*, app_id: str): def _delete_app_messages(tenant_id: str, app_id: str): def del_message(session, message_id: str): - session.query(MessageFeedback).where(MessageFeedback.message_id == message_id).delete(synchronize_session=False) - session.query(MessageAnnotation).where(MessageAnnotation.message_id == message_id).delete( - synchronize_session=False + session.execute( + delete(MessageFeedback) + .where(MessageFeedback.message_id == message_id) + .execution_options(synchronize_session=False) ) - session.query(MessageChain).where(MessageChain.message_id == message_id).delete(synchronize_session=False) - session.query(MessageAgentThought).where(MessageAgentThought.message_id == message_id).delete( - synchronize_session=False + session.execute( + delete(MessageAnnotation) + .where(MessageAnnotation.message_id == message_id) + .execution_options(synchronize_session=False) ) - session.query(MessageFile).where(MessageFile.message_id == message_id).delete(synchronize_session=False) - session.query(SavedMessage).where(SavedMessage.message_id == message_id).delete(synchronize_session=False) - session.query(Message).where(Message.id == message_id).delete() + session.execute( + delete(MessageChain) + .where(MessageChain.message_id == message_id) + .execution_options(synchronize_session=False) + ) + session.execute( + delete(MessageAgentThought) + .where(MessageAgentThought.message_id == message_id) + .execution_options(synchronize_session=False) + ) + session.execute( + delete(MessageFile).where(MessageFile.message_id == message_id).execution_options(synchronize_session=False) + ) + session.execute( + delete(SavedMessage) + .where(SavedMessage.message_id == message_id) + .execution_options(synchronize_session=False) + ) + session.execute(delete(Message).where(Message.id == message_id).execution_options(synchronize_session=False)) _delete_records( """select id from messages where app_id=:app_id limit 1000""", @@ -351,8 +401,10 @@ def _delete_app_messages(tenant_id: str, app_id: str): def _delete_workflow_tool_providers(tenant_id: str, app_id: str): def del_tool_provider(session, tool_provider_id: str): - session.query(WorkflowToolProvider).where(WorkflowToolProvider.id == tool_provider_id).delete( - synchronize_session=False + session.execute( + delete(WorkflowToolProvider) + .where(WorkflowToolProvider.id == tool_provider_id) + .execution_options(synchronize_session=False) ) _delete_records( @@ -365,7 +417,9 @@ def _delete_workflow_tool_providers(tenant_id: str, app_id: str): def _delete_app_tag_bindings(tenant_id: str, app_id: str): def del_tag_binding(session, tag_binding_id: str): - session.query(TagBinding).where(TagBinding.id == tag_binding_id).delete(synchronize_session=False) + session.execute( + delete(TagBinding).where(TagBinding.id == tag_binding_id).execution_options(synchronize_session=False) + ) _delete_records( """select id from tag_bindings where tenant_id=:tenant_id and target_id=:app_id limit 1000""", @@ -377,7 +431,7 @@ def _delete_app_tag_bindings(tenant_id: str, app_id: str): def _delete_end_users(tenant_id: str, app_id: str): def del_end_user(session, end_user_id: str): - session.query(EndUser).where(EndUser.id == end_user_id).delete(synchronize_session=False) + session.execute(delete(EndUser).where(EndUser.id == end_user_id).execution_options(synchronize_session=False)) _delete_records( """select id from end_users where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -389,7 +443,11 @@ def _delete_end_users(tenant_id: str, app_id: str): def _delete_trace_app_configs(tenant_id: str, app_id: str): def del_trace_app_config(session, trace_app_config_id: str): - session.query(TraceAppConfig).where(TraceAppConfig.id == trace_app_config_id).delete(synchronize_session=False) + session.execute( + delete(TraceAppConfig) + .where(TraceAppConfig.id == trace_app_config_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from trace_app_config where app_id=:app_id limit 1000""", @@ -545,7 +603,9 @@ def _delete_draft_variable_offload_data(session, file_ids: list[str]) -> int: def _delete_app_triggers(tenant_id: str, app_id: str): def del_app_trigger(session, trigger_id: str): - session.query(AppTrigger).where(AppTrigger.id == trigger_id).delete(synchronize_session=False) + session.execute( + delete(AppTrigger).where(AppTrigger.id == trigger_id).execution_options(synchronize_session=False) + ) _delete_records( """select id from app_triggers where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -557,8 +617,10 @@ def _delete_app_triggers(tenant_id: str, app_id: str): def _delete_workflow_plugin_triggers(tenant_id: str, app_id: str): def del_plugin_trigger(session, trigger_id: str): - session.query(WorkflowPluginTrigger).where(WorkflowPluginTrigger.id == trigger_id).delete( - synchronize_session=False + session.execute( + delete(WorkflowPluginTrigger) + .where(WorkflowPluginTrigger.id == trigger_id) + .execution_options(synchronize_session=False) ) _delete_records( @@ -571,8 +633,10 @@ def _delete_workflow_plugin_triggers(tenant_id: str, app_id: str): def _delete_workflow_webhook_triggers(tenant_id: str, app_id: str): def del_webhook_trigger(session, trigger_id: str): - session.query(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.id == trigger_id).delete( - synchronize_session=False + session.execute( + delete(WorkflowWebhookTrigger) + .where(WorkflowWebhookTrigger.id == trigger_id) + .execution_options(synchronize_session=False) ) _delete_records( @@ -585,7 +649,11 @@ def _delete_workflow_webhook_triggers(tenant_id: str, app_id: str): def _delete_workflow_schedule_plans(tenant_id: str, app_id: str): def del_schedule_plan(session, plan_id: str): - session.query(WorkflowSchedulePlan).where(WorkflowSchedulePlan.id == plan_id).delete(synchronize_session=False) + session.execute( + delete(WorkflowSchedulePlan) + .where(WorkflowSchedulePlan.id == plan_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from workflow_schedule_plans where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -597,7 +665,11 @@ def _delete_workflow_schedule_plans(tenant_id: str, app_id: str): def _delete_workflow_trigger_logs(tenant_id: str, app_id: str): def del_trigger_log(session, log_id: str): - session.query(WorkflowTriggerLog).where(WorkflowTriggerLog.id == log_id).delete(synchronize_session=False) + session.execute( + delete(WorkflowTriggerLog) + .where(WorkflowTriggerLog.id == log_id) + .execution_options(synchronize_session=False) + ) _delete_records( """select id from workflow_trigger_logs where tenant_id=:tenant_id and app_id=:app_id limit 1000""", @@ -607,7 +679,7 @@ def _delete_workflow_trigger_logs(tenant_id: str, app_id: str): ) -def _delete_records(query_sql: str, params: dict, delete_func: Callable, name: str) -> None: +def _delete_records(query_sql: str, params: dict[str, Any], delete_func: Callable, name: str) -> None: while True: with session_factory.create_session() as session: rs = session.execute(sa.text(query_sql), params) diff --git a/api/tasks/trigger_processing_tasks.py b/api/tasks/trigger_processing_tasks.py index b9f382eccf..b0cbc54db3 100644 --- a/api/tasks/trigger_processing_tasks.py +++ b/api/tasks/trigger_processing_tasks.py @@ -12,7 +12,6 @@ from datetime import UTC, datetime from typing import Any from celery import shared_task -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import func, select from sqlalchemy.orm import Session @@ -29,6 +28,7 @@ from core.trigger.provider import PluginTriggerProviderController from core.trigger.trigger_manager import TriggerManager from core.workflow.nodes.trigger_plugin.entities import TriggerEventNodeData from enums.quota_type import QuotaType +from graphon.enums import WorkflowExecutionStatus from models.enums import ( AppTriggerType, CreatorUserRole, diff --git a/api/tasks/workflow_execution_tasks.py b/api/tasks/workflow_execution_tasks.py index 0c7f74c180..5ca04fd7c2 100644 --- a/api/tasks/workflow_execution_tasks.py +++ b/api/tasks/workflow_execution_tasks.py @@ -7,13 +7,14 @@ improving performance by offloading storage operations to background workers. import json import logging +from typing import Any from celery import shared_task -from graphon.entities import WorkflowExecution -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import select from core.db.session_factory import session_factory +from graphon.entities import WorkflowExecution +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from models import CreatorUserRole, WorkflowRun from models.enums import WorkflowRunTriggeredFrom @@ -23,7 +24,7 @@ logger = logging.getLogger(__name__) @shared_task(queue="workflow_storage", bind=True, max_retries=3, default_retry_delay=60) def save_workflow_execution_task( self, - execution_data: dict, + execution_data: dict[str, Any], tenant_id: str, app_id: str, triggered_from: str, diff --git a/api/tasks/workflow_node_execution_tasks.py b/api/tasks/workflow_node_execution_tasks.py index f25ebe3bae..0d5475a56d 100644 --- a/api/tasks/workflow_node_execution_tasks.py +++ b/api/tasks/workflow_node_execution_tasks.py @@ -7,15 +7,16 @@ improving performance by offloading storage operations to background workers. import json import logging +from typing import Any from celery import shared_task +from sqlalchemy import select + +from core.db.session_factory import session_factory from graphon.entities.workflow_node_execution import ( WorkflowNodeExecution, ) from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter -from sqlalchemy import select - -from core.db.session_factory import session_factory from models import CreatorUserRole, WorkflowNodeExecutionModel from models.workflow import WorkflowNodeExecutionTriggeredFrom @@ -25,7 +26,7 @@ logger = logging.getLogger(__name__) @shared_task(queue="workflow_storage", bind=True, max_retries=3, default_retry_delay=60) def save_workflow_node_execution_task( self, - execution_data: dict, + execution_data: dict[str, Any], tenant_id: str, app_id: str, triggered_from: str, diff --git a/api/templates/without-brand/workflow_comment_mention_template_en-US.html b/api/templates/without-brand/workflow_comment_mention_template_en-US.html new file mode 100644 index 0000000000..1ef8fe4e3f --- /dev/null +++ b/api/templates/without-brand/workflow_comment_mention_template_en-US.html @@ -0,0 +1,119 @@ + + + + + + + + +
+
+ Dify Logo +
+

You were mentioned in a workflow comment

+
+

Hi {{ mentioned_name }},

+

{{ commenter_name }} mentioned you in {{ app_name }}.

+
+
+

{{ comment_content }}

+
+

Open {{ application_title }} to reply to the comment.

+
+ + + diff --git a/api/templates/without-brand/workflow_comment_mention_template_zh-CN.html b/api/templates/without-brand/workflow_comment_mention_template_zh-CN.html new file mode 100644 index 0000000000..8b9b2dbe71 --- /dev/null +++ b/api/templates/without-brand/workflow_comment_mention_template_zh-CN.html @@ -0,0 +1,119 @@ + + + + + + + + +
+
+ Dify Logo +
+

你在工作流评论中被提及

+
+

你好,{{ mentioned_name }}:

+

{{ commenter_name }} 在 {{ app_name }} 中提及了你。

+
+
+

{{ comment_content }}

+
+

请在 {{ application_title }} 中查看并回复此评论。

+
+ + + diff --git a/api/templates/workflow_comment_mention_template_en-US.html b/api/templates/workflow_comment_mention_template_en-US.html new file mode 100644 index 0000000000..1ef8fe4e3f --- /dev/null +++ b/api/templates/workflow_comment_mention_template_en-US.html @@ -0,0 +1,119 @@ + + + + + + + + +
+
+ Dify Logo +
+

You were mentioned in a workflow comment

+
+

Hi {{ mentioned_name }},

+

{{ commenter_name }} mentioned you in {{ app_name }}.

+
+
+

{{ comment_content }}

+
+

Open {{ application_title }} to reply to the comment.

+
+ + + diff --git a/api/templates/workflow_comment_mention_template_zh-CN.html b/api/templates/workflow_comment_mention_template_zh-CN.html new file mode 100644 index 0000000000..8b9b2dbe71 --- /dev/null +++ b/api/templates/workflow_comment_mention_template_zh-CN.html @@ -0,0 +1,119 @@ + + + + + + + + +
+
+ Dify Logo +
+

你在工作流评论中被提及

+
+

你好,{{ mentioned_name }}:

+

{{ commenter_name }} 在 {{ app_name }} 中提及了你。

+
+
+

{{ comment_content }}

+
+

请在 {{ application_title }} 中查看并回复此评论。

+
+ + + diff --git a/api/tests/__init__.py b/api/tests/__init__.py index e69de29bb2..ced6188ce8 100644 --- a/api/tests/__init__.py +++ b/api/tests/__init__.py @@ -0,0 +1 @@ +"""Test suite root package (enables ``import tests.integration_tests...`` with ``pythonpath = .``).""" diff --git a/api/tests/integration_tests/.env.example b/api/tests/integration_tests/.env.example index f84d39aeb5..c07ab6d6bf 100644 --- a/api/tests/integration_tests/.env.example +++ b/api/tests/integration_tests/.env.example @@ -33,6 +33,7 @@ REDIS_USERNAME= REDIS_PASSWORD=difyai123456 REDIS_USE_SSL=false REDIS_DB=0 +REDIS_KEY_PREFIX= # PostgreSQL database configuration DB_USERNAME=postgres diff --git a/api/tests/integration_tests/__init__.py b/api/tests/integration_tests/__init__.py index e69de29bb2..c66cd71b7e 100644 --- a/api/tests/integration_tests/__init__.py +++ b/api/tests/integration_tests/__init__.py @@ -0,0 +1 @@ +"""Integration tests package.""" diff --git a/api/tests/integration_tests/conftest.py b/api/tests/integration_tests/conftest.py index 44adadeaa5..09078d196d 100644 --- a/api/tests/integration_tests/conftest.py +++ b/api/tests/integration_tests/conftest.py @@ -8,6 +8,7 @@ from collections.abc import Generator import pytest from flask import Flask from flask.testing import FlaskClient +from sqlalchemy import delete, select from sqlalchemy.orm import Session from app_factory import create_app @@ -47,7 +48,7 @@ os.environ["OPENDAL_FS_ROOT"] = "/tmp/dify-storage" os.environ.setdefault("STORAGE_TYPE", "opendal") os.environ.setdefault("OPENDAL_SCHEME", "fs") -_CACHED_APP = create_app() +_SIO_APP, _CACHED_APP = create_app() @pytest.fixture(scope="session") @@ -83,15 +84,15 @@ def setup_account(request) -> Generator[Account, None, None]: with _CACHED_APP.test_request_context(): with Session(bind=db.engine, expire_on_commit=False) as session: - account = session.query(Account).filter_by(email=email).one() + account = session.scalars(select(Account).filter_by(email=email)).one() yield account with _CACHED_APP.test_request_context(): - db.session.query(DifySetup).delete() - db.session.query(TenantAccountJoin).delete() - db.session.query(Account).delete() - db.session.query(Tenant).delete() + db.session.execute(delete(DifySetup)) + db.session.execute(delete(TenantAccountJoin)) + db.session.execute(delete(Account)) + db.session.execute(delete(Tenant)) db.session.commit() diff --git a/api/tests/integration_tests/controllers/console/app/test_workflow_draft_variable.py b/api/tests/integration_tests/controllers/console/app/test_workflow_draft_variable.py deleted file mode 100644 index 038f37af5f..0000000000 --- a/api/tests/integration_tests/controllers/console/app/test_workflow_draft_variable.py +++ /dev/null @@ -1,47 +0,0 @@ -import uuid -from unittest import mock - -from controllers.console.app import workflow_draft_variable as draft_variable_api -from controllers.console.app import wraps -from factories.variable_factory import build_segment -from models import App, AppMode -from models.workflow import WorkflowDraftVariable -from services.workflow_draft_variable_service import WorkflowDraftVariableList, WorkflowDraftVariableService - - -def _get_mock_srv_class() -> type[WorkflowDraftVariableService]: - return mock.create_autospec(WorkflowDraftVariableService) - - -class TestWorkflowDraftNodeVariableListApi: - def test_get(self, test_client, auth_header, monkeypatch): - srv_class = _get_mock_srv_class() - mock_app_model: App = App() - mock_app_model.id = str(uuid.uuid4()) - test_node_id = "test_node_id" - mock_app_model.mode = AppMode.ADVANCED_CHAT - mock_load_app_model = mock.Mock(return_value=mock_app_model) - - monkeypatch.setattr(draft_variable_api, "WorkflowDraftVariableService", srv_class) - monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model) - - var1 = WorkflowDraftVariable.new_node_variable( - app_id="test_app_1", - node_id="test_node_1", - name="str_var", - value=build_segment("str_value"), - node_execution_id=str(uuid.uuid4()), - ) - srv_instance = mock.create_autospec(WorkflowDraftVariableService, instance=True) - srv_class.return_value = srv_instance - srv_instance.list_node_variables.return_value = WorkflowDraftVariableList(variables=[var1]) - - response = test_client.get( - f"/console/api/apps/{mock_app_model.id}/workflows/draft/nodes/{test_node_id}/variables", - headers=auth_header, - ) - assert response.status_code == 200 - response_dict = response.json - assert isinstance(response_dict, dict) - assert "items" in response_dict - assert len(response_dict["items"]) == 1 diff --git a/api/tests/integration_tests/controllers/console/workspace/test_trigger_provider_permissions.py b/api/tests/integration_tests/controllers/console/workspace/test_trigger_provider_permissions.py deleted file mode 100644 index e55c12e678..0000000000 --- a/api/tests/integration_tests/controllers/console/workspace/test_trigger_provider_permissions.py +++ /dev/null @@ -1,244 +0,0 @@ -"""Integration tests for Trigger Provider subscription permission verification.""" - -import uuid -from unittest import mock - -import pytest -from flask.testing import FlaskClient - -from controllers.console.workspace import trigger_providers as trigger_providers_api -from libs.datetime_utils import naive_utc_now -from models import Tenant -from models.account import Account, TenantAccountJoin, TenantAccountRole - - -class TestTriggerProviderSubscriptionPermissions: - """Test permission verification for Trigger Provider subscription endpoints.""" - - @pytest.fixture - def mock_account(self, monkeypatch: pytest.MonkeyPatch): - """Create a mock Account for testing.""" - - account = Account(name="Test User", email="test@example.com") - account.id = str(uuid.uuid4()) - account.last_active_at = naive_utc_now() - account.created_at = naive_utc_now() - account.updated_at = naive_utc_now() - - # Create mock tenant - tenant = Tenant(name="Test Tenant") - tenant.id = str(uuid.uuid4()) - - mock_session_instance = mock.Mock() - - mock_tenant_join = TenantAccountJoin(role=TenantAccountRole.OWNER) - monkeypatch.setattr(mock_session_instance, "scalar", mock.Mock(return_value=mock_tenant_join)) - - mock_scalars_result = mock.Mock() - mock_scalars_result.one.return_value = tenant - monkeypatch.setattr(mock_session_instance, "scalars", mock.Mock(return_value=mock_scalars_result)) - - mock_session_context = mock.Mock() - mock_session_context.__enter__.return_value = mock_session_instance - monkeypatch.setattr("models.account.Session", lambda _, expire_on_commit: mock_session_context) - - account.current_tenant = tenant - account.current_tenant_id = tenant.id - return account - - @pytest.mark.parametrize( - ("role", "list_status", "get_status", "update_status", "create_status", "build_status", "delete_status"), - [ - # Admin/Owner can do everything - (TenantAccountRole.OWNER, 200, 200, 200, 200, 200, 200), - (TenantAccountRole.ADMIN, 200, 200, 200, 200, 200, 200), - # Editor can list, get, update (parameters), but not create, build, or delete - (TenantAccountRole.EDITOR, 200, 200, 200, 403, 403, 403), - # Normal user cannot do anything - (TenantAccountRole.NORMAL, 403, 403, 403, 403, 403, 403), - # Dataset operator cannot do anything - (TenantAccountRole.DATASET_OPERATOR, 403, 403, 403, 403, 403, 403), - ], - ) - def test_trigger_subscription_permissions( - self, - test_client: FlaskClient, - auth_header, - monkeypatch, - mock_account, - role: TenantAccountRole, - list_status: int, - get_status: int, - update_status: int, - create_status: int, - build_status: int, - delete_status: int, - ): - """Test that different roles have appropriate permissions for trigger subscription operations.""" - # Set user role - mock_account.role = role - - # Mock current user - monkeypatch.setattr(trigger_providers_api, "current_user", mock_account) - - # Mock AccountService.load_user to prevent authentication issues - from services.account_service import AccountService - - mock_load_user = mock.Mock(return_value=mock_account) - monkeypatch.setattr(AccountService, "load_user", mock_load_user) - - # Test data - provider = "some_provider/some_trigger" - subscription_builder_id = str(uuid.uuid4()) - subscription_id = str(uuid.uuid4()) - - # Mock service methods - mock_list_subscriptions = mock.Mock(return_value=[]) - monkeypatch.setattr( - "services.trigger.trigger_provider_service.TriggerProviderService.list_trigger_provider_subscriptions", - mock_list_subscriptions, - ) - - mock_get_subscription_builder = mock.Mock(return_value={"id": subscription_builder_id}) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.get_subscription_builder_by_id", - mock_get_subscription_builder, - ) - - mock_update_subscription_builder = mock.Mock(return_value={"id": subscription_builder_id}) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.update_trigger_subscription_builder", - mock_update_subscription_builder, - ) - - mock_create_subscription_builder = mock.Mock(return_value={"id": subscription_builder_id}) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.create_trigger_subscription_builder", - mock_create_subscription_builder, - ) - - mock_update_and_build_builder = mock.Mock() - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.update_and_build_builder", - mock_update_and_build_builder, - ) - - mock_delete_provider = mock.Mock() - mock_delete_plugin_trigger = mock.Mock() - mock_db_session = mock.Mock() - mock_db_session.commit = mock.Mock() - - def mock_session_func(engine=None): - return mock_session_context - - mock_session_context = mock.Mock() - mock_session_context.__enter__.return_value = mock_db_session - mock_session_context.__exit__.return_value = None - - monkeypatch.setattr("services.trigger.trigger_provider_service.Session", mock_session_func) - monkeypatch.setattr("services.trigger.trigger_subscription_operator_service.Session", mock_session_func) - - monkeypatch.setattr( - "services.trigger.trigger_provider_service.TriggerProviderService.delete_trigger_provider", - mock_delete_provider, - ) - monkeypatch.setattr( - "services.trigger.trigger_subscription_operator_service.TriggerSubscriptionOperatorService.delete_plugin_trigger_by_subscription", - mock_delete_plugin_trigger, - ) - - # Test 1: List subscriptions (should work for Editor, Admin, Owner) - response = test_client.get( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/list", - headers=auth_header, - ) - assert response.status_code == list_status - - # Test 2: Get subscription builder (should work for Editor, Admin, Owner) - response = test_client.get( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/{subscription_builder_id}", - headers=auth_header, - ) - assert response.status_code == get_status - - # Test 3: Update subscription builder parameters (should work for Editor, Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/update/{subscription_builder_id}", - headers=auth_header, - json={"parameters": {"webhook_url": "https://example.com/webhook"}}, - ) - assert response.status_code == update_status - - # Test 4: Create subscription builder (should only work for Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/create", - headers=auth_header, - json={"credential_type": "api_key"}, - ) - assert response.status_code == create_status - - # Test 5: Build/activate subscription (should only work for Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/build/{subscription_builder_id}", - headers=auth_header, - json={"name": "Test Subscription"}, - ) - assert response.status_code == build_status - - # Test 6: Delete subscription (should only work for Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{subscription_id}/subscriptions/delete", - headers=auth_header, - ) - assert response.status_code == delete_status - - @pytest.mark.parametrize( - ("role", "status"), - [ - (TenantAccountRole.OWNER, 200), - (TenantAccountRole.ADMIN, 200), - # Editor should be able to access logs for debugging - (TenantAccountRole.EDITOR, 200), - (TenantAccountRole.NORMAL, 403), - (TenantAccountRole.DATASET_OPERATOR, 403), - ], - ) - def test_trigger_subscription_logs_permissions( - self, - test_client: FlaskClient, - auth_header, - monkeypatch, - mock_account, - role: TenantAccountRole, - status: int, - ): - """Test that different roles have appropriate permissions for accessing subscription logs.""" - # Set user role - mock_account.role = role - - # Mock current user - monkeypatch.setattr(trigger_providers_api, "current_user", mock_account) - - # Mock AccountService.load_user to prevent authentication issues - from services.account_service import AccountService - - mock_load_user = mock.Mock(return_value=mock_account) - monkeypatch.setattr(AccountService, "load_user", mock_load_user) - - # Test data - provider = "some_provider/some_trigger" - subscription_builder_id = str(uuid.uuid4()) - - # Mock service method - mock_list_logs = mock.Mock(return_value=[]) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.list_logs", - mock_list_logs, - ) - - # Test access to logs - response = test_client.get( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/logs/{subscription_builder_id}", - headers=auth_header, - ) - assert response.status_code == status diff --git a/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py b/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py index 91245e879e..a876b0c4aa 100644 --- a/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py +++ b/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py @@ -1,9 +1,8 @@ from collections.abc import Generator -from graphon.node_events import StreamCompletedEvent - from core.datasource.datasource_manager import DatasourceManager from core.datasource.entities.datasource_entities import DatasourceMessage +from graphon.node_events import StreamCompletedEvent def _gen_var_stream() -> Generator[DatasourceMessage, None, None]: diff --git a/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py b/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py index 3fdea10976..b5318aaa2b 100644 --- a/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py +++ b/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py @@ -1,8 +1,7 @@ -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult, StreamCompletedEvent - from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY from core.workflow.nodes.datasource.datasource_node import DatasourceNode +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult, StreamCompletedEvent class _Seg: diff --git a/api/tests/integration_tests/factories/test_storage_key_loader.py b/api/tests/integration_tests/factories/test_storage_key_loader.py deleted file mode 100644 index c1bb8e1245..0000000000 --- a/api/tests/integration_tests/factories/test_storage_key_loader.py +++ /dev/null @@ -1,375 +0,0 @@ -import unittest -from datetime import UTC, datetime -from unittest.mock import patch -from uuid import uuid4 - -import pytest -from graphon.file import File, FileTransferMethod, FileType -from sqlalchemy.orm import Session - -from core.app.file_access import DatabaseFileAccessController -from extensions.ext_database import db -from extensions.storage.storage_type import StorageType -from factories.file_factory import StorageKeyLoader -from models import ToolFile, UploadFile -from models.enums import CreatorUserRole - - -@pytest.mark.usefixtures("flask_req_ctx") -class TestStorageKeyLoader(unittest.TestCase): - """ - Integration tests for StorageKeyLoader class. - - Tests the batched loading of storage keys from the database for files - with different transfer methods: LOCAL_FILE, REMOTE_URL, and TOOL_FILE. - """ - - def setUp(self): - """Set up test data before each test method.""" - self.session = db.session() - self.tenant_id = str(uuid4()) - self.user_id = str(uuid4()) - self.conversation_id = str(uuid4()) - - # Create test data that will be cleaned up after each test - self.test_upload_files = [] - self.test_tool_files = [] - - # Create StorageKeyLoader instance - self.loader = StorageKeyLoader( - self.session, - self.tenant_id, - access_controller=DatabaseFileAccessController(), - ) - - def tearDown(self): - """Clean up test data after each test method.""" - self.session.rollback() - - def _create_upload_file( - self, file_id: str | None = None, storage_key: str | None = None, tenant_id: str | None = None - ) -> UploadFile: - """Helper method to create an UploadFile record for testing.""" - if file_id is None: - file_id = str(uuid4()) - if storage_key is None: - storage_key = f"test_storage_key_{uuid4()}" - if tenant_id is None: - tenant_id = self.tenant_id - - upload_file = UploadFile( - tenant_id=tenant_id, - storage_type=StorageType.LOCAL, - key=storage_key, - name="test_file.txt", - size=1024, - extension=".txt", - mime_type="text/plain", - created_by_role=CreatorUserRole.ACCOUNT, - created_by=self.user_id, - created_at=datetime.now(UTC), - used=False, - ) - upload_file.id = file_id - - self.session.add(upload_file) - self.session.flush() - self.test_upload_files.append(upload_file) - - return upload_file - - def _create_tool_file( - self, file_id: str | None = None, file_key: str | None = None, tenant_id: str | None = None - ) -> ToolFile: - """Helper method to create a ToolFile record for testing.""" - if file_id is None: - file_id = str(uuid4()) - if file_key is None: - file_key = f"test_file_key_{uuid4()}" - if tenant_id is None: - tenant_id = self.tenant_id - - tool_file = ToolFile( - user_id=self.user_id, - tenant_id=tenant_id, - conversation_id=self.conversation_id, - file_key=file_key, - mimetype="text/plain", - original_url="http://example.com/file.txt", - name="test_tool_file.txt", - size=2048, - ) - tool_file.id = file_id - self.session.add(tool_file) - self.session.flush() - self.test_tool_files.append(tool_file) - - return tool_file - - def _create_file(self, related_id: str, transfer_method: FileTransferMethod, tenant_id: str | None = None) -> File: - """Helper method to create a File object for testing.""" - if tenant_id is None: - tenant_id = self.tenant_id - - # Set related_id for LOCAL_FILE and TOOL_FILE transfer methods - file_related_id = None - remote_url = None - - if transfer_method in (FileTransferMethod.LOCAL_FILE, FileTransferMethod.TOOL_FILE): - file_related_id = related_id - elif transfer_method == FileTransferMethod.REMOTE_URL: - remote_url = "https://example.com/test_file.txt" - file_related_id = related_id - - return File( - id=str(uuid4()), # Generate new UUID for File.id - tenant_id=tenant_id, - type=FileType.DOCUMENT, - transfer_method=transfer_method, - related_id=file_related_id, - remote_url=remote_url, - filename="test_file.txt", - extension=".txt", - mime_type="text/plain", - size=1024, - storage_key="initial_key", - ) - - def test_load_storage_keys_local_file(self): - """Test loading storage keys for LOCAL_FILE transfer method.""" - # Create test data - upload_file = self._create_upload_file() - file = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Load storage keys - self.loader.load_storage_keys([file]) - - # Verify storage key was loaded correctly - assert file._storage_key == upload_file.key - - def test_load_storage_keys_remote_url(self): - """Test loading storage keys for REMOTE_URL transfer method.""" - # Create test data - upload_file = self._create_upload_file() - file = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.REMOTE_URL) - - # Load storage keys - self.loader.load_storage_keys([file]) - - # Verify storage key was loaded correctly - assert file._storage_key == upload_file.key - - def test_load_storage_keys_tool_file(self): - """Test loading storage keys for TOOL_FILE transfer method.""" - # Create test data - tool_file = self._create_tool_file() - file = self._create_file(related_id=tool_file.id, transfer_method=FileTransferMethod.TOOL_FILE) - - # Load storage keys - self.loader.load_storage_keys([file]) - - # Verify storage key was loaded correctly - assert file._storage_key == tool_file.file_key - - def test_load_storage_keys_mixed_methods(self): - """Test batch loading with mixed transfer methods.""" - # Create test data for different transfer methods - upload_file1 = self._create_upload_file() - upload_file2 = self._create_upload_file() - tool_file = self._create_tool_file() - - file1 = self._create_file(related_id=upload_file1.id, transfer_method=FileTransferMethod.LOCAL_FILE) - file2 = self._create_file(related_id=upload_file2.id, transfer_method=FileTransferMethod.REMOTE_URL) - file3 = self._create_file(related_id=tool_file.id, transfer_method=FileTransferMethod.TOOL_FILE) - - files = [file1, file2, file3] - - # Load storage keys - self.loader.load_storage_keys(files) - - # Verify all storage keys were loaded correctly - assert file1._storage_key == upload_file1.key - assert file2._storage_key == upload_file2.key - assert file3._storage_key == tool_file.file_key - - def test_load_storage_keys_empty_list(self): - """Test with empty file list.""" - # Should not raise any exceptions - self.loader.load_storage_keys([]) - - def test_load_storage_keys_ignores_legacy_file_tenant_id(self): - """Legacy file tenant_id should not override the loader tenant scope.""" - upload_file = self._create_upload_file() - file = self._create_file( - related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE, tenant_id=str(uuid4()) - ) - - self.loader.load_storage_keys([file]) - - assert file._storage_key == upload_file.key - - def test_load_storage_keys_missing_file_id(self): - """Test with None file.related_id.""" - # Create a file with valid parameters first, then manually set related_id to None - file = self._create_file(related_id=str(uuid4()), transfer_method=FileTransferMethod.LOCAL_FILE) - file.related_id = None - - # Should raise ValueError for None file related_id - with pytest.raises(ValueError) as context: - self.loader.load_storage_keys([file]) - - assert str(context.value) == "file id should not be None." - - def test_load_storage_keys_nonexistent_upload_file_records(self): - """Test with missing UploadFile database records.""" - # Create file with non-existent upload file id - non_existent_id = str(uuid4()) - file = self._create_file(related_id=non_existent_id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Should raise ValueError for missing record - with pytest.raises(ValueError): - self.loader.load_storage_keys([file]) - - def test_load_storage_keys_nonexistent_tool_file_records(self): - """Test with missing ToolFile database records.""" - # Create file with non-existent tool file id - non_existent_id = str(uuid4()) - file = self._create_file(related_id=non_existent_id, transfer_method=FileTransferMethod.TOOL_FILE) - - # Should raise ValueError for missing record - with pytest.raises(ValueError): - self.loader.load_storage_keys([file]) - - def test_load_storage_keys_invalid_uuid(self): - """Test with invalid UUID format.""" - # Create a file with valid parameters first, then manually set invalid related_id - file = self._create_file(related_id=str(uuid4()), transfer_method=FileTransferMethod.LOCAL_FILE) - file.related_id = "invalid-uuid-format" - - # Should raise ValueError for invalid UUID - with pytest.raises(ValueError): - self.loader.load_storage_keys([file]) - - def test_load_storage_keys_batch_efficiency(self): - """Test batched operations use efficient queries.""" - # Create multiple files of different types - upload_files = [self._create_upload_file() for _ in range(3)] - tool_files = [self._create_tool_file() for _ in range(2)] - - files = [] - files.extend( - [self._create_file(related_id=uf.id, transfer_method=FileTransferMethod.LOCAL_FILE) for uf in upload_files] - ) - files.extend( - [self._create_file(related_id=tf.id, transfer_method=FileTransferMethod.TOOL_FILE) for tf in tool_files] - ) - - # Mock the session to count queries - with patch.object(self.session, "scalars", wraps=self.session.scalars) as mock_scalars: - self.loader.load_storage_keys(files) - - # Should make exactly 2 queries (one for upload_files, one for tool_files) - assert mock_scalars.call_count == 2 - - # Verify all storage keys were loaded correctly - for i, file in enumerate(files[:3]): - assert file._storage_key == upload_files[i].key - for i, file in enumerate(files[3:]): - assert file._storage_key == tool_files[i].file_key - - def test_load_storage_keys_tenant_isolation(self): - """Test that tenant isolation works correctly.""" - # Create files for different tenants - other_tenant_id = str(uuid4()) - - # Create upload file for current tenant - upload_file_current = self._create_upload_file() - file_current = self._create_file( - related_id=upload_file_current.id, transfer_method=FileTransferMethod.LOCAL_FILE - ) - - # Create upload file for other tenant (but don't add to cleanup list) - upload_file_other = UploadFile( - tenant_id=other_tenant_id, - storage_type=StorageType.LOCAL, - key="other_tenant_key", - name="other_file.txt", - size=1024, - extension=".txt", - mime_type="text/plain", - created_by_role=CreatorUserRole.ACCOUNT, - created_by=self.user_id, - created_at=datetime.now(UTC), - used=False, - ) - upload_file_other.id = str(uuid4()) - self.session.add(upload_file_other) - self.session.flush() - - # Create file for other tenant but try to load with current tenant's loader - file_other = self._create_file( - related_id=upload_file_other.id, transfer_method=FileTransferMethod.LOCAL_FILE, tenant_id=other_tenant_id - ) - - # Should raise ValueError due to tenant mismatch - with pytest.raises(ValueError) as context: - self.loader.load_storage_keys([file_other]) - - assert "Upload file not found for id:" in str(context.value) - - # Current tenant's file should still work - self.loader.load_storage_keys([file_current]) - assert file_current._storage_key == upload_file_current.key - - def test_load_storage_keys_mixed_tenant_batch(self): - """Test batch with mixed tenant files (should fail on first mismatch).""" - # Create files for current tenant - upload_file_current = self._create_upload_file() - file_current = self._create_file( - related_id=upload_file_current.id, transfer_method=FileTransferMethod.LOCAL_FILE - ) - - # Create file for different tenant - other_tenant_id = str(uuid4()) - file_other = self._create_file( - related_id=str(uuid4()), transfer_method=FileTransferMethod.LOCAL_FILE, tenant_id=other_tenant_id - ) - - # Should raise ValueError on tenant mismatch - with pytest.raises(ValueError) as context: - self.loader.load_storage_keys([file_current, file_other]) - - assert "Upload file not found for id:" in str(context.value) - - def test_load_storage_keys_duplicate_file_ids(self): - """Test handling of duplicate file IDs in the batch.""" - # Create upload file - upload_file = self._create_upload_file() - - # Create two File objects with same related_id - file1 = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - file2 = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Should handle duplicates gracefully - self.loader.load_storage_keys([file1, file2]) - - # Both files should have the same storage key - assert file1._storage_key == upload_file.key - assert file2._storage_key == upload_file.key - - def test_load_storage_keys_session_isolation(self): - """Test that the loader uses the provided session correctly.""" - # Create test data - upload_file = self._create_upload_file() - file = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Create loader with different session (same underlying connection) - - with Session(bind=db.engine) as other_session: - other_loader = StorageKeyLoader( - other_session, - self.tenant_id, - access_controller=DatabaseFileAccessController(), - ) - with pytest.raises(ValueError): - other_loader.load_storage_keys([file]) diff --git a/api/tests/integration_tests/model_runtime/__mock/plugin_model.py b/api/tests/integration_tests/model_runtime/__mock/plugin_model.py index ce04a158a8..c4146d5ccd 100644 --- a/api/tests/integration_tests/model_runtime/__mock/plugin_model.py +++ b/api/tests/integration_tests/model_runtime/__mock/plugin_model.py @@ -4,6 +4,9 @@ from collections.abc import Generator, Sequence from decimal import Decimal from json import dumps +from core.plugin.entities.plugin_daemon import PluginModelProviderEntity +from core.plugin.impl.model import PluginModelClient + # import monkeypatch from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.llm_entities import ( @@ -23,9 +26,6 @@ from graphon.model_runtime.entities.model_entities import ( ) from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity -from core.plugin.entities.plugin_daemon import PluginModelProviderEntity -from core.plugin.impl.model import PluginModelClient - class MockModelClass(PluginModelClient): def fetch_model_providers(self, tenant_id: str) -> Sequence[PluginModelProviderEntity]: diff --git a/api/tests/integration_tests/services/plugin/test_plugin_lifecycle.py b/api/tests/integration_tests/services/plugin/test_plugin_lifecycle.py index 951a5ab4b4..0a19debc39 100644 --- a/api/tests/integration_tests/services/plugin/test_plugin_lifecycle.py +++ b/api/tests/integration_tests/services/plugin/test_plugin_lifecycle.py @@ -1,5 +1,5 @@ import pytest -from sqlalchemy import delete +from sqlalchemy import delete, func, select from core.db.session_factory import session_factory from models import Tenant @@ -61,7 +61,11 @@ class TestPluginPermissionLifecycle: assert perm.debug_permission == TenantPluginPermission.DebugPermission.ADMINS with session_factory.create_session() as session: - count = session.query(TenantPluginPermission).where(TenantPluginPermission.tenant_id == tenant).count() + count = session.scalar( + select(func.count()) + .select_from(TenantPluginPermission) + .where(TenantPluginPermission.tenant_id == tenant) + ) assert count == 1 diff --git a/api/tests/integration_tests/services/retention/test_messages_clean_service.py b/api/tests/integration_tests/services/retention/test_messages_clean_service.py index 348bb0af4a..352960bcc2 100644 --- a/api/tests/integration_tests/services/retention/test_messages_clean_service.py +++ b/api/tests/integration_tests/services/retention/test_messages_clean_service.py @@ -3,7 +3,7 @@ import math import uuid import pytest -from sqlalchemy import delete +from sqlalchemy import delete, func, select from core.db.session_factory import session_factory from models import Tenant @@ -210,7 +210,7 @@ class TestMessagesCleanServiceIntegration: assert stats["total_deleted"] == 0 with session_factory.create_session() as session: - remaining = session.query(Message).where(Message.id.in_(all_ids)).count() + remaining = session.scalar(select(func.count()).select_from(Message).where(Message.id.in_(all_ids))) assert remaining == len(all_ids) def test_billing_disabled_deletes_all_in_range(self, seed_messages): @@ -231,7 +231,7 @@ class TestMessagesCleanServiceIntegration: assert stats["total_deleted"] == len(all_ids) with session_factory.create_session() as session: - remaining = session.query(Message).where(Message.id.in_(all_ids)).count() + remaining = session.scalar(select(func.count()).select_from(Message).where(Message.id.in_(all_ids))) assert remaining == 0 def test_start_from_filters_correctly(self, seed_messages): @@ -254,7 +254,7 @@ class TestMessagesCleanServiceIntegration: with session_factory.create_session() as session: all_ids = list(msg_ids.values()) - remaining_ids = {r[0] for r in session.query(Message.id).where(Message.id.in_(all_ids)).all()} + remaining_ids = set(session.scalars(select(Message.id).where(Message.id.in_(all_ids))).all()) assert msg_ids["old"] not in remaining_ids assert msg_ids["very_old"] in remaining_ids @@ -282,7 +282,7 @@ class TestMessagesCleanServiceIntegration: assert stats["batches"] >= expected_batches with session_factory.create_session() as session: - remaining = session.query(Message).where(Message.id.in_(msg_ids)).count() + remaining = session.scalar(select(func.count()).select_from(Message).where(Message.id.in_(msg_ids))) assert remaining == 0 def test_no_messages_in_range_returns_empty_stats(self, seed_messages): @@ -319,9 +319,17 @@ class TestMessagesCleanServiceIntegration: assert stats["total_deleted"] == 1 with session_factory.create_session() as session: - assert session.query(Message).where(Message.id == msg_id).count() == 0 - assert session.query(MessageFeedback).where(MessageFeedback.id == fb_id).count() == 0 - assert session.query(MessageAnnotation).where(MessageAnnotation.id == ann_id).count() == 0 + assert session.scalar(select(func.count()).select_from(Message).where(Message.id == msg_id)) == 0 + assert ( + session.scalar(select(func.count()).select_from(MessageFeedback).where(MessageFeedback.id == fb_id)) + == 0 + ) + assert ( + session.scalar( + select(func.count()).select_from(MessageAnnotation).where(MessageAnnotation.id == ann_id) + ) + == 0 + ) def test_factory_from_time_range_validation(self): with pytest.raises(ValueError, match="start_from"): diff --git a/api/tests/integration_tests/services/test_workflow_draft_variable_service.py b/api/tests/integration_tests/services/test_workflow_draft_variable_service.py index 5c6636f31e..e130644338 100644 --- a/api/tests/integration_tests/services/test_workflow_draft_variable_service.py +++ b/api/tests/integration_tests/services/test_workflow_draft_variable_service.py @@ -3,11 +3,7 @@ import unittest import uuid import pytest -from graphon.nodes import BuiltinNodeTypes -from graphon.variables.segments import StringSegment -from graphon.variables.types import SegmentType -from graphon.variables.variables import StringVariable -from sqlalchemy import delete +from sqlalchemy import delete, func, select from sqlalchemy.orm import Session from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID @@ -15,6 +11,10 @@ from extensions.ext_database import db from extensions.ext_storage import storage from extensions.storage.storage_type import StorageType from factories.variable_factory import build_segment +from graphon.nodes import BuiltinNodeTypes +from graphon.variables.segments import StringSegment +from graphon.variables.types import SegmentType +from graphon.variables.variables import StringVariable from libs import datetime_utils from models.enums import CreatorUserRole from models.model import UploadFile @@ -38,21 +38,25 @@ class TestWorkflowDraftVariableService(unittest.TestCase): def setUp(self): self._test_app_id = str(uuid.uuid4()) + self._test_user_id = str(uuid.uuid4()) self._session: Session = db.session() sys_var = WorkflowDraftVariable.new_sys_variable( app_id=self._test_app_id, + user_id=self._test_user_id, name="sys_var", value=build_segment("sys_value"), node_execution_id=self._node_exec_id, ) conv_var = WorkflowDraftVariable.new_conversation_variable( app_id=self._test_app_id, + user_id=self._test_user_id, name="conv_var", value=build_segment("conv_value"), ) node2_vars = [ WorkflowDraftVariable.new_node_variable( app_id=self._test_app_id, + user_id=self._test_user_id, node_id=self._node2_id, name="int_var", value=build_segment(1), @@ -61,6 +65,7 @@ class TestWorkflowDraftVariableService(unittest.TestCase): ), WorkflowDraftVariable.new_node_variable( app_id=self._test_app_id, + user_id=self._test_user_id, node_id=self._node2_id, name="str_var", value=build_segment("str_value"), @@ -70,6 +75,7 @@ class TestWorkflowDraftVariableService(unittest.TestCase): ] node1_var = WorkflowDraftVariable.new_node_variable( app_id=self._test_app_id, + user_id=self._test_user_id, node_id=self._node1_id, name="str_var", value=build_segment("str_value"), @@ -141,24 +147,27 @@ class TestWorkflowDraftVariableService(unittest.TestCase): def test_delete_node_variables(self): srv = self._get_test_srv() srv.delete_node_variables(self._test_app_id, self._node2_id, user_id=self._test_user_id) - node2_var_count = ( - self._session.query(WorkflowDraftVariable) + node2_var_count = self._session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariable) .where( WorkflowDraftVariable.app_id == self._test_app_id, WorkflowDraftVariable.node_id == self._node2_id, + WorkflowDraftVariable.user_id == self._test_user_id, ) - .count() ) assert node2_var_count == 0 def test_delete_variable(self): srv = self._get_test_srv() - node_1_var = ( - self._session.query(WorkflowDraftVariable).where(WorkflowDraftVariable.id == self._node1_str_var_id).one() - ) + node_1_var = self._session.scalars( + select(WorkflowDraftVariable).where(WorkflowDraftVariable.id == self._node1_str_var_id) + ).one() srv.delete_variable(node_1_var) exists = bool( - self._session.query(WorkflowDraftVariable).where(WorkflowDraftVariable.id == self._node1_str_var_id).first() + self._session.scalars( + select(WorkflowDraftVariable).where(WorkflowDraftVariable.id == self._node1_str_var_id) + ).first() ) assert exists is False @@ -248,9 +257,7 @@ class TestDraftVariableLoader(unittest.TestCase): def tearDown(self): with Session(bind=db.engine, expire_on_commit=False) as session: - session.query(WorkflowDraftVariable).where(WorkflowDraftVariable.app_id == self._test_app_id).delete( - synchronize_session=False - ) + session.execute(delete(WorkflowDraftVariable).where(WorkflowDraftVariable.app_id == self._test_app_id)) session.commit() def test_variable_loader_with_empty_selector(self): @@ -431,9 +438,11 @@ class TestDraftVariableLoader(unittest.TestCase): # Clean up with Session(bind=db.engine) as session: # Query and delete by ID to ensure they're tracked in this session - session.query(WorkflowDraftVariable).filter_by(id=offloaded_var.id).delete() - session.query(WorkflowDraftVariableFile).filter_by(id=variable_file.id).delete() - session.query(UploadFile).filter_by(id=upload_file.id).delete() + session.execute(delete(WorkflowDraftVariable).where(WorkflowDraftVariable.id == offloaded_var.id)) + session.execute( + delete(WorkflowDraftVariableFile).where(WorkflowDraftVariableFile.id == variable_file.id) + ) + session.execute(delete(UploadFile).where(UploadFile.id == upload_file.id)) session.commit() # Clean up storage try: @@ -534,9 +543,11 @@ class TestDraftVariableLoader(unittest.TestCase): # Clean up with Session(bind=db.engine) as session: # Query and delete by ID to ensure they're tracked in this session - session.query(WorkflowDraftVariable).filter_by(id=offloaded_var.id).delete() - session.query(WorkflowDraftVariableFile).filter_by(id=variable_file.id).delete() - session.query(UploadFile).filter_by(id=upload_file.id).delete() + session.execute(delete(WorkflowDraftVariable).where(WorkflowDraftVariable.id == offloaded_var.id)) + session.execute( + delete(WorkflowDraftVariableFile).where(WorkflowDraftVariableFile.id == variable_file.id) + ) + session.execute(delete(UploadFile).where(UploadFile.id == upload_file.id)) session.commit() # Clean up storage try: diff --git a/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py b/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py index 38dc8bbb28..4f444598b1 100644 --- a/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py +++ b/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py @@ -2,11 +2,11 @@ import uuid from unittest.mock import patch import pytest -from graphon.variables.segments import StringSegment -from sqlalchemy import delete +from sqlalchemy import delete, func, select from core.db.session_factory import session_factory from extensions.storage.storage_type import StorageType +from graphon.variables.segments import StringSegment from models import Tenant from models.enums import CreatorUserRole from models.model import App, UploadFile @@ -108,8 +108,12 @@ class TestDeleteDraftVariablesIntegration: app2_id = data["app2"].id with session_factory.create_session() as session: - app1_vars_before = session.query(WorkflowDraftVariable).filter_by(app_id=app1_id).count() - app2_vars_before = session.query(WorkflowDraftVariable).filter_by(app_id=app2_id).count() + app1_vars_before = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app1_id) + ) + app2_vars_before = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app2_id) + ) assert app1_vars_before == 5 assert app2_vars_before == 5 @@ -117,8 +121,12 @@ class TestDeleteDraftVariablesIntegration: assert deleted_count == 5 with session_factory.create_session() as session: - app1_vars_after = session.query(WorkflowDraftVariable).filter_by(app_id=app1_id).count() - app2_vars_after = session.query(WorkflowDraftVariable).filter_by(app_id=app2_id).count() + app1_vars_after = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app1_id) + ) + app2_vars_after = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app2_id) + ) assert app1_vars_after == 0 assert app2_vars_after == 5 @@ -130,7 +138,9 @@ class TestDeleteDraftVariablesIntegration: assert deleted_count == 5 with session_factory.create_session() as session: - remaining_vars = session.query(WorkflowDraftVariable).filter_by(app_id=app1_id).count() + remaining_vars = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app1_id) + ) assert remaining_vars == 0 def test_delete_draft_variables_batch_nonexistent_app(self, setup_test_data): @@ -143,14 +153,18 @@ class TestDeleteDraftVariablesIntegration: app1_id = data["app1"].id with session_factory.create_session() as session: - vars_before = session.query(WorkflowDraftVariable).filter_by(app_id=app1_id).count() + vars_before = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app1_id) + ) assert vars_before == 5 deleted_count = _delete_draft_variables(app1_id) assert deleted_count == 5 with session_factory.create_session() as session: - vars_after = session.query(WorkflowDraftVariable).filter_by(app_id=app1_id).count() + vars_after = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app1_id) + ) assert vars_after == 0 def test_batch_deletion_handles_large_dataset(self, app_and_tenant): @@ -175,7 +189,9 @@ class TestDeleteDraftVariablesIntegration: deleted_count = delete_draft_variables_batch(app.id, batch_size=8) assert deleted_count == 25 with session_factory.create_session() as session: - remaining = session.query(WorkflowDraftVariable).filter_by(app_id=app.id).count() + remaining = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app.id) + ) assert remaining == 0 finally: with session_factory.create_session() as session: @@ -193,7 +209,6 @@ class TestDeleteDraftVariablesWithOffloadIntegration: def setup_offload_test_data(self, app_and_tenant): tenant, app = app_and_tenant from graphon.variables.types import SegmentType - from libs.datetime_utils import naive_utc_now with session_factory.create_session() as session: @@ -307,13 +322,17 @@ class TestDeleteDraftVariablesWithOffloadIntegration: mock_storage.delete.return_value = None with session_factory.create_session() as session: - draft_vars_before = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() - var_files_before = ( - session.query(WorkflowDraftVariableFile) - .where(WorkflowDraftVariableFile.id.in_(variable_file_ids)) - .count() + draft_vars_before = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) + var_files_before = session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_(variable_file_ids)) + ) + upload_files_before = session.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) ) - upload_files_before = session.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).count() assert draft_vars_before == 3 assert var_files_before == 2 assert upload_files_before == 2 @@ -322,16 +341,20 @@ class TestDeleteDraftVariablesWithOffloadIntegration: assert deleted_count == 3 with session_factory.create_session() as session: - draft_vars_after = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + draft_vars_after = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert draft_vars_after == 0 with session_factory.create_session() as session: - var_files_after = ( - session.query(WorkflowDraftVariableFile) + var_files_after = session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) .where(WorkflowDraftVariableFile.id.in_(variable_file_ids)) - .count() ) - upload_files_after = session.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).count() + upload_files_after = session.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ) assert var_files_after == 0 assert upload_files_after == 0 @@ -352,16 +375,20 @@ class TestDeleteDraftVariablesWithOffloadIntegration: assert deleted_count == 3 with session_factory.create_session() as session: - draft_vars_after = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + draft_vars_after = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert draft_vars_after == 0 with session_factory.create_session() as session: - var_files_after = ( - session.query(WorkflowDraftVariableFile) + var_files_after = session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) .where(WorkflowDraftVariableFile.id.in_(variable_file_ids)) - .count() ) - upload_files_after = session.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).count() + upload_files_after = session.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ) assert var_files_after == 0 assert upload_files_after == 0 @@ -425,7 +452,6 @@ class TestDeleteDraftVariablesSessionCommit: def setup_offload_test_data(self, app_and_tenant): """Create test data with offload files for session commit tests.""" from graphon.variables.types import SegmentType - from libs.datetime_utils import naive_utc_now tenant, app = app_and_tenant @@ -579,7 +605,9 @@ class TestDeleteDraftVariablesSessionCommit: # Verify all data was deleted (proves transaction was committed) with session_factory.create_session() as session: - remaining_count = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + remaining_count = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert deleted_count == 10 assert remaining_count == 0 @@ -592,7 +620,9 @@ class TestDeleteDraftVariablesSessionCommit: # Verify initial state with session_factory.create_session() as session: - initial_count = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + initial_count = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert initial_count == 10 # Perform deletion with small batch size to force multiple commits @@ -602,13 +632,17 @@ class TestDeleteDraftVariablesSessionCommit: # Verify all data is deleted in a new session (proves commits worked) with session_factory.create_session() as session: - final_count = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + final_count = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert final_count == 0 # Verify specific IDs are deleted with session_factory.create_session() as session: - remaining_vars = ( - session.query(WorkflowDraftVariable).where(WorkflowDraftVariable.id.in_(variable_ids)).count() + remaining_vars = session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariable) + .where(WorkflowDraftVariable.id.in_(variable_ids)) ) assert remaining_vars == 0 @@ -626,7 +660,9 @@ class TestDeleteDraftVariablesSessionCommit: app_id = data["app"].id with session_factory.create_session() as session: - initial_count = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + initial_count = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert initial_count == 10 # Delete all in a single batch @@ -635,7 +671,9 @@ class TestDeleteDraftVariablesSessionCommit: # Verify data is persisted with session_factory.create_session() as session: - final_count = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() + final_count = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) assert final_count == 0 def test_invalid_batch_size_raises_error(self, setup_commit_test_data): @@ -659,13 +697,17 @@ class TestDeleteDraftVariablesSessionCommit: # Verify initial state with session_factory.create_session() as session: - draft_vars_before = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() - var_files_before = ( - session.query(WorkflowDraftVariableFile) - .where(WorkflowDraftVariableFile.id.in_([vf.id for vf in data["variable_files"]])) - .count() + draft_vars_before = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) + var_files_before = session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_([vf.id for vf in data["variable_files"]])) + ) + upload_files_before = session.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) ) - upload_files_before = session.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).count() assert draft_vars_before == 3 assert var_files_before == 2 assert upload_files_before == 2 @@ -676,13 +718,17 @@ class TestDeleteDraftVariablesSessionCommit: # Verify all data is persisted (deleted) in new session with session_factory.create_session() as session: - draft_vars_after = session.query(WorkflowDraftVariable).filter_by(app_id=app_id).count() - var_files_after = ( - session.query(WorkflowDraftVariableFile) - .where(WorkflowDraftVariableFile.id.in_([vf.id for vf in data["variable_files"]])) - .count() + draft_vars_after = session.scalar( + select(func.count()).select_from(WorkflowDraftVariable).filter_by(app_id=app_id) + ) + var_files_after = session.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_([vf.id for vf in data["variable_files"]])) + ) + upload_files_after = session.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) ) - upload_files_after = session.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).count() assert draft_vars_after == 0 assert var_files_after == 0 assert upload_files_after == 0 diff --git a/api/tests/integration_tests/vdb/analyticdb/__init__.py b/api/tests/integration_tests/vdb/analyticdb/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/baidu/__init__.py b/api/tests/integration_tests/vdb/baidu/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/chroma/__init__.py b/api/tests/integration_tests/vdb/chroma/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/couchbase/__init__.py b/api/tests/integration_tests/vdb/couchbase/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/elasticsearch/__init__.py b/api/tests/integration_tests/vdb/elasticsearch/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/hologres/__init__.py b/api/tests/integration_tests/vdb/hologres/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/huawei/__init__.py b/api/tests/integration_tests/vdb/huawei/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/iris/__init__.py b/api/tests/integration_tests/vdb/iris/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/lindorm/__init__.py b/api/tests/integration_tests/vdb/lindorm/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/matrixone/__init__.py b/api/tests/integration_tests/vdb/matrixone/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/milvus/__init__.py b/api/tests/integration_tests/vdb/milvus/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/myscale/__init__.py b/api/tests/integration_tests/vdb/myscale/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/oceanbase/__init__.py b/api/tests/integration_tests/vdb/oceanbase/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/opengauss/__init__.py b/api/tests/integration_tests/vdb/opengauss/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/opensearch/__init__.py b/api/tests/integration_tests/vdb/opensearch/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/opensearch/test_opensearch.py b/api/tests/integration_tests/vdb/opensearch/test_opensearch.py deleted file mode 100644 index 81ebb1d2f7..0000000000 --- a/api/tests/integration_tests/vdb/opensearch/test_opensearch.py +++ /dev/null @@ -1,235 +0,0 @@ -from unittest.mock import MagicMock, patch - -import pytest - -from core.rag.datasource.vdb.field import Field -from core.rag.datasource.vdb.opensearch.opensearch_vector import OpenSearchConfig, OpenSearchVector -from core.rag.models.document import Document -from extensions import ext_redis - - -def get_example_text() -> str: - return "This is a sample text for testing purposes." - - -@pytest.fixture(scope="module") -def setup_mock_redis(): - ext_redis.redis_client.get = MagicMock(return_value=None) - ext_redis.redis_client.set = MagicMock(return_value=None) - - mock_redis_lock = MagicMock() - mock_redis_lock.__enter__ = MagicMock() - mock_redis_lock.__exit__ = MagicMock() - ext_redis.redis_client.lock = MagicMock(return_value=mock_redis_lock) - - -class TestOpenSearchConfig: - def test_to_opensearch_params(self): - config = OpenSearchConfig( - host="localhost", - port=9200, - secure=True, - user="admin", - password="password", - ) - - params = config.to_opensearch_params() - - assert params["hosts"] == [{"host": "localhost", "port": 9200}] - assert params["use_ssl"] is True - assert params["verify_certs"] is True - assert params["connection_class"].__name__ == "Urllib3HttpConnection" - assert params["http_auth"] == ("admin", "password") - - @patch("boto3.Session", autospec=True) - @patch("core.rag.datasource.vdb.opensearch.opensearch_vector.Urllib3AWSV4SignerAuth", autospec=True) - def test_to_opensearch_params_with_aws_managed_iam( - self, mock_aws_signer_auth: MagicMock, mock_boto_session: MagicMock - ): - mock_credentials = MagicMock() - mock_boto_session.return_value.get_credentials.return_value = mock_credentials - - mock_auth_instance = mock_aws_signer_auth.return_value - aws_region = "ap-southeast-2" - aws_service = "aoss" - host = f"aoss-endpoint.{aws_region}.aoss.amazonaws.com" - port = 9201 - - config = OpenSearchConfig( - host=host, - port=port, - secure=True, - auth_method="aws_managed_iam", - aws_region=aws_region, - aws_service=aws_service, - ) - - params = config.to_opensearch_params() - - assert params["hosts"] == [{"host": host, "port": port}] - assert params["use_ssl"] is True - assert params["verify_certs"] is True - assert params["connection_class"].__name__ == "Urllib3HttpConnection" - assert params["http_auth"] is mock_auth_instance - - mock_aws_signer_auth.assert_called_once_with( - credentials=mock_credentials, region=aws_region, service=aws_service - ) - assert mock_boto_session.return_value.get_credentials.called - - -class TestOpenSearchVector: - def setup_method(self): - self.collection_name = "test_collection" - self.example_doc_id = "example_doc_id" - self.vector = OpenSearchVector( - collection_name=self.collection_name, - config=OpenSearchConfig(host="localhost", port=9200, secure=False, user="admin", password="password"), - ) - self.vector._client = MagicMock() - - @pytest.mark.parametrize( - ("search_response", "expected_length", "expected_doc_id"), - [ - ( - { - "hits": { - "total": {"value": 1}, - "hits": [ - { - "_source": { - "page_content": get_example_text(), - "metadata": {"document_id": "example_doc_id"}, - } - } - ], - } - }, - 1, - "example_doc_id", - ), - ({"hits": {"total": {"value": 0}, "hits": []}}, 0, None), - ], - ) - def test_search_by_full_text(self, search_response, expected_length, expected_doc_id): - self.vector._client.search.return_value = search_response - - hits_by_full_text = self.vector.search_by_full_text(query=get_example_text()) - assert len(hits_by_full_text) == expected_length - if expected_length > 0: - assert hits_by_full_text[0].metadata["document_id"] == expected_doc_id - - def test_search_by_vector(self): - vector = [0.1] * 128 - mock_response = { - "hits": { - "total": {"value": 1}, - "hits": [ - { - "_source": { - Field.CONTENT_KEY: get_example_text(), - Field.METADATA_KEY: {"document_id": self.example_doc_id}, - }, - "_score": 1.0, - } - ], - } - } - self.vector._client.search.return_value = mock_response - - hits_by_vector = self.vector.search_by_vector(query_vector=vector) - - print("Hits by vector:", hits_by_vector) - print("Expected document ID:", self.example_doc_id) - print("Actual document ID:", hits_by_vector[0].metadata["document_id"] if hits_by_vector else "No hits") - - assert len(hits_by_vector) > 0, f"Expected at least one hit, got {len(hits_by_vector)}" - assert hits_by_vector[0].metadata["document_id"] == self.example_doc_id, ( - f"Expected document ID {self.example_doc_id}, got {hits_by_vector[0].metadata['document_id']}" - ) - - def test_get_ids_by_metadata_field(self): - mock_response = {"hits": {"total": {"value": 1}, "hits": [{"_id": "mock_id"}]}} - self.vector._client.search.return_value = mock_response - - doc = Document(page_content="Test content", metadata={"document_id": self.example_doc_id}) - embedding = [0.1] * 128 - - with patch("opensearchpy.helpers.bulk", autospec=True) as mock_bulk: - mock_bulk.return_value = ([], []) - self.vector.add_texts([doc], [embedding]) - - ids = self.vector.get_ids_by_metadata_field(key="document_id", value=self.example_doc_id) - assert len(ids) == 1 - assert ids[0] == "mock_id" - - def test_add_texts(self): - self.vector._client.index.return_value = {"result": "created"} - - doc = Document(page_content="Test content", metadata={"document_id": self.example_doc_id}) - embedding = [0.1] * 128 - - with patch("opensearchpy.helpers.bulk", autospec=True) as mock_bulk: - mock_bulk.return_value = ([], []) - self.vector.add_texts([doc], [embedding]) - - mock_response = {"hits": {"total": {"value": 1}, "hits": [{"_id": "mock_id"}]}} - self.vector._client.search.return_value = mock_response - - ids = self.vector.get_ids_by_metadata_field(key="document_id", value=self.example_doc_id) - assert len(ids) == 1 - assert ids[0] == "mock_id" - - def test_delete_nonexistent_index(self): - """Test deleting a non-existent index.""" - # Create a vector instance with a non-existent collection name - self.vector._client.indices.exists.return_value = False - - # Should not raise an exception - self.vector.delete() - - # Verify that exists was called but delete was not - self.vector._client.indices.exists.assert_called_once_with(index=self.collection_name.lower()) - self.vector._client.indices.delete.assert_not_called() - - def test_delete_existing_index(self): - """Test deleting an existing index.""" - self.vector._client.indices.exists.return_value = True - - self.vector.delete() - - # Verify both exists and delete were called - self.vector._client.indices.exists.assert_called_once_with(index=self.collection_name.lower()) - self.vector._client.indices.delete.assert_called_once_with(index=self.collection_name.lower()) - - -@pytest.mark.usefixtures("setup_mock_redis") -class TestOpenSearchVectorWithRedis: - def setup_method(self): - self.tester = TestOpenSearchVector() - - def test_search_by_full_text(self): - self.tester.setup_method() - search_response = { - "hits": { - "total": {"value": 1}, - "hits": [ - {"_source": {"page_content": get_example_text(), "metadata": {"document_id": "example_doc_id"}}} - ], - } - } - expected_length = 1 - expected_doc_id = "example_doc_id" - self.tester.test_search_by_full_text(search_response, expected_length, expected_doc_id) - - def test_get_ids_by_metadata_field(self): - self.tester.setup_method() - self.tester.test_get_ids_by_metadata_field() - - def test_add_texts(self): - self.tester.setup_method() - self.tester.test_add_texts() - - def test_search_by_vector(self): - self.tester.setup_method() - self.tester.test_search_by_vector() diff --git a/api/tests/integration_tests/vdb/oracle/__init__.py b/api/tests/integration_tests/vdb/oracle/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/pgvecto_rs/__init__.py b/api/tests/integration_tests/vdb/pgvecto_rs/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/pgvector/__init__.py b/api/tests/integration_tests/vdb/pgvector/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/pyvastbase/__init__.py b/api/tests/integration_tests/vdb/pyvastbase/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/qdrant/__init__.py b/api/tests/integration_tests/vdb/qdrant/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/tablestore/__init__.py b/api/tests/integration_tests/vdb/tablestore/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/tcvectordb/__init__.py b/api/tests/integration_tests/vdb/tcvectordb/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/tidb_vector/__init__.py b/api/tests/integration_tests/vdb/tidb_vector/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/upstash/__init__.py b/api/tests/integration_tests/vdb/upstash/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/vikingdb/__init__.py b/api/tests/integration_tests/vdb/vikingdb/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/vdb/weaviate/__init__.py b/api/tests/integration_tests/vdb/weaviate/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/integration_tests/workflow/nodes/__mock/model.py b/api/tests/integration_tests/workflow/nodes/__mock/model.py index c0143faa85..a9a2617bae 100644 --- a/api/tests/integration_tests/workflow/nodes/__mock/model.py +++ b/api/tests/integration_tests/workflow/nodes/__mock/model.py @@ -1,12 +1,11 @@ from unittest.mock import MagicMock -from graphon.model_runtime.entities.model_entities import ModelType - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle from core.entities.provider_entities import CustomConfiguration, CustomProviderConfiguration, SystemConfiguration from core.model_manager import ModelInstance from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory +from graphon.model_runtime.entities.model_entities import ModelType from models.provider import ProviderType diff --git a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_executor.py b/api/tests/integration_tests/workflow/nodes/code_executor/test_code_executor.py deleted file mode 100644 index 487178ff58..0000000000 --- a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_executor.py +++ /dev/null @@ -1,11 +0,0 @@ -import pytest - -from core.helper.code_executor.code_executor import CodeExecutionError, CodeExecutor - -CODE_LANGUAGE = "unsupported_language" - - -def test_unsupported_with_code_template(): - with pytest.raises(CodeExecutionError) as e: - CodeExecutor.execute_workflow_code_template(language=CODE_LANGUAGE, code="", inputs={}) - assert str(e.value) == f"Unsupported language {CODE_LANGUAGE}" diff --git a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_jinja2.py b/api/tests/integration_tests/workflow/nodes/code_executor/test_code_jinja2.py deleted file mode 100644 index c8eb9ec3e4..0000000000 --- a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_jinja2.py +++ /dev/null @@ -1,95 +0,0 @@ -import base64 - -from core.helper.code_executor.code_executor import CodeExecutor, CodeLanguage -from core.helper.code_executor.jinja2.jinja2_transformer import Jinja2TemplateTransformer - -CODE_LANGUAGE = CodeLanguage.JINJA2 - - -def test_jinja2(): - """Test basic Jinja2 template rendering.""" - template = "Hello {{template}}" - # Template must be base64 encoded to match the new safe embedding approach - template_b64 = base64.b64encode(template.encode("utf-8")).decode("utf-8") - inputs = base64.b64encode(b'{"template": "World"}').decode("utf-8") - code = ( - Jinja2TemplateTransformer.get_runner_script() - .replace(Jinja2TemplateTransformer._template_b64_placeholder, template_b64) - .replace(Jinja2TemplateTransformer._inputs_placeholder, inputs) - ) - result = CodeExecutor.execute_code( - language=CODE_LANGUAGE, preload=Jinja2TemplateTransformer.get_preload_script(), code=code - ) - assert result == "<>Hello World<>\n" - - -def test_jinja2_with_code_template(): - """Test template rendering via the high-level workflow API.""" - result = CodeExecutor.execute_workflow_code_template( - language=CODE_LANGUAGE, code="Hello {{template}}", inputs={"template": "World"} - ) - assert result == {"result": "Hello World"} - - -def test_jinja2_get_runner_script(): - """Test that runner script contains required placeholders.""" - runner_script = Jinja2TemplateTransformer.get_runner_script() - assert runner_script.count(Jinja2TemplateTransformer._template_b64_placeholder) == 1 - assert runner_script.count(Jinja2TemplateTransformer._inputs_placeholder) == 1 - assert runner_script.count(Jinja2TemplateTransformer._result_tag) == 2 - - -def test_jinja2_template_with_special_characters(): - """ - Test that templates with special characters (quotes, newlines) render correctly. - This is a regression test for issue #26818 where textarea pre-fill values - containing special characters would break template rendering. - """ - # Template with triple quotes, single quotes, double quotes, and newlines - template = """ - - - -

Status: "{{ status }}"

-
'''code block'''
- -""" - inputs = {"task": {"Task ID": "TASK-123", "Issues": "Line 1\nLine 2\nLine 3"}, "status": "completed"} - - result = CodeExecutor.execute_workflow_code_template(language=CODE_LANGUAGE, code=template, inputs=inputs) - - # Verify the template rendered correctly with all special characters - output = result["result"] - assert 'value="TASK-123"' in output - assert "" in output - assert 'Status: "completed"' in output - assert "'''code block'''" in output - - -def test_jinja2_template_with_html_textarea_prefill(): - """ - Specific test for HTML textarea with Jinja2 variable pre-fill. - Verifies fix for issue #26818. - """ - template = "" - notes_content = "This is a multi-line note.\nWith special chars: 'single' and \"double\" quotes." - inputs = {"notes": notes_content} - - result = CodeExecutor.execute_workflow_code_template(language=CODE_LANGUAGE, code=template, inputs=inputs) - - expected_output = f"" - assert result["result"] == expected_output - - -def test_jinja2_assemble_runner_script_encodes_template(): - """Test that assemble_runner_script properly base64 encodes the template.""" - template = "Hello {{ name }}!" - inputs = {"name": "World"} - - script = Jinja2TemplateTransformer.assemble_runner_script(template, inputs) - - # The template should be base64 encoded in the script - template_b64 = base64.b64encode(template.encode("utf-8")).decode("utf-8") - assert template_b64 in script - # The raw template should NOT appear in the script (it's encoded) - assert "Hello {{ name }}!" not in script diff --git a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_python3.py b/api/tests/integration_tests/workflow/nodes/code_executor/test_code_python3.py deleted file mode 100644 index 25af312afa..0000000000 --- a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_python3.py +++ /dev/null @@ -1,36 +0,0 @@ -from textwrap import dedent - -from core.helper.code_executor.code_executor import CodeExecutor, CodeLanguage -from core.helper.code_executor.python3.python3_code_provider import Python3CodeProvider -from core.helper.code_executor.python3.python3_transformer import Python3TemplateTransformer - -CODE_LANGUAGE = CodeLanguage.PYTHON3 - - -def test_python3_plain(): - code = 'print("Hello World")' - result = CodeExecutor.execute_code(language=CODE_LANGUAGE, preload="", code=code) - assert result == "Hello World\n" - - -def test_python3_json(): - code = dedent(""" - import json - print(json.dumps({'Hello': 'World'})) - """) - result = CodeExecutor.execute_code(language=CODE_LANGUAGE, preload="", code=code) - assert result == '{"Hello": "World"}\n' - - -def test_python3_with_code_template(): - result = CodeExecutor.execute_workflow_code_template( - language=CODE_LANGUAGE, code=Python3CodeProvider.get_default_code(), inputs={"arg1": "Hello", "arg2": "World"} - ) - assert result == {"result": "HelloWorld"} - - -def test_python3_get_runner_script(): - runner_script = Python3TemplateTransformer.get_runner_script() - assert runner_script.count(Python3TemplateTransformer._code_placeholder) == 1 - assert runner_script.count(Python3TemplateTransformer._inputs_placeholder) == 1 - assert runner_script.count(Python3TemplateTransformer._result_tag) == 2 diff --git a/api/tests/integration_tests/workflow/nodes/test_code.py b/api/tests/integration_tests/workflow/nodes/test_code.py index 4f41396c22..e3476c292b 100644 --- a/api/tests/integration_tests/workflow/nodes/test_code.py +++ b/api/tests/integration_tests/workflow/nodes/test_code.py @@ -2,17 +2,17 @@ import time import uuid import pytest + +from configs import dify_config +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables from graphon.enums import WorkflowNodeExecutionStatus from graphon.graph import Graph from graphon.node_events import NodeRunResult from graphon.nodes.code.code_node import CodeNode from graphon.nodes.code.limits import CodeNodeLimits from graphon.runtime import GraphRuntimeState, VariablePool - -from configs import dify_config -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params pytest_plugins = ("tests.integration_tests.workflow.nodes.__mock.code_executor",) diff --git a/api/tests/integration_tests/workflow/nodes/test_http.py b/api/tests/integration_tests/workflow/nodes/test_http.py index b1f937e738..aa6cf1e021 100644 --- a/api/tests/integration_tests/workflow/nodes/test_http.py +++ b/api/tests/integration_tests/workflow/nodes/test_http.py @@ -3,11 +3,6 @@ import uuid from urllib.parse import urlencode import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file.file_manager import file_manager -from graphon.graph import Graph -from graphon.nodes.http_request import HttpRequestNode, HttpRequestNodeConfig -from graphon.runtime import GraphRuntimeState, VariablePool from configs import dify_config from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom @@ -16,6 +11,11 @@ from core.tools.tool_file_manager import ToolFileManager from core.workflow.node_factory import DifyNodeFactory from core.workflow.node_runtime import DifyFileReferenceFactory from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file.file_manager import file_manager +from graphon.graph import Graph +from graphon.nodes.http_request import HttpRequestNode, HttpRequestNodeConfig +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params pytest_plugins = ("tests.integration_tests.workflow.nodes.__mock.http",) @@ -192,6 +192,7 @@ def test_custom_authorization_header(setup_http_mock): @pytest.mark.parametrize("setup_http_mock", [["none"]], indirect=True) def test_custom_auth_with_empty_api_key_raises_error(setup_http_mock): """Test: In custom authentication mode, when the api_key is empty, AuthorizationConfigError should be raised.""" + from core.workflow.system_variables import build_system_variables from graphon.enums import BuiltinNodeTypes from graphon.nodes.http_request.entities import ( HttpRequestNodeAuthorization, @@ -202,8 +203,6 @@ def test_custom_auth_with_empty_api_key_raises_error(setup_http_mock): from graphon.nodes.http_request.executor import Executor from graphon.runtime import VariablePool - from core.workflow.system_variables import build_system_variables - # Create variable pool variable_pool = VariablePool( system_variables=build_system_variables(user_id="test", files=[]), diff --git a/api/tests/integration_tests/workflow/nodes/test_llm.py b/api/tests/integration_tests/workflow/nodes/test_llm.py index f0f3fcead1..fa5d63cfbf 100644 --- a/api/tests/integration_tests/workflow/nodes/test_llm.py +++ b/api/tests/integration_tests/workflow/nodes/test_llm.py @@ -4,6 +4,11 @@ import uuid from collections.abc import Generator from unittest.mock import MagicMock, patch +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.llm_generator.output_parser.structured_output import _parse_structured_output +from core.model_manager import ModelInstance +from core.workflow.system_variables import build_system_variables +from extensions.ext_database import db from graphon.enums import WorkflowNodeExecutionStatus from graphon.node_events import StreamCompletedEvent from graphon.nodes.llm.file_saver import LLMFileSaver @@ -12,12 +17,6 @@ from graphon.nodes.llm.protocols import CredentialsProvider, ModelFactory from graphon.nodes.llm.runtime_protocols import PromptMessageSerializerProtocol from graphon.nodes.protocols import HttpClientProtocol from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.llm_generator.output_parser.structured_output import _parse_structured_output -from core.model_manager import ModelInstance -from core.workflow.system_variables import build_system_variables -from extensions.ext_database import db from tests.workflow_test_utils import build_test_graph_init_params """FOR MOCK FIXTURES, DO NOT REMOVE""" diff --git a/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py b/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py index fe512c2585..52886855b8 100644 --- a/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py +++ b/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py @@ -3,17 +3,16 @@ import time import uuid from unittest.mock import MagicMock -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.model_runtime.entities import AssistantPromptMessage, UserPromptMessage -from graphon.nodes.llm.protocols import CredentialsProvider, ModelFactory -from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.model_manager import ModelInstance from core.workflow.node_runtime import DifyPromptMessageSerializer from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.model_runtime.entities import AssistantPromptMessage, UserPromptMessage +from graphon.nodes.llm.protocols import CredentialsProvider, ModelFactory +from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode +from graphon.runtime import GraphRuntimeState, VariablePool from tests.integration_tests.workflow.nodes.__mock.model import get_mocked_fetch_model_instance from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/integration_tests/workflow/nodes/test_template_transform.py b/api/tests/integration_tests/workflow/nodes/test_template_transform.py index 2d728569be..9e3e1a47e3 100644 --- a/api/tests/integration_tests/workflow/nodes/test_template_transform.py +++ b/api/tests/integration_tests/workflow/nodes/test_template_transform.py @@ -1,15 +1,14 @@ import time import uuid +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables from graphon.enums import WorkflowNodeExecutionStatus from graphon.graph import Graph from graphon.nodes.template_transform.template_transform_node import TemplateTransformNode from graphon.runtime import GraphRuntimeState, VariablePool from graphon.template_rendering import TemplateRenderError - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/integration_tests/workflow/nodes/test_tool.py b/api/tests/integration_tests/workflow/nodes/test_tool.py index 750ced7075..f9ec51ee10 100644 --- a/api/tests/integration_tests/workflow/nodes/test_tool.py +++ b/api/tests/integration_tests/workflow/nodes/test_tool.py @@ -2,18 +2,17 @@ import time import uuid from unittest.mock import MagicMock, patch -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.graph import Graph -from graphon.node_events import StreamCompletedEvent -from graphon.nodes.protocols import ToolFileManagerProtocol -from graphon.nodes.tool.tool_node import ToolNode -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.tools.utils.configuration import ToolParameterConfigurationManager from core.workflow.node_factory import DifyNodeFactory from core.workflow.node_runtime import DifyToolNodeRuntime from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.graph import Graph +from graphon.node_events import StreamCompletedEvent +from graphon.nodes.protocols import ToolFileManagerProtocol +from graphon.nodes.tool.tool_node import ToolNode +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/test_containers_integration_tests/conftest.py b/api/tests/test_containers_integration_tests/conftest.py index ef74893f07..66a25e5daf 100644 --- a/api/tests/test_containers_integration_tests/conftest.py +++ b/api/tests/test_containers_integration_tests/conftest.py @@ -369,7 +369,7 @@ def _create_app_with_containers() -> Flask: # Create and configure the Flask application logger.info("Initializing Flask application...") - app = create_app() + sio_app, app = create_app() logger.info("Flask application created successfully") # Initialize database schema diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py index c3a861c3e1..15dec06311 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py @@ -313,6 +313,21 @@ class TestSiteEndpoints: method = _unwrap(api.post) site = MagicMock() + site.app_id = "app-1" + site.code = "test-code" + site.title = "My Site" + site.icon = None + site.icon_background = None + site.description = "Test site" + site.default_language = "en-US" + site.customize_domain = None + site.copyright = None + site.privacy_policy = None + site.custom_disclaimer = "" + site.customize_token_strategy = "not_allow" + site.prompt_public = False + site.show_workflow_steps = True + site.use_icon_as_answer_icon = False monkeypatch.setattr( site_module.db, "session", @@ -328,13 +343,29 @@ class TestSiteEndpoints: with app.test_request_context("/", json={"title": "My Site"}): result = method(app_model=SimpleNamespace(id="app-1")) - assert result is site + assert isinstance(result, dict) + assert result["title"] == "My Site" def test_app_site_access_token_reset(self, app, monkeypatch): api = site_module.AppSiteAccessTokenReset() method = _unwrap(api.post) site = MagicMock() + site.app_id = "app-1" + site.code = "old-code" + site.title = "My Site" + site.icon = None + site.icon_background = None + site.description = None + site.default_language = "en-US" + site.customize_domain = None + site.copyright = None + site.privacy_policy = None + site.custom_disclaimer = "" + site.customize_token_strategy = "not_allow" + site.prompt_public = False + site.show_workflow_steps = True + site.use_icon_as_answer_icon = False monkeypatch.setattr( site_module.db, "session", @@ -351,7 +382,8 @@ class TestSiteEndpoints: with app.test_request_context("/"): result = method(app_model=SimpleNamespace(id="app-1")) - assert result is site + assert isinstance(result, dict) + assert result["access_token"] == "code" class TestWorkflowEndpoints: @@ -400,7 +432,7 @@ class TestWorkflowAppLogEndpoints: monkeypatch.setattr(workflow_app_log_module, "sessionmaker", DummySessionMaker) def fake_get_paginate(self, **_kwargs): - return {"items": [], "total": 0} + return {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} monkeypatch.setattr( workflow_app_log_module.WorkflowAppService, @@ -411,7 +443,7 @@ class TestWorkflowAppLogEndpoints: with app.test_request_context("/?page=1&limit=20"): result = method(app_model=SimpleNamespace(id="app-1")) - assert result == {"items": [], "total": 0} + assert result == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} class TestWorkflowDraftVariableEndpoints: @@ -576,7 +608,8 @@ class TestWorkflowTriggerEndpoints: with app.test_request_context("/?node_id=node-1"): result = method(app_model=SimpleNamespace(id="app-1")) - assert result is trigger + assert isinstance(result, dict) + assert {"id", "webhook_id", "webhook_url", "webhook_debug_url", "node_id", "created_at"} <= set(result.keys()) class TestWrapsEndpoints: diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py index d8c6821f8d..25d19cf35a 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py @@ -96,6 +96,56 @@ class TestAppImportApi: assert status == 200 assert response["status"] == ImportStatus.COMPLETED + def test_import_post_commits_session_on_success(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + api = app_import_module.AppImportApi() + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.COMPLETED, app_id="app-123"), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + fake_session = MagicMock() + fake_session.__enter__.return_value = fake_session + fake_session.__exit__.return_value = None + monkeypatch.setattr(app_import_module, "Session", lambda *_args, **_kwargs: fake_session) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + fake_session.commit.assert_called_once_with() + fake_session.rollback.assert_not_called() + assert status == 200 + assert response["status"] == ImportStatus.COMPLETED + + def test_import_post_rolls_back_session_on_failure(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + api = app_import_module.AppImportApi() + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.FAILED, app_id=None), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + fake_session = MagicMock() + fake_session.__enter__.return_value = fake_session + fake_session.__exit__.return_value = None + monkeypatch.setattr(app_import_module, "Session", lambda *_args, **_kwargs: fake_session) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + fake_session.rollback.assert_called_once_with() + fake_session.commit.assert_not_called() + assert status == 400 + assert response["status"] == ImportStatus.FAILED + class TestAppImportConfirmApi: @pytest.fixture diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py index 5cc458fe2e..5a22f81a69 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py @@ -4,15 +4,15 @@ import json import uuid from flask.testing import FlaskClient -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import Session from configs import dify_config from constants import HEADER_NAME_CSRF_TOKEN +from graphon.enums import WorkflowExecutionStatus from libs.datetime_utils import naive_utc_now from libs.token import _real_cookie_name, generate_csrf_token from models import Account, DifySetup, Tenant, TenantAccountJoin -from models.account import AccountStatus, TenantAccountRole +from models.account import AccountStatus, TenantAccountRole, TenantStatus from models.enums import ConversationFromSource, CreatorUserRole from models.model import App, AppMode, Conversation, Message from models.workflow import WorkflowRun @@ -30,7 +30,7 @@ def _create_account_and_tenant(db_session: Session) -> tuple[Account, Tenant]: db_session.add(account) db_session.commit() - tenant = Tenant(name="Test Tenant", status="normal") + tenant = Tenant(name="Test Tenant", status=TenantStatus.NORMAL) db_session.add(tenant) db_session.commit() diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_conversation_read_timestamp.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_conversation_read_timestamp.py new file mode 100644 index 0000000000..fad0b8b10e --- /dev/null +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_conversation_read_timestamp.py @@ -0,0 +1,73 @@ +from datetime import datetime +from unittest.mock import patch + +import pytest +from sqlalchemy.orm import Session +from werkzeug.exceptions import NotFound + +from controllers.console.app.conversation import _get_conversation +from models.enums import ConversationFromSource +from models.model import AppMode, Conversation +from tests.test_containers_integration_tests.controllers.console.helpers import ( + create_console_account_and_tenant, + create_console_app, +) + + +def test_get_conversation_mark_read_keeps_updated_at_unchanged( + db_session_with_containers: Session, +): + account, tenant = create_console_account_and_tenant(db_session_with_containers) + app = create_console_app(db_session_with_containers, tenant.id, account.id, AppMode.CHAT) + + original_updated_at = datetime(2026, 2, 8, 0, 0, 0) + conversation = Conversation( + app_id=app.id, + name="read timestamp test", + inputs={}, + status="normal", + mode=AppMode.CHAT, + from_source=ConversationFromSource.CONSOLE, + from_account_id=account.id, + updated_at=original_updated_at, + ) + db_session_with_containers.add(conversation) + db_session_with_containers.commit() + + read_at = datetime(2026, 2, 9, 0, 0, 0) + + with ( + patch( + "controllers.console.app.conversation.current_account_with_tenant", + return_value=(account, tenant.id), + autospec=True, + ), + patch( + "controllers.console.app.conversation.naive_utc_now", + return_value=read_at, + autospec=True, + ), + ): + loaded = _get_conversation(app, conversation.id) + + db_session_with_containers.refresh(conversation) + + assert loaded.id == conversation.id + assert conversation.read_at == read_at + assert conversation.read_account_id == account.id + assert conversation.updated_at == original_updated_at + + +def test_get_conversation_raises_not_found_for_missing_conversation( + db_session_with_containers: Session, +): + account, tenant = create_console_account_and_tenant(db_session_with_containers) + app = create_console_app(db_session_with_containers, tenant.id, account.id, AppMode.CHAT) + + with patch( + "controllers.console.app.conversation.current_account_with_tenant", + return_value=(account, tenant.id), + autospec=True, + ): + with pytest.raises(NotFound): + _get_conversation(app, "00000000-0000-0000-0000-000000000000") diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_message.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_message.py index 6b51ec98bc..eff6dd789d 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_message.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_message.py @@ -148,14 +148,18 @@ def test_chat_message_list_success( account.id, created_at_offset_seconds=1, ) + # Capture IDs before the HTTP request detaches ORM instances from the session + app_id = app.id + conversation_id = conversation.id + second_id = second.id with patch( "controllers.console.app.message.attach_message_extra_contents", side_effect=_attach_message_extra_contents, ): response = test_client_with_containers.get( - f"/console/api/apps/{app.id}/chat-messages", - query_string={"conversation_id": conversation.id, "limit": 1}, + f"/console/api/apps/{app_id}/chat-messages", + query_string={"conversation_id": conversation_id, "limit": 1}, headers=authenticate_console_client(test_client_with_containers, account), ) @@ -165,7 +169,7 @@ def test_chat_message_list_success( assert payload["limit"] == 1 assert payload["has_more"] is True assert len(payload["data"]) == 1 - assert payload["data"][0]["id"] == second.id + assert payload["data"][0]["id"] == second_id def test_message_feedback_not_found( diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py index 8ddf867370..290be87697 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py @@ -3,12 +3,12 @@ import uuid from flask.testing import FlaskClient -from graphon.variables.segments import StringSegment from sqlalchemy import select from sqlalchemy.orm import Session from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, ENVIRONMENT_VARIABLE_NODE_ID from factories.variable_factory import segment_to_variable +from graphon.variables.segments import StringSegment from models import Workflow from models.model import AppMode from models.workflow import WorkflowDraftVariable diff --git a/api/tests/test_containers_integration_tests/controllers/console/auth/test_email_register.py b/api/tests/test_containers_integration_tests/controllers/console/auth/test_email_register.py index 879c337319..320da85b60 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/auth/test_email_register.py +++ b/api/tests/test_containers_integration_tests/controllers/console/auth/test_email_register.py @@ -158,7 +158,10 @@ def test_get_account_by_email_with_case_fallback_falls_back_to_lowercase(): second_result.scalar_one_or_none.return_value = expected_account mock_session.execute.side_effect = [first_result, second_result] - result = AccountService.get_account_by_email_with_case_fallback("Case@Test.com", session=mock_session) + with patch("services.account_service.session_factory") as mock_factory: + mock_factory.create_session.return_value.__enter__ = MagicMock(return_value=mock_session) + mock_factory.create_session.return_value.__exit__ = MagicMock(return_value=False) + result = AccountService.get_account_by_email_with_case_fallback("Case@Test.com") assert result is expected_account assert mock_session.execute.call_count == 2 diff --git a/api/tests/test_containers_integration_tests/controllers/console/auth/test_forgot_password.py b/api/tests/test_containers_integration_tests/controllers/console/auth/test_forgot_password.py index 7b7393dade..d2703ed5cc 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/auth/test_forgot_password.py +++ b/api/tests/test_containers_integration_tests/controllers/console/auth/test_forgot_password.py @@ -113,12 +113,14 @@ class TestForgotPasswordCheckApi: class TestForgotPasswordResetApi: @patch("controllers.console.auth.forgot_password.ForgotPasswordResetApi._update_existing_account") @patch("controllers.console.auth.forgot_password.AccountService.get_account_by_email_with_case_fallback") + @patch("controllers.console.auth.forgot_password.db") @patch("controllers.console.auth.forgot_password.AccountService.revoke_reset_password_token") @patch("controllers.console.auth.forgot_password.AccountService.get_reset_password_data") def test_reset_fetches_account_with_original_email( self, mock_get_reset_data, mock_revoke_token, + mock_db, mock_get_account, mock_update_account, app, @@ -126,6 +128,7 @@ class TestForgotPasswordResetApi: mock_get_reset_data.return_value = {"phase": "reset", "email": "User@Example.com"} mock_account = MagicMock() mock_get_account.return_value = mock_account + mock_db.session.merge.return_value = mock_account wraps_features = SimpleNamespace(enable_email_password_login=True) with ( @@ -161,7 +164,10 @@ def test_get_account_by_email_with_case_fallback_falls_back_to_lowercase(): second_result.scalar_one_or_none.return_value = expected_account mock_session.execute.side_effect = [first_result, second_result] - result = AccountService.get_account_by_email_with_case_fallback("Mixed@Test.com", session=mock_session) + with patch("services.account_service.session_factory") as mock_factory: + mock_factory.create_session.return_value.__enter__ = MagicMock(return_value=mock_session) + mock_factory.create_session.return_value.__exit__ = MagicMock(return_value=False) + result = AccountService.get_account_by_email_with_case_fallback("Mixed@Test.com") assert result is expected_account assert mock_session.execute.call_count == 2 diff --git a/api/tests/test_containers_integration_tests/controllers/console/auth/test_oauth.py b/api/tests/test_containers_integration_tests/controllers/console/auth/test_oauth.py index a2f1328579..1eabb45422 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/auth/test_oauth.py +++ b/api/tests/test_containers_integration_tests/controllers/console/auth/test_oauth.py @@ -437,7 +437,10 @@ class TestAccountGeneration: second_result.scalar_one_or_none.return_value = expected_account mock_session.execute.side_effect = [first_result, second_result] - result = AccountService.get_account_by_email_with_case_fallback("Case@Test.com", session=mock_session) + with patch("services.account_service.session_factory") as mock_factory: + mock_factory.create_session.return_value.__enter__ = MagicMock(return_value=mock_session) + mock_factory.create_session.return_value.__exit__ = MagicMock(return_value=False) + result = AccountService.get_account_by_email_with_case_fallback("Case@Test.com") assert result is expected_account assert mock_session.execute.call_count == 2 diff --git a/api/tests/test_containers_integration_tests/controllers/console/auth/test_password_reset.py b/api/tests/test_containers_integration_tests/controllers/console/auth/test_password_reset.py index 8f9db287e3..50249bcd74 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/auth/test_password_reset.py +++ b/api/tests/test_containers_integration_tests/controllers/console/auth/test_password_reset.py @@ -335,10 +335,12 @@ class TestForgotPasswordResetApi: @patch("controllers.console.auth.forgot_password.AccountService.get_reset_password_data") @patch("controllers.console.auth.forgot_password.AccountService.revoke_reset_password_token") @patch("controllers.console.auth.forgot_password.AccountService.get_account_by_email_with_case_fallback") + @patch("controllers.console.auth.forgot_password.db") @patch("controllers.console.auth.forgot_password.TenantService.get_join_tenants") def test_reset_password_success( self, mock_get_tenants, + mock_db, mock_get_account, mock_revoke_token, mock_get_data, @@ -356,6 +358,7 @@ class TestForgotPasswordResetApi: # Arrange mock_get_data.return_value = {"email": "test@example.com", "phase": "reset"} mock_get_account.return_value = mock_account + mock_db.session.merge.return_value = mock_account mock_get_tenants.return_value = [MagicMock()] # Act diff --git a/api/tests/test_containers_integration_tests/controllers/console/helpers.py b/api/tests/test_containers_integration_tests/controllers/console/helpers.py index 9e2084f393..a8ecf94da1 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/helpers.py +++ b/api/tests/test_containers_integration_tests/controllers/console/helpers.py @@ -11,7 +11,7 @@ from constants import HEADER_NAME_CSRF_TOKEN from libs.datetime_utils import naive_utc_now from libs.token import _real_cookie_name, generate_csrf_token from models import Account, DifySetup, Tenant, TenantAccountJoin -from models.account import AccountStatus, TenantAccountRole +from models.account import AccountStatus, TenantAccountRole, TenantStatus from models.model import App, AppMode from services.account_service import AccountService @@ -37,7 +37,7 @@ def create_console_account_and_tenant(db_session: Session) -> tuple[Account, Ten db_session.add(account) db_session.commit() - tenant = Tenant(name="Test Tenant", status="normal") + tenant = Tenant(name="Test Tenant", status=TenantStatus.NORMAL) db_session.add(tenant) db_session.commit() diff --git a/api/tests/test_containers_integration_tests/controllers/service_api/test_site.py b/api/tests/test_containers_integration_tests/controllers/service_api/test_site.py new file mode 100644 index 0000000000..4e884626a7 --- /dev/null +++ b/api/tests/test_containers_integration_tests/controllers/service_api/test_site.py @@ -0,0 +1,110 @@ +""" +Testcontainers integration tests for Service API Site controller. +""" + +from __future__ import annotations + +import pytest +from flask import Flask +from sqlalchemy.orm import Session +from werkzeug.exceptions import Forbidden + +from controllers.service_api.app.site import AppSiteApi +from models.account import Tenant, TenantStatus +from models.model import App, AppMode, Site + + +@pytest.fixture +def app(flask_app_with_containers) -> Flask: + return flask_app_with_containers + + +def _unwrap(method): + fn = method + while hasattr(fn, "__wrapped__"): + fn = fn.__wrapped__ + return fn + + +def _create_tenant(db_session: Session, *, status: TenantStatus = TenantStatus.NORMAL) -> Tenant: + tenant = Tenant(name="service-api-site-tenant", status=status) + db_session.add(tenant) + db_session.commit() + return tenant + + +def _create_app(db_session: Session, tenant_id: str) -> App: + app_model = App( + tenant_id=tenant_id, + mode=AppMode.CHAT, + name="service-api-site-app", + enable_site=True, + enable_api=True, + status="normal", + ) + db_session.add(app_model) + db_session.commit() + return app_model + + +def _create_site(db_session: Session, app_id: str) -> Site: + site = Site( + app_id=app_id, + title="Service API Site", + icon_type="emoji", + icon="robot", + icon_background="#ffffff", + description="Service API test site", + default_language="en-US", + prompt_public=True, + show_workflow_steps=True, + customize_token_strategy="not_allow", + use_icon_as_answer_icon=False, + chat_color_theme="light", + chat_color_theme_inverted=False, + ) + db_session.add(site) + db_session.commit() + return site + + +class TestAppSiteApi: + def test_get_site_success(self, app: Flask, db_session_with_containers: Session) -> None: + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) + + with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test-token"}): + api = AppSiteApi() + response = _unwrap(api.get)(api, app_model=app_model) + + assert response["title"] == "Service API Site" + assert response["icon"] == "robot" + assert response["description"] == "Service API test site" + + def test_get_site_not_found(self, app: Flask, db_session_with_containers: Session) -> None: + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + + with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test-token"}): + api = AppSiteApi() + with pytest.raises(Forbidden): + _unwrap(api.get)(api, app_model=app_model) + + def test_get_site_tenant_archived(self, app: Flask, db_session_with_containers: Session) -> None: + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) + + archived_tenant = db_session_with_containers.get(Tenant, tenant.id) + assert archived_tenant is not None + archived_tenant.status = TenantStatus.ARCHIVE + db_session_with_containers.commit() + + app_model = db_session_with_containers.get(App, app_model.id) + assert app_model is not None + + with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test-token"}): + api = AppSiteApi() + with pytest.raises(Forbidden): + _unwrap(api.get)(api, app_model=app_model) diff --git a/api/tests/unit_tests/controllers/web/test_site.py b/api/tests/test_containers_integration_tests/controllers/web/test_site.py similarity index 51% rename from api/tests/unit_tests/controllers/web/test_site.py rename to api/tests/test_containers_integration_tests/controllers/web/test_site.py index 6e9d754c43..9adb26ff3d 100644 --- a/api/tests/unit_tests/controllers/web/test_site.py +++ b/api/tests/test_containers_integration_tests/controllers/web/test_site.py @@ -1,28 +1,48 @@ -"""Unit tests for controllers.web.site endpoints.""" +"""Testcontainers integration tests for controllers.web.site endpoints.""" from __future__ import annotations from types import SimpleNamespace -from unittest.mock import MagicMock, patch +from unittest.mock import patch import pytest from flask import Flask +from sqlalchemy.orm import Session from werkzeug.exceptions import Forbidden from controllers.web.site import AppSiteApi, AppSiteInfo +from models import Tenant, TenantStatus +from models.model import App, AppMode, CustomizeTokenStrategy, Site -def _tenant(*, status: str = "normal") -> SimpleNamespace: - return SimpleNamespace( - id="tenant-1", - status=status, - plan="basic", - custom_config_dict={"remove_webapp_brand": False, "replace_webapp_logo": False}, +@pytest.fixture +def app(flask_app_with_containers) -> Flask: + return flask_app_with_containers + + +def _create_tenant(db_session: Session, *, status: TenantStatus = TenantStatus.NORMAL) -> Tenant: + tenant = Tenant(name="test-tenant", status=status) + db_session.add(tenant) + db_session.commit() + return tenant + + +def _create_app(db_session: Session, tenant_id: str, *, enable_site: bool = True) -> App: + app_model = App( + tenant_id=tenant_id, + mode=AppMode.CHAT, + name="test-app", + enable_site=enable_site, + enable_api=True, ) + db_session.add(app_model) + db_session.commit() + return app_model -def _site() -> SimpleNamespace: - return SimpleNamespace( +def _create_site(db_session: Session, app_id: str) -> Site: + site = Site( + app_id=app_id, title="Site", icon_type="emoji", icon="robot", @@ -31,77 +51,64 @@ def _site() -> SimpleNamespace: default_language="en", chat_color_theme="light", chat_color_theme_inverted=False, - copyright=None, - privacy_policy=None, - custom_disclaimer=None, + customize_token_strategy=CustomizeTokenStrategy.NOT_ALLOW, + code=f"code-{app_id[-6:]}", prompt_public=False, show_workflow_steps=True, use_icon_as_answer_icon=False, ) + db_session.add(site) + db_session.commit() + return site -# --------------------------------------------------------------------------- -# AppSiteApi -# --------------------------------------------------------------------------- class TestAppSiteApi: @patch("controllers.web.site.FeatureService.get_features") - @patch("controllers.web.site.db") - def test_happy_path(self, mock_db: MagicMock, mock_features: MagicMock, app: Flask) -> None: + def test_happy_path(self, mock_features, app: Flask, db_session_with_containers: Session) -> None: app.config["RESTX_MASK_HEADER"] = "X-Fields" - mock_features.return_value = SimpleNamespace(can_replace_logo=False) - site_obj = _site() - mock_db.session.scalar.return_value = site_obj - tenant = _tenant() - app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1", tenant=tenant, enable_site=True) + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) end_user = SimpleNamespace(id="eu-1") + mock_features.return_value = SimpleNamespace(can_replace_logo=False) with app.test_request_context("/site"): result = AppSiteApi().get(app_model, end_user) - # marshal_with serializes AppSiteInfo to a dict - assert result["app_id"] == "app-1" + assert result["app_id"] == app_model.id assert result["plan"] == "basic" assert result["enable_site"] is True - @patch("controllers.web.site.db") - def test_missing_site_raises_forbidden(self, mock_db: MagicMock, app: Flask) -> None: + def test_missing_site_raises_forbidden(self, app: Flask, db_session_with_containers: Session) -> None: app.config["RESTX_MASK_HEADER"] = "X-Fields" - mock_db.session.scalar.return_value = None - tenant = _tenant() - app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1", tenant=tenant, enable_site=True) + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) end_user = SimpleNamespace(id="eu-1") with app.test_request_context("/site"): with pytest.raises(Forbidden): AppSiteApi().get(app_model, end_user) - @patch("controllers.web.site.db") - def test_archived_tenant_raises_forbidden(self, mock_db: MagicMock, app: Flask) -> None: + @patch("controllers.web.site.FeatureService.get_features") + def test_archived_tenant_raises_forbidden( + self, mock_features, app: Flask, db_session_with_containers: Session + ) -> None: app.config["RESTX_MASK_HEADER"] = "X-Fields" - from models.account import TenantStatus - - mock_db.session.scalar.return_value = _site() - tenant = SimpleNamespace( - id="tenant-1", - status=TenantStatus.ARCHIVE, - plan="basic", - custom_config_dict={}, - ) - app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1", tenant=tenant) + tenant = _create_tenant(db_session_with_containers, status=TenantStatus.ARCHIVE) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) end_user = SimpleNamespace(id="eu-1") + mock_features.return_value = SimpleNamespace(can_replace_logo=False) with app.test_request_context("/site"): with pytest.raises(Forbidden): AppSiteApi().get(app_model, end_user) -# --------------------------------------------------------------------------- -# AppSiteInfo -# --------------------------------------------------------------------------- class TestAppSiteInfo: def test_basic_fields(self) -> None: - tenant = _tenant() - site_obj = _site() + tenant = SimpleNamespace(id="tenant-1", plan="basic", custom_config_dict={}) + site_obj = SimpleNamespace() info = AppSiteInfo(tenant, SimpleNamespace(id="app-1", enable_site=True), site_obj, "eu-1", False) assert info.app_id == "app-1" @@ -118,7 +125,7 @@ class TestAppSiteInfo: plan="pro", custom_config_dict={"remove_webapp_brand": True, "replace_webapp_logo": True}, ) - site_obj = _site() + site_obj = SimpleNamespace() info = AppSiteInfo(tenant, SimpleNamespace(id="app-1", enable_site=True), site_obj, "eu-1", True) assert info.can_replace_logo is True diff --git a/api/tests/test_containers_integration_tests/controllers/web/test_web_forgot_password.py b/api/tests/test_containers_integration_tests/controllers/web/test_web_forgot_password.py index 04ad143103..f14b2c0ae5 100644 --- a/api/tests/test_containers_integration_tests/controllers/web/test_web_forgot_password.py +++ b/api/tests/test_containers_integration_tests/controllers/web/test_web_forgot_password.py @@ -37,10 +37,8 @@ class TestForgotPasswordSendEmailApi: @patch("controllers.web.forgot_password.AccountService.get_account_by_email_with_case_fallback") @patch("controllers.web.forgot_password.AccountService.is_email_send_ip_limit", return_value=False) @patch("controllers.web.forgot_password.extract_remote_ip", return_value="127.0.0.1") - @patch("controllers.web.forgot_password.sessionmaker") def test_should_normalize_email_before_sending( self, - mock_session_cls, mock_extract_ip, mock_rate_limit, mock_get_account, @@ -50,19 +48,16 @@ class TestForgotPasswordSendEmailApi: mock_account = MagicMock() mock_get_account.return_value = mock_account mock_send_mail.return_value = "token-123" - mock_session = MagicMock() - mock_session_cls.return_value.begin.return_value.__enter__.return_value = mock_session - with patch("controllers.web.forgot_password.db", SimpleNamespace(engine="engine")): - with app.test_request_context( - "/web/forgot-password", - method="POST", - json={"email": "User@Example.com", "language": "zh-Hans"}, - ): - response = ForgotPasswordSendEmailApi().post() + with app.test_request_context( + "/web/forgot-password", + method="POST", + json={"email": "User@Example.com", "language": "zh-Hans"}, + ): + response = ForgotPasswordSendEmailApi().post() assert response == {"result": "success", "data": "token-123"} - mock_get_account.assert_called_once_with("User@Example.com", session=mock_session) + mock_get_account.assert_called_once_with("User@Example.com") mock_send_mail.assert_called_once_with(account=mock_account, email="user@example.com", language="zh-Hans") mock_extract_ip.assert_called_once() mock_rate_limit.assert_called_once_with("127.0.0.1") @@ -153,14 +148,14 @@ class TestForgotPasswordResetApi: @patch("controllers.web.forgot_password.ForgotPasswordResetApi._update_existing_account") @patch("controllers.web.forgot_password.AccountService.get_account_by_email_with_case_fallback") - @patch("controllers.web.forgot_password.sessionmaker") + @patch("controllers.web.forgot_password.db") @patch("controllers.web.forgot_password.AccountService.revoke_reset_password_token") @patch("controllers.web.forgot_password.AccountService.get_reset_password_data") def test_should_fetch_account_with_fallback( self, mock_get_reset_data, mock_revoke_token, - mock_session_cls, + mock_db, mock_get_account, mock_update_account, app, @@ -168,29 +163,27 @@ class TestForgotPasswordResetApi: mock_get_reset_data.return_value = {"phase": "reset", "email": "User@Example.com", "code": "1234"} mock_account = MagicMock() mock_get_account.return_value = mock_account - mock_session = MagicMock() - mock_session_cls.return_value.begin.return_value.__enter__.return_value = mock_session + mock_db.session.merge.return_value = mock_account - with patch("controllers.web.forgot_password.db", SimpleNamespace(engine="engine")): - with app.test_request_context( - "/web/forgot-password/resets", - method="POST", - json={ - "token": "token-123", - "new_password": "ValidPass123!", - "password_confirm": "ValidPass123!", - }, - ): - response = ForgotPasswordResetApi().post() + with app.test_request_context( + "/web/forgot-password/resets", + method="POST", + json={ + "token": "token-123", + "new_password": "ValidPass123!", + "password_confirm": "ValidPass123!", + }, + ): + response = ForgotPasswordResetApi().post() assert response == {"result": "success"} - mock_get_account.assert_called_once_with("User@Example.com", session=mock_session) + mock_get_account.assert_called_once_with("User@Example.com") mock_update_account.assert_called_once() mock_revoke_token.assert_called_once_with("token-123") @patch("controllers.web.forgot_password.hash_password", return_value=b"hashed-value") @patch("controllers.web.forgot_password.secrets.token_bytes", return_value=b"0123456789abcdef") - @patch("controllers.web.forgot_password.sessionmaker") + @patch("controllers.web.forgot_password.db") @patch("controllers.web.forgot_password.AccountService.revoke_reset_password_token") @patch("controllers.web.forgot_password.AccountService.get_reset_password_data") @patch("controllers.web.forgot_password.AccountService.get_account_by_email_with_case_fallback") @@ -199,7 +192,7 @@ class TestForgotPasswordResetApi: mock_get_account, mock_get_reset_data, mock_revoke_token, - mock_session_cls, + mock_db, mock_token_bytes, mock_hash_password, app, @@ -207,20 +200,18 @@ class TestForgotPasswordResetApi: mock_get_reset_data.return_value = {"phase": "reset", "email": "user@example.com"} account = MagicMock() mock_get_account.return_value = account - mock_session = MagicMock() - mock_session_cls.return_value.begin.return_value.__enter__.return_value = mock_session + mock_db.session.merge.return_value = account - with patch("controllers.web.forgot_password.db", SimpleNamespace(engine="engine")): - with app.test_request_context( - "/web/forgot-password/resets", - method="POST", - json={ - "token": "reset-token", - "new_password": "StrongPass123!", - "password_confirm": "StrongPass123!", - }, - ): - response = ForgotPasswordResetApi().post() + with app.test_request_context( + "/web/forgot-password/resets", + method="POST", + json={ + "token": "reset-token", + "new_password": "StrongPass123!", + "password_confirm": "StrongPass123!", + }, + ): + response = ForgotPasswordResetApi().post() assert response == {"result": "success"} mock_get_reset_data.assert_called_once_with("reset-token") diff --git a/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py b/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py index 2b4c1b59ab..c342e8994b 100644 --- a/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py +++ b/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py @@ -22,13 +22,6 @@ import uuid from time import time import pytest -from graphon.entities.pause_reason import SchedulingPause -from graphon.enums import WorkflowExecutionStatus -from graphon.graph_engine.entities.commands import GraphEngineCommand -from graphon.graph_engine.layers.base import GraphEngineLayerNotInitializedError -from graphon.graph_events import GraphRunPausedEvent -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.runtime import GraphRuntimeState, ReadOnlyGraphRuntimeState, ReadOnlyGraphRuntimeStateWrapper, VariablePool from sqlalchemy import Engine, delete, select from sqlalchemy.orm import Session @@ -40,6 +33,13 @@ from core.app.layers.pause_state_persist_layer import ( ) from core.workflow.system_variables import build_system_variables from extensions.ext_storage import storage +from graphon.entities.pause_reason import SchedulingPause +from graphon.enums import WorkflowExecutionStatus +from graphon.graph_engine.entities.commands import GraphEngineCommand +from graphon.graph_engine.layers.base import GraphEngineLayerNotInitializedError +from graphon.graph_events import GraphRunPausedEvent +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.runtime import GraphRuntimeState, ReadOnlyGraphRuntimeState, ReadOnlyGraphRuntimeStateWrapper, VariablePool from libs.datetime_utils import naive_utc_now from models import Account from models import WorkflowPause as WorkflowPauseModel @@ -88,11 +88,11 @@ class TestPauseStatePersistenceLayerTestContainers: def setup_test_data(self, db_session_with_containers, file_service, workflow_run_service): """Set up test data for each test method using TestContainers.""" # Create test tenant and account - from models.account import Tenant, TenantAccountJoin, TenantAccountRole + from models.account import AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus tenant = Tenant( name="Test Tenant", - status="normal", + status=TenantStatus.NORMAL, ) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -101,7 +101,7 @@ class TestPauseStatePersistenceLayerTestContainers: email="test@example.com", name="Test User", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() @@ -557,11 +557,9 @@ class TestPauseStatePersistenceLayerTestContainers: self.session.refresh(self.test_workflow_run) assert self.test_workflow_run.status == WorkflowExecutionStatus.RUNNING - pause_states = ( - self.session.query(WorkflowPauseModel) - .filter(WorkflowPauseModel.workflow_run_id == self.test_workflow_run_id) - .all() - ) + pause_states = self.session.scalars( + select(WorkflowPauseModel).where(WorkflowPauseModel.workflow_run_id == self.test_workflow_run_id) + ).all() assert len(pause_states) == 0 def test_layer_requires_initialization(self, db_session_with_containers): diff --git a/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py b/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py index 13caad799e..14d5740072 100644 --- a/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py +++ b/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py @@ -4,7 +4,6 @@ from __future__ import annotations from uuid import uuid4 -from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData, UserAction from sqlalchemy import Engine, select from sqlalchemy.orm import Session @@ -18,7 +17,15 @@ from core.workflow.human_input_compat import ( MemberRecipient, WebAppDeliveryMethod, ) -from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData, UserAction +from models.account import ( + Account, + AccountStatus, + Tenant, + TenantAccountJoin, + TenantAccountRole, + TenantStatus, +) from models.human_input import ( EmailExternalRecipientPayload, EmailMemberRecipientPayload, @@ -29,7 +36,7 @@ from models.human_input import ( def _create_tenant_with_members(session: Session, member_emails: list[str]) -> tuple[Tenant, list[Account]]: - tenant = Tenant(name="Test Tenant", status="normal") + tenant = Tenant(name="Test Tenant", status=TenantStatus.NORMAL) session.add(tenant) session.flush() @@ -39,7 +46,7 @@ def _create_tenant_with_members(session: Session, member_emails: list[str]) -> t email=email, name=f"Member {index}", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) session.add(account) session.flush() diff --git a/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py b/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py index 0a9b476afc..da4f8847d6 100644 --- a/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py +++ b/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py @@ -4,6 +4,17 @@ from datetime import timedelta from unittest.mock import MagicMock import pytest +from sqlalchemy import delete, select +from sqlalchemy.orm import Session + +from core.app.app_config.entities import WorkflowUIBasedAppConfig +from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity +from core.app.workflow.layers import PersistenceWorkflowInfo, WorkflowPersistenceLayer +from core.repositories.human_input_repository import HumanInputFormEntity, HumanInputFormRepository +from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository +from core.repositories.sqlalchemy_workflow_node_execution_repository import SQLAlchemyWorkflowNodeExecutionRepository +from core.workflow.node_runtime import DifyHumanInputNodeRuntime +from core.workflow.system_variables import build_system_variables from graphon.enums import WorkflowType from graphon.graph import Graph from graphon.graph_engine import GraphEngine @@ -16,20 +27,9 @@ from graphon.nodes.human_input.human_input_node import HumanInputNode from graphon.nodes.start.entities import StartNodeData from graphon.nodes.start.start_node import StartNode from graphon.runtime import GraphRuntimeState, VariablePool -from sqlalchemy import delete, select -from sqlalchemy.orm import Session - -from core.app.app_config.entities import WorkflowUIBasedAppConfig -from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity -from core.app.workflow.layers import PersistenceWorkflowInfo, WorkflowPersistenceLayer -from core.repositories.human_input_repository import HumanInputFormEntity, HumanInputFormRepository -from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository -from core.repositories.sqlalchemy_workflow_node_execution_repository import SQLAlchemyWorkflowNodeExecutionRepository -from core.workflow.node_runtime import DifyHumanInputNodeRuntime -from core.workflow.system_variables import build_system_variables from libs.datetime_utils import naive_utc_now from models import Account -from models.account import Tenant, TenantAccountJoin, TenantAccountRole +from models.account import AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.model import App, AppMode, IconType from models.workflow import Workflow, WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom, WorkflowRun @@ -175,7 +175,7 @@ class TestHumanInputResumeNodeExecutionIntegration: def setup_test_data(self, db_session_with_containers: Session): tenant = Tenant( name="Test Tenant", - status="normal", + status=TenantStatus.NORMAL, ) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -184,7 +184,7 @@ class TestHumanInputResumeNodeExecutionIntegration: email="test@example.com", name="Test User", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py b/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py index cc72dc1cf3..2e207ddc67 100644 --- a/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py +++ b/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py @@ -4,13 +4,13 @@ from unittest.mock import patch from uuid import uuid4 import pytest -from graphon.file import File, FileTransferMethod, FileType from sqlalchemy.orm import Session from core.app.file_access import DatabaseFileAccessController from extensions.ext_database import db from extensions.storage.storage_type import StorageType from factories.file_factory import StorageKeyLoader +from graphon.file import File, FileTransferMethod, FileType from models import ToolFile, UploadFile from models.enums import CreatorUserRole diff --git a/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py b/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py index b745aed141..2fd289dfbc 100644 --- a/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py +++ b/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py @@ -6,7 +6,6 @@ from decimal import Decimal from uuid import uuid4 from graphon.nodes.human_input.entities import FormDefinition, UserAction - from libs.datetime_utils import naive_utc_now from models.account import Account, Tenant, TenantAccountJoin from models.enums import ConversationFromSource, InvokeFrom diff --git a/api/tests/test_containers_integration_tests/models/test_account.py b/api/tests/test_containers_integration_tests/models/test_account.py index 078dc0e8de..6fd6716cbb 100644 --- a/api/tests/test_containers_integration_tests/models/test_account.py +++ b/api/tests/test_containers_integration_tests/models/test_account.py @@ -1,79 +1,202 @@ -# import secrets +""" +Integration tests for Account and Tenant model methods that interact with the database. -# import pytest -# from sqlalchemy import select -# from sqlalchemy.orm import Session -# from sqlalchemy.orm.exc import DetachedInstanceError +Migrated from unit_tests/models/test_account_models.py, replacing +@patch("models.account.db") mock patches with real PostgreSQL operations. -# from libs.datetime_utils import naive_utc_now -# from models.account import Account, Tenant, TenantAccountJoin +Covers: +- Account.current_tenant setter (sets _current_tenant and role from TenantAccountJoin) +- Account.set_tenant_id (resolves tenant + role from real join row) +- Account.get_by_openid (AccountIntegrate lookup then Account fetch) +- Tenant.get_accounts (returns accounts linked via TenantAccountJoin) +""" + +from collections.abc import Generator +from uuid import uuid4 + +import pytest +from sqlalchemy import delete +from sqlalchemy.orm import Session + +from models.account import Account, AccountIntegrate, Tenant, TenantAccountJoin, TenantAccountRole -# @pytest.fixture -# def session(db_session_with_containers): -# with Session(db_session_with_containers.get_bind()) as session: -# yield session +def _cleanup_tracked_rows(db_session: Session, tracked: list) -> None: + """Delete rows tracked during the test so committed state does not leak into the DB. + + Rolls back any pending (uncommitted) session state first, then issues DELETE + statements by primary key for each tracked entity (in reverse creation order) + and commits. This cleans up rows created via either flush() or commit(). + """ + db_session.rollback() + for entity in reversed(tracked): + db_session.execute(delete(type(entity)).where(type(entity).id == entity.id)) + db_session.commit() -# @pytest.fixture -# def account(session): -# account = Account( -# name="test account", -# email=f"test_{secrets.token_hex(8)}@example.com", -# ) -# session.add(account) -# session.commit() -# return account +def _build_tenant() -> Tenant: + return Tenant(name=f"Tenant {uuid4()}") -# @pytest.fixture -# def tenant(session): -# tenant = Tenant(name="test tenant") -# session.add(tenant) -# session.commit() -# return tenant +def _build_account(email_prefix: str = "account") -> Account: + return Account( + name=f"Account {uuid4()}", + email=f"{email_prefix}_{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) -# @pytest.fixture -# def tenant_account_join(session, account, tenant): -# tenant_join = TenantAccountJoin(account_id=account.id, tenant_id=tenant.id) -# session.add(tenant_join) -# session.commit() -# yield tenant_join -# session.delete(tenant_join) -# session.commit() +class _DBTrackingTestBase: + """Base class providing a tracker list and shared row factories for account/tenant tests.""" + + _tracked: list + + @pytest.fixture(autouse=True) + def _setup_cleanup(self, db_session_with_containers: Session) -> Generator[None, None, None]: + self._tracked = [] + yield + _cleanup_tracked_rows(db_session_with_containers, self._tracked) + + def _create_tenant(self, db_session: Session) -> Tenant: + tenant = _build_tenant() + db_session.add(tenant) + db_session.flush() + self._tracked.append(tenant) + return tenant + + def _create_account(self, db_session: Session, email_prefix: str = "account") -> Account: + account = _build_account(email_prefix) + db_session.add(account) + db_session.flush() + self._tracked.append(account) + return account + + def _create_join( + self, db_session: Session, tenant_id: str, account_id: str, role: TenantAccountRole, current: bool = True + ) -> TenantAccountJoin: + join = TenantAccountJoin(tenant_id=tenant_id, account_id=account_id, role=role, current=current) + db_session.add(join) + db_session.flush() + self._tracked.append(join) + return join -# class TestAccountTenant: -# def test_set_current_tenant_should_reload_tenant( -# self, -# db_session_with_containers, -# account, -# tenant, -# tenant_account_join, -# ): -# with Session(db_session_with_containers.get_bind(), expire_on_commit=True) as session: -# scoped_tenant = session.scalars(select(Tenant).where(Tenant.id == tenant.id)).one() -# account.current_tenant = scoped_tenant -# scoped_tenant.created_at = naive_utc_now() -# # session.commit() +class TestAccountCurrentTenantSetter(_DBTrackingTestBase): + """Integration tests for Account.current_tenant property setter.""" -# # Ensure the tenant used in assignment is detached. -# with pytest.raises(DetachedInstanceError): -# _ = scoped_tenant.name + def test_current_tenant_property_returns_cached_tenant(self, db_session_with_containers: Session) -> None: + """current_tenant getter returns the in-memory _current_tenant without DB access.""" + account = self._create_account(db_session_with_containers) + tenant = self._create_tenant(db_session_with_containers) + account._current_tenant = tenant -# assert account._current_tenant.id == tenant.id -# assert account._current_tenant.id == tenant.id + assert account.current_tenant is tenant -# def test_set_tenant_id_should_load_tenant_as_not_expire( -# self, -# flask_app_with_containers, -# account, -# tenant, -# tenant_account_join, -# ): -# with flask_app_with_containers.test_request_context(): -# account.set_tenant_id(tenant.id) + def test_current_tenant_setter_sets_tenant_and_role_when_join_exists( + self, db_session_with_containers: Session + ) -> None: + """Setting current_tenant loads the join row and assigns role when relationship exists.""" + tenant = self._create_tenant(db_session_with_containers) + account = self._create_account(db_session_with_containers) + self._create_join(db_session_with_containers, tenant.id, account.id, TenantAccountRole.OWNER) + db_session_with_containers.commit() -# assert account._current_tenant.id == tenant.id -# assert account._current_tenant.id == tenant.id + account.current_tenant = tenant + + assert account._current_tenant is not None + assert account._current_tenant.id == tenant.id + assert account.role == TenantAccountRole.OWNER + + def test_current_tenant_setter_sets_none_when_no_join_exists(self, db_session_with_containers: Session) -> None: + """Setting current_tenant results in _current_tenant=None when no join row exists.""" + tenant = self._create_tenant(db_session_with_containers) + account = self._create_account(db_session_with_containers) + db_session_with_containers.commit() + + account.current_tenant = tenant + + assert account._current_tenant is None + + +class TestAccountSetTenantId(_DBTrackingTestBase): + """Integration tests for Account.set_tenant_id method.""" + + def test_set_tenant_id_sets_tenant_and_role_when_relationship_exists( + self, db_session_with_containers: Session + ) -> None: + """set_tenant_id loads the tenant and assigns role when a join row exists.""" + tenant = self._create_tenant(db_session_with_containers) + account = self._create_account(db_session_with_containers) + self._create_join(db_session_with_containers, tenant.id, account.id, TenantAccountRole.ADMIN) + db_session_with_containers.commit() + + account.set_tenant_id(tenant.id) + + assert account._current_tenant is not None + assert account._current_tenant.id == tenant.id + assert account.role == TenantAccountRole.ADMIN + + def test_set_tenant_id_does_not_set_tenant_when_no_relationship_exists( + self, db_session_with_containers: Session + ) -> None: + """set_tenant_id does nothing when no join row matches the tenant.""" + tenant = self._create_tenant(db_session_with_containers) + account = self._create_account(db_session_with_containers) + db_session_with_containers.commit() + + account.set_tenant_id(tenant.id) + + assert account._current_tenant is None + + +class TestAccountGetByOpenId(_DBTrackingTestBase): + """Integration tests for Account.get_by_openid class method.""" + + def test_get_by_openid_returns_account_when_integrate_exists(self, db_session_with_containers: Session) -> None: + """get_by_openid returns the Account when a matching AccountIntegrate row exists.""" + account = self._create_account(db_session_with_containers, email_prefix="openid") + provider = "google" + open_id = f"google_{uuid4()}" + + integrate = AccountIntegrate( + account_id=account.id, + provider=provider, + open_id=open_id, + encrypted_token="token", + ) + db_session_with_containers.add(integrate) + db_session_with_containers.flush() + self._tracked.append(integrate) + + result = Account.get_by_openid(provider, open_id) + + assert result is not None + assert result.id == account.id + + def test_get_by_openid_returns_none_when_no_integrate_exists(self, db_session_with_containers: Session) -> None: + """get_by_openid returns None when no AccountIntegrate row matches.""" + result = Account.get_by_openid("github", f"github_{uuid4()}") + + assert result is None + + +class TestTenantGetAccounts(_DBTrackingTestBase): + """Integration tests for Tenant.get_accounts method.""" + + def test_get_accounts_returns_linked_accounts(self, db_session_with_containers: Session) -> None: + """get_accounts returns all accounts linked to the tenant via TenantAccountJoin.""" + tenant = self._create_tenant(db_session_with_containers) + account1 = self._create_account(db_session_with_containers, email_prefix="tenant_member") + account2 = self._create_account(db_session_with_containers, email_prefix="tenant_member") + self._create_join(db_session_with_containers, tenant.id, account1.id, TenantAccountRole.OWNER, current=False) + self._create_join(db_session_with_containers, tenant.id, account2.id, TenantAccountRole.NORMAL, current=False) + + accounts = tenant.get_accounts() + + assert len(accounts) == 2 + account_ids = {a.id for a in accounts} + assert account1.id in account_ids + assert account2.id in account_ids diff --git a/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py b/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py new file mode 100644 index 0000000000..f10f519e25 --- /dev/null +++ b/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py @@ -0,0 +1,149 @@ +""" +Integration tests for Conversation.inputs and Message.inputs tenant resolution. + +Migrated from unit_tests/models/test_model.py, replacing db.session.scalar monkeypatching +with a real App in PostgreSQL so the _resolve_app_tenant_id lookup executes against the DB. +""" + +from collections.abc import Generator +from unittest.mock import patch +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import Session + +from core.workflow.file_reference import build_file_reference +from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod +from models.model import App, AppMode, Conversation, Message + + +def _build_local_file_mapping(record_id: str, *, tenant_id: str | None = None) -> dict: + mapping: dict = { + "dify_model_identity": FILE_MODEL_IDENTITY, + "transfer_method": FileTransferMethod.LOCAL_FILE, + "reference": build_file_reference(record_id=record_id), + "type": "document", + "filename": "example.txt", + "extension": ".txt", + "mime_type": "text/plain", + "size": 1, + } + if tenant_id is not None: + mapping["tenant_id"] = tenant_id + return mapping + + +class TestConversationMessageInputsTenantResolution: + """Integration tests for Conversation/Message.inputs tenant resolution via real DB lookup.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + """Automatically rollback session changes after each test.""" + yield + db_session_with_containers.rollback() + + def _create_app(self, db_session: Session) -> App: + tenant_id = str(uuid4()) + app = App( + tenant_id=tenant_id, + name=f"App {uuid4()}", + mode=AppMode.CHAT, + enable_site=False, + enable_api=True, + is_demo=False, + is_public=False, + is_universal=False, + created_by=str(uuid4()), + updated_by=str(uuid4()), + ) + db_session.add(app) + db_session.flush() + return app + + @pytest.mark.parametrize("owner_cls", [Conversation, Message]) + def test_inputs_resolves_tenant_via_db_for_local_file( + self, + db_session_with_containers: Session, + owner_cls: type, + ) -> None: + """Inputs resolves tenant_id from real App row when file mapping has no tenant_id.""" + app = self._create_app(db_session_with_containers) + build_calls: list[tuple[dict, str]] = [] + + def fake_build_from_mapping( + *, mapping, tenant_id, config=None, strict_type_validation=False, access_controller + ): + build_calls.append((dict(mapping), tenant_id)) + return {"tenant_id": tenant_id, "upload_file_id": mapping.get("upload_file_id")} + + with patch("factories.file_factory.build_from_mapping", fake_build_from_mapping): + owner = owner_cls(app_id=app.id) + owner.inputs = {"file": _build_local_file_mapping("upload-1")} + + restored_inputs = owner.inputs + + # The tenant_id should come from the real App row in the DB + assert restored_inputs["file"] == {"tenant_id": app.tenant_id, "upload_file_id": "upload-1"} + assert len(build_calls) == 1 + assert build_calls[0][1] == app.tenant_id + + @pytest.mark.parametrize("owner_cls", [Conversation, Message]) + def test_inputs_uses_serialized_tenant_id_skipping_db_lookup( + self, + db_session_with_containers: Session, + owner_cls: type, + ) -> None: + """Inputs uses tenant_id from the file mapping payload without hitting the DB.""" + app = self._create_app(db_session_with_containers) + payload_tenant_id = "tenant-from-payload" + build_calls: list[tuple[dict, str]] = [] + + def fake_build_from_mapping( + *, mapping, tenant_id, config=None, strict_type_validation=False, access_controller + ): + build_calls.append((dict(mapping), tenant_id)) + return {"tenant_id": tenant_id, "upload_file_id": mapping.get("upload_file_id")} + + with patch("factories.file_factory.build_from_mapping", fake_build_from_mapping): + owner = owner_cls(app_id=app.id) + owner.inputs = {"file": _build_local_file_mapping("upload-1", tenant_id=payload_tenant_id)} + + restored_inputs = owner.inputs + + assert restored_inputs["file"] == {"tenant_id": payload_tenant_id, "upload_file_id": "upload-1"} + assert len(build_calls) == 1 + assert build_calls[0][1] == payload_tenant_id + + @pytest.mark.parametrize("owner_cls", [Conversation, Message]) + def test_inputs_resolves_tenant_for_file_list( + self, + db_session_with_containers: Session, + owner_cls: type, + ) -> None: + """Inputs resolves tenant_id for a list of file mappings.""" + app = self._create_app(db_session_with_containers) + build_calls: list[tuple[dict, str]] = [] + + def fake_build_from_mapping( + *, mapping, tenant_id, config=None, strict_type_validation=False, access_controller + ): + build_calls.append((dict(mapping), tenant_id)) + return {"tenant_id": tenant_id, "upload_file_id": mapping.get("upload_file_id")} + + with patch("factories.file_factory.build_from_mapping", fake_build_from_mapping): + owner = owner_cls(app_id=app.id) + owner.inputs = { + "files": [ + _build_local_file_mapping("upload-1"), + _build_local_file_mapping("upload-2"), + ] + } + + restored_inputs = owner.inputs + + assert len(build_calls) == 2 + assert all(call[1] == app.tenant_id for call in build_calls) + assert restored_inputs["files"] == [ + {"tenant_id": app.tenant_id, "upload_file_id": "upload-1"}, + {"tenant_id": app.tenant_id, "upload_file_id": "upload-2"}, + ] diff --git a/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py b/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py new file mode 100644 index 0000000000..6352f815df --- /dev/null +++ b/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py @@ -0,0 +1,314 @@ +""" +Integration tests for Conversation.status_count and Site.generate_code model properties. + +Migrated from unit_tests/models/test_app_models.py TestConversationStatusCount and +test_site_generate_code, replacing db.session.scalars mocks with real PostgreSQL queries. +""" + +from collections.abc import Generator +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import Session + +from graphon.enums import WorkflowExecutionStatus +from models.enums import ConversationFromSource, InvokeFrom +from models.model import App, AppMode, Conversation, Message, Site +from models.workflow import Workflow, WorkflowRun, WorkflowRunTriggeredFrom, WorkflowType + + +class TestConversationStatusCount: + """Integration tests for Conversation.status_count property.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + """Automatically rollback session changes after each test.""" + yield + db_session_with_containers.rollback() + + def _create_app(self, db_session: Session, tenant_id: str, created_by: str) -> App: + app = App( + tenant_id=tenant_id, + name=f"App {uuid4()}", + mode=AppMode.ADVANCED_CHAT, + enable_site=False, + enable_api=True, + is_demo=False, + is_public=False, + is_universal=False, + created_by=created_by, + updated_by=created_by, + ) + db_session.add(app) + db_session.flush() + return app + + def _create_conversation(self, db_session: Session, app: App) -> Conversation: + conversation = Conversation( + app_id=app.id, + mode=app.mode, + name=f"Conversation {uuid4()}", + summary="", + inputs={}, + introduction="", + system_instruction="", + system_instruction_tokens=0, + status="normal", + invoke_from=InvokeFrom.WEB_APP, + from_source=ConversationFromSource.API, + dialogue_count=0, + is_deleted=False, + ) + conversation.inputs = {} + db_session.add(conversation) + db_session.flush() + return conversation + + def _create_workflow(self, db_session: Session, app: App, created_by: str) -> Workflow: + workflow = Workflow( + tenant_id=app.tenant_id, + app_id=app.id, + type=WorkflowType.CHAT, + version="draft", + graph="{}", + created_by=created_by, + ) + workflow._features = "{}" + db_session.add(workflow) + db_session.flush() + return workflow + + def _create_workflow_run( + self, db_session: Session, app: App, workflow: Workflow, status: WorkflowExecutionStatus, created_by: str + ) -> WorkflowRun: + run = WorkflowRun( + tenant_id=app.tenant_id, + app_id=app.id, + workflow_id=workflow.id, + type=WorkflowType.CHAT, + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + version="draft", + status=status, + created_by_role="account", + created_by=created_by, + ) + db_session.add(run) + db_session.flush() + return run + + def _create_message( + self, db_session: Session, app: App, conversation: Conversation, workflow_run_id: str | None = None + ) -> Message: + message = Message( + app_id=app.id, + conversation_id=conversation.id, + _inputs={}, + query="Test query", + message={"role": "user", "content": "Test query"}, + answer="Test answer", + model_provider="openai", + model_id="gpt-4", + message_tokens=10, + message_unit_price=0, + answer_tokens=10, + answer_unit_price=0, + total_price=0, + currency="USD", + from_source=ConversationFromSource.API, + invoke_from=InvokeFrom.WEB_APP, + workflow_run_id=workflow_run_id, + ) + db_session.add(message) + db_session.flush() + return message + + def test_status_count_returns_none_when_no_messages(self, db_session_with_containers: Session) -> None: + """status_count returns None when conversation has no messages with workflow_run_id.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + + result = conversation.status_count + + assert result is None + + def test_status_count_returns_none_when_messages_have_no_workflow_run_id( + self, db_session_with_containers: Session + ) -> None: + """status_count returns None when messages exist but none have workflow_run_id.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + self._create_message(db_session_with_containers, app, conversation, workflow_run_id=None) + + result = conversation.status_count + + assert result is None + + def test_status_count_counts_succeeded_workflow_run(self, db_session_with_containers: Session) -> None: + """status_count correctly counts succeeded workflow runs.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + workflow = self._create_workflow(db_session_with_containers, app, created_by) + run = self._create_workflow_run( + db_session_with_containers, app, workflow, WorkflowExecutionStatus.SUCCEEDED, created_by + ) + self._create_message(db_session_with_containers, app, conversation, workflow_run_id=run.id) + + result = conversation.status_count + + assert result is not None + assert result["success"] == 1 + assert result["failed"] == 0 + assert result["partial_success"] == 0 + assert result["paused"] == 0 + + def test_status_count_counts_failed_workflow_run(self, db_session_with_containers: Session) -> None: + """status_count correctly counts failed workflow runs.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + workflow = self._create_workflow(db_session_with_containers, app, created_by) + run = self._create_workflow_run( + db_session_with_containers, app, workflow, WorkflowExecutionStatus.FAILED, created_by + ) + self._create_message(db_session_with_containers, app, conversation, workflow_run_id=run.id) + + result = conversation.status_count + + assert result is not None + assert result["success"] == 0 + assert result["failed"] == 1 + assert result["partial_success"] == 0 + assert result["paused"] == 0 + + def test_status_count_counts_paused_workflow_run(self, db_session_with_containers: Session) -> None: + """status_count correctly counts paused workflow runs.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + workflow = self._create_workflow(db_session_with_containers, app, created_by) + run = self._create_workflow_run( + db_session_with_containers, app, workflow, WorkflowExecutionStatus.PAUSED, created_by + ) + self._create_message(db_session_with_containers, app, conversation, workflow_run_id=run.id) + + result = conversation.status_count + + assert result is not None + assert result["success"] == 0 + assert result["failed"] == 0 + assert result["partial_success"] == 0 + assert result["paused"] == 1 + + def test_status_count_multiple_statuses(self, db_session_with_containers: Session) -> None: + """status_count counts multiple workflow runs with different statuses.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + workflow = self._create_workflow(db_session_with_containers, app, created_by) + + for status in [ + WorkflowExecutionStatus.SUCCEEDED, + WorkflowExecutionStatus.FAILED, + WorkflowExecutionStatus.PARTIAL_SUCCEEDED, + WorkflowExecutionStatus.PAUSED, + ]: + run = self._create_workflow_run(db_session_with_containers, app, workflow, status, created_by) + self._create_message(db_session_with_containers, app, conversation, workflow_run_id=run.id) + + result = conversation.status_count + + assert result is not None + assert result["success"] == 1 + assert result["failed"] == 1 + assert result["partial_success"] == 1 + assert result["paused"] == 1 + + def test_status_count_filters_workflow_runs_by_app_id(self, db_session_with_containers: Session) -> None: + """status_count excludes workflow runs belonging to a different app.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + app = self._create_app(db_session_with_containers, tenant_id, created_by) + other_app = self._create_app(db_session_with_containers, tenant_id, created_by) + conversation = self._create_conversation(db_session_with_containers, app) + workflow = self._create_workflow(db_session_with_containers, other_app, created_by) + + # Workflow run belongs to other_app, not app + other_run = self._create_workflow_run( + db_session_with_containers, other_app, workflow, WorkflowExecutionStatus.SUCCEEDED, created_by + ) + # Message references that run but is in a conversation under app + self._create_message(db_session_with_containers, app, conversation, workflow_run_id=other_run.id) + + result = conversation.status_count + + # The run should be excluded because app_id filter doesn't match + assert result is not None + assert result["success"] == 0 + + +class TestSiteGenerateCode: + """Integration tests for Site.generate_code static method.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + """Automatically rollback session changes after each test.""" + yield + db_session_with_containers.rollback() + + def test_generate_code_returns_string_of_correct_length(self, db_session_with_containers: Session) -> None: + """Site.generate_code returns a code string of the requested length.""" + code = Site.generate_code(8) + + assert isinstance(code, str) + assert len(code) == 8 + + def test_generate_code_avoids_duplicates(self, db_session_with_containers: Session) -> None: + """Site.generate_code returns a code not already in use.""" + tenant_id = str(uuid4()) + app = App( + tenant_id=tenant_id, + name="Test App", + mode=AppMode.CHAT, + enable_site=True, + enable_api=False, + is_demo=False, + is_public=False, + is_universal=False, + created_by=str(uuid4()), + updated_by=str(uuid4()), + ) + db_session_with_containers.add(app) + db_session_with_containers.flush() + + site = Site( + app_id=app.id, + title="Test Site", + default_language="en-US", + customize_token_strategy="not_allow", + ) + # Set an explicit code so generate_code must avoid it + site.code = "AAAAAAAA" + db_session_with_containers.add(site) + db_session_with_containers.flush() + + code = Site.generate_code(8) + + assert isinstance(code, str) + assert len(code) == 8 + assert code != site.code diff --git a/api/tests/test_containers_integration_tests/models/test_types_enum_text.py b/api/tests/test_containers_integration_tests/models/test_types_enum_text.py index 8aec6b6acc..b325c97f7d 100644 --- a/api/tests/test_containers_integration_tests/models/test_types_enum_text.py +++ b/api/tests/test_containers_integration_tests/models/test_types_enum_text.py @@ -4,13 +4,13 @@ from typing import Any, NamedTuple import pytest import sqlalchemy as sa -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import exc as sa_exc -from sqlalchemy import insert +from sqlalchemy import insert, select from sqlalchemy.engine import Connection, Engine from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column from sqlalchemy.sql.sqltypes import VARCHAR +from graphon.model_runtime.entities.model_entities import ModelType from models.types import EnumText _USER_TABLE = "enum_text_users" @@ -137,12 +137,12 @@ class TestEnumText: session.commit() with Session(engine_with_containers) as session: - user = session.query(_User).where(_User.id == admin_user_id).first() + user = session.scalar(select(_User).where(_User.id == admin_user_id).limit(1)) assert user.user_type == _UserType.admin assert user.user_type_nullable is None with Session(engine_with_containers) as session: - user = session.query(_User).where(_User.id == normal_user_id).first() + user = session.scalar(select(_User).where(_User.id == normal_user_id).limit(1)) assert user.user_type == _UserType.normal assert user.user_type_nullable == _UserType.normal @@ -206,7 +206,7 @@ class TestEnumText: with pytest.raises(ValueError) as exc: with Session(engine_with_containers) as session: - _user = session.query(_User).where(_User.id == 1).first() + _user = session.scalar(select(_User).where(_User.id == 1).limit(1)) assert str(exc.value) == "'invalid' is not a valid _UserType" @@ -222,7 +222,7 @@ class TestEnumText: session.commit() with Session(engine_with_containers) as session: - records = session.query(_LegacyModelTypeRecord).order_by(_LegacyModelTypeRecord.id).all() + records = session.scalars(select(_LegacyModelTypeRecord).order_by(_LegacyModelTypeRecord.id)).all() assert [record.model_type for record in records] == [ ModelType.LLM, diff --git a/api/tests/test_containers_integration_tests/models/test_workflow_node_execution_model.py b/api/tests/test_containers_integration_tests/models/test_workflow_node_execution_model.py new file mode 100644 index 0000000000..14c2263110 --- /dev/null +++ b/api/tests/test_containers_integration_tests/models/test_workflow_node_execution_model.py @@ -0,0 +1,170 @@ +""" +Integration tests for WorkflowNodeExecutionModel.created_by_account and .created_by_end_user. + +Migrated from unit_tests/models/test_workflow_trigger_log.py, replacing +monkeypatch.setattr(db.session, "scalar", ...) with real Account/EndUser rows +persisted in PostgreSQL so the db.session.get() call executes against the DB. +""" + +from collections.abc import Generator +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import Session + +from models.account import Account +from models.enums import CreatorUserRole +from models.model import App, AppMode, EndUser +from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom + + +class TestWorkflowNodeExecutionModelCreatedBy: + """Integration tests for WorkflowNodeExecutionModel creator lookup properties.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + """Automatically rollback session changes after each test.""" + yield + db_session_with_containers.rollback() + + def _create_account(self, db_session: Session) -> Account: + account = Account( + name="Test Account", + email=f"test_{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) + db_session.add(account) + db_session.flush() + return account + + def _create_end_user(self, db_session: Session, tenant_id: str, app_id: str) -> EndUser: + end_user = EndUser( + tenant_id=tenant_id, + app_id=app_id, + type="service_api", + external_user_id=f"ext-{uuid4()}", + name="End User", + session_id=f"session-{uuid4()}", + ) + end_user.is_anonymous = False + db_session.add(end_user) + db_session.flush() + return end_user + + def _create_app(self, db_session: Session, tenant_id: str, created_by: str) -> App: + app = App( + tenant_id=tenant_id, + name=f"App {uuid4()}", + mode=AppMode.WORKFLOW, + enable_site=False, + enable_api=True, + is_demo=False, + is_public=False, + is_universal=False, + created_by=created_by, + updated_by=created_by, + ) + db_session.add(app) + db_session.flush() + return app + + def _make_execution( + self, tenant_id: str, app_id: str, created_by_role: str, created_by: str + ) -> WorkflowNodeExecutionModel: + return WorkflowNodeExecutionModel( + tenant_id=tenant_id, + app_id=app_id, + workflow_id=str(uuid4()), + triggered_from=WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN, + workflow_run_id=None, + index=1, + predecessor_node_id=None, + node_execution_id=None, + node_id="n1", + node_type="start", + title="Start", + inputs=None, + process_data=None, + outputs=None, + status="succeeded", + error=None, + elapsed_time=0.0, + execution_metadata=None, + created_by_role=created_by_role, + created_by=created_by, + ) + + def test_created_by_account_returns_account_when_role_is_account(self, db_session_with_containers: Session) -> None: + """created_by_account returns the Account row when role is ACCOUNT.""" + account = self._create_account(db_session_with_containers) + app = self._create_app(db_session_with_containers, str(uuid4()), account.id) + + execution = self._make_execution( + tenant_id=app.tenant_id, + app_id=app.id, + created_by_role=CreatorUserRole.ACCOUNT.value, + created_by=account.id, + ) + + result = execution.created_by_account + + assert result is not None + assert result.id == account.id + + def test_created_by_account_returns_none_when_role_is_end_user(self, db_session_with_containers: Session) -> None: + """created_by_account returns None when role is END_USER, even if an Account exists.""" + account = self._create_account(db_session_with_containers) + app = self._create_app(db_session_with_containers, str(uuid4()), account.id) + + execution = self._make_execution( + tenant_id=app.tenant_id, + app_id=app.id, + created_by_role=CreatorUserRole.END_USER.value, + created_by=account.id, + ) + + result = execution.created_by_account + + assert result is None + + def test_created_by_end_user_returns_end_user_when_role_is_end_user( + self, db_session_with_containers: Session + ) -> None: + """created_by_end_user returns the EndUser row when role is END_USER.""" + account = self._create_account(db_session_with_containers) + tenant_id = str(uuid4()) + app = self._create_app(db_session_with_containers, tenant_id, account.id) + end_user = self._create_end_user(db_session_with_containers, tenant_id, app.id) + + execution = self._make_execution( + tenant_id=tenant_id, + app_id=app.id, + created_by_role=CreatorUserRole.END_USER.value, + created_by=end_user.id, + ) + + result = execution.created_by_end_user + + assert result is not None + assert result.id == end_user.id + + def test_created_by_end_user_returns_none_when_role_is_account(self, db_session_with_containers: Session) -> None: + """created_by_end_user returns None when role is ACCOUNT, even if an EndUser exists.""" + account = self._create_account(db_session_with_containers) + tenant_id = str(uuid4()) + app = self._create_app(db_session_with_containers, tenant_id, account.id) + end_user = self._create_end_user(db_session_with_containers, tenant_id, app.id) + + execution = self._make_execution( + tenant_id=tenant_id, + app_id=app.id, + created_by_role=CreatorUserRole.ACCOUNT.value, + created_by=end_user.id, + ) + + result = execution.created_by_end_user + + assert result is None diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py index a68b3a08c7..641399c7f9 100644 --- a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py @@ -5,10 +5,10 @@ from __future__ import annotations from datetime import timedelta from uuid import uuid4 -from graphon.enums import WorkflowNodeExecutionStatus from sqlalchemy import Engine, delete from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowNodeExecutionStatus from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole from models.workflow import WorkflowNodeExecutionModel diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py index d28cfda159..aebe87839c 100644 --- a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py @@ -8,15 +8,15 @@ from unittest.mock import Mock from uuid import uuid4 import pytest +from sqlalchemy import Engine, delete, select +from sqlalchemy.orm import Session, sessionmaker + +from extensions.ext_storage import storage from graphon.entities import WorkflowExecution from graphon.entities.pause_reason import HumanInputRequired, PauseReasonType from graphon.enums import WorkflowExecutionStatus from graphon.nodes.human_input.entities import FormDefinition, FormInput, UserAction from graphon.nodes.human_input.enums import FormInputType, HumanInputFormStatus -from sqlalchemy import Engine, delete, select -from sqlalchemy.orm import Session, sessionmaker - -from extensions.ext_storage import storage from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.human_input import ( @@ -220,7 +220,6 @@ class TestDeleteRunsWithRelated: created_by=test_scope.user_id, ) pause = WorkflowPause( - id=str(uuid4()), workflow_id=test_scope.workflow_id, workflow_run_id=workflow_run.id, state_object_key=f"workflow-state-{uuid4()}.json", @@ -280,7 +279,6 @@ class TestCountRunsWithRelated: created_by=test_scope.user_id, ) pause = WorkflowPause( - id=str(uuid4()), workflow_id=test_scope.workflow_id, workflow_run_id=workflow_run.id, state_object_key=f"workflow-state-{uuid4()}.json", @@ -544,7 +542,6 @@ class TestPrivateWorkflowPauseEntity: status=WorkflowExecutionStatus.RUNNING, ) pause = WorkflowPause( - id=str(uuid4()), workflow_id=test_scope.workflow_id, workflow_run_id=workflow_run.id, state_object_key=f"workflow-state-{uuid4()}.json", @@ -574,7 +571,6 @@ class TestPrivateWorkflowPauseEntity: ) state_key = f"workflow-state-{uuid4()}.json" pause = WorkflowPause( - id=str(uuid4()), workflow_id=test_scope.workflow_id, workflow_run_id=workflow_run.id, state_object_key=state_key, @@ -606,7 +602,6 @@ class TestPrivateWorkflowPauseEntity: ) state_key = f"workflow-state-{uuid4()}.json" pause = WorkflowPause( - id=str(uuid4()), workflow_id=test_scope.workflow_id, workflow_run_id=workflow_run.id, state_object_key=state_key, @@ -672,7 +667,6 @@ class TestBuildHumanInputRequiredReason: status=WorkflowExecutionStatus.RUNNING, ) pause = WorkflowPause( - id=str(uuid4()), workflow_id=test_scope.workflow_id, workflow_run_id=workflow_run.id, state_object_key=f"workflow-state-{uuid4()}.json", diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py index 7f44eb6ca3..aaf9a85d60 100644 --- a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py @@ -12,11 +12,11 @@ from decimal import Decimal from uuid import uuid4 import pytest -from graphon.nodes.human_input.entities import FormDefinition, UserAction -from graphon.nodes.human_input.enums import HumanInputFormStatus from sqlalchemy import Engine, delete, select from sqlalchemy.orm import Session, sessionmaker +from graphon.nodes.human_input.entities import FormDefinition, UserAction +from graphon.nodes.human_input.enums import HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.enums import ConversationFromSource, InvokeFrom diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py new file mode 100644 index 0000000000..fa78f1c28b --- /dev/null +++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py @@ -0,0 +1,395 @@ +"""Testcontainers integration tests for SQLAlchemyWorkflowNodeExecutionRepository.""" + +from __future__ import annotations + +import json +from datetime import datetime +from decimal import Decimal +from uuid import uuid4 + +from sqlalchemy import Engine +from sqlalchemy.orm import Session, sessionmaker + +from core.repositories import SQLAlchemyWorkflowNodeExecutionRepository +from core.repositories.factory import OrderConfig +from graphon.entities import WorkflowNodeExecution +from graphon.enums import ( + BuiltinNodeTypes, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) +from graphon.model_runtime.utils.encoders import jsonable_encoder +from models.account import Account, Tenant +from models.enums import CreatorUserRole +from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom + + +def _create_account_with_tenant(session: Session) -> Account: + tenant = Tenant(name="Test Workspace") + session.add(tenant) + session.flush() + + account = Account(name="test", email=f"test-{uuid4()}@example.com") + session.add(account) + session.flush() + + account._current_tenant = tenant + return account + + +def _make_repo(session: Session, account: Account, app_id: str) -> SQLAlchemyWorkflowNodeExecutionRepository: + engine = session.get_bind() + assert isinstance(engine, Engine) + return SQLAlchemyWorkflowNodeExecutionRepository( + session_factory=sessionmaker(bind=engine, expire_on_commit=False), + user=account, + app_id=app_id, + triggered_from=WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN, + ) + + +def _create_node_execution_model( + session: Session, + *, + tenant_id: str, + app_id: str, + workflow_id: str, + workflow_run_id: str, + index: int = 1, + status: WorkflowNodeExecutionStatus = WorkflowNodeExecutionStatus.RUNNING, +) -> WorkflowNodeExecutionModel: + model = WorkflowNodeExecutionModel( + id=str(uuid4()), + tenant_id=tenant_id, + app_id=app_id, + workflow_id=workflow_id, + triggered_from=WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN, + workflow_run_id=workflow_run_id, + index=index, + predecessor_node_id=None, + node_execution_id=str(uuid4()), + node_id=f"node-{index}", + node_type=BuiltinNodeTypes.START, + title=f"Test Node {index}", + inputs='{"input_key": "input_value"}', + process_data='{"process_key": "process_value"}', + outputs='{"output_key": "output_value"}', + status=status, + error=None, + elapsed_time=1.5, + execution_metadata="{}", + created_at=datetime.now(), + created_by_role=CreatorUserRole.ACCOUNT, + created_by=str(uuid4()), + finished_at=None, + ) + session.add(model) + session.flush() + return model + + +class TestSave: + def test_save_new_record(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + repo = _make_repo(db_session_with_containers, account, app_id) + + execution = WorkflowNodeExecution( + id=str(uuid4()), + workflow_id=str(uuid4()), + node_execution_id=str(uuid4()), + workflow_execution_id=str(uuid4()), + index=1, + predecessor_node_id=None, + node_id="node-1", + node_type=BuiltinNodeTypes.START, + title="Test Node", + inputs={"input_key": "input_value"}, + process_data={"process_key": "process_value"}, + outputs={"result": "success"}, + status=WorkflowNodeExecutionStatus.RUNNING, + error=None, + elapsed_time=1.5, + metadata={WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS: 100}, + created_at=datetime.now(), + finished_at=None, + ) + + repo.save(execution) + + engine = db_session_with_containers.get_bind() + assert isinstance(engine, Engine) + with sessionmaker(bind=engine, expire_on_commit=False)() as verify_session: + saved = verify_session.get(WorkflowNodeExecutionModel, execution.id) + assert saved is not None + assert saved.tenant_id == account.current_tenant_id + assert saved.app_id == app_id + assert saved.node_id == "node-1" + assert saved.status == WorkflowNodeExecutionStatus.RUNNING + + def test_save_updates_existing_record(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + repo = _make_repo(db_session_with_containers, account, str(uuid4())) + + execution = WorkflowNodeExecution( + id=str(uuid4()), + workflow_id=str(uuid4()), + node_execution_id=str(uuid4()), + workflow_execution_id=str(uuid4()), + index=1, + predecessor_node_id=None, + node_id="node-1", + node_type=BuiltinNodeTypes.START, + title="Test Node", + inputs=None, + process_data=None, + outputs=None, + status=WorkflowNodeExecutionStatus.RUNNING, + error=None, + elapsed_time=0.0, + metadata=None, + created_at=datetime.now(), + finished_at=None, + ) + + repo.save(execution) + + execution.status = WorkflowNodeExecutionStatus.SUCCEEDED + execution.elapsed_time = 2.5 + repo.save(execution) + + engine = db_session_with_containers.get_bind() + assert isinstance(engine, Engine) + with sessionmaker(bind=engine, expire_on_commit=False)() as verify_session: + saved = verify_session.get(WorkflowNodeExecutionModel, execution.id) + assert saved is not None + assert saved.status == WorkflowNodeExecutionStatus.SUCCEEDED + assert saved.elapsed_time == 2.5 + + +class TestGetByWorkflowExecution: + def test_returns_executions_ordered(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + tenant_id = account.current_tenant_id + app_id = str(uuid4()) + workflow_id = str(uuid4()) + workflow_run_id = str(uuid4()) + repo = _make_repo(db_session_with_containers, account, app_id) + + _create_node_execution_model( + db_session_with_containers, + tenant_id=tenant_id, + app_id=app_id, + workflow_id=workflow_id, + workflow_run_id=workflow_run_id, + index=1, + status=WorkflowNodeExecutionStatus.SUCCEEDED, + ) + _create_node_execution_model( + db_session_with_containers, + tenant_id=tenant_id, + app_id=app_id, + workflow_id=workflow_id, + workflow_run_id=workflow_run_id, + index=2, + status=WorkflowNodeExecutionStatus.SUCCEEDED, + ) + db_session_with_containers.commit() + + order_config = OrderConfig(order_by=["index"], order_direction="desc") + result = repo.get_by_workflow_execution( + workflow_execution_id=workflow_run_id, + order_config=order_config, + ) + + assert len(result) == 2 + assert result[0].index == 2 + assert result[1].index == 1 + assert all(isinstance(r, WorkflowNodeExecution) for r in result) + + def test_excludes_paused_executions(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + tenant_id = account.current_tenant_id + app_id = str(uuid4()) + workflow_id = str(uuid4()) + workflow_run_id = str(uuid4()) + repo = _make_repo(db_session_with_containers, account, app_id) + + _create_node_execution_model( + db_session_with_containers, + tenant_id=tenant_id, + app_id=app_id, + workflow_id=workflow_id, + workflow_run_id=workflow_run_id, + index=1, + status=WorkflowNodeExecutionStatus.RUNNING, + ) + _create_node_execution_model( + db_session_with_containers, + tenant_id=tenant_id, + app_id=app_id, + workflow_id=workflow_id, + workflow_run_id=workflow_run_id, + index=2, + status=WorkflowNodeExecutionStatus.PAUSED, + ) + db_session_with_containers.commit() + + result = repo.get_by_workflow_execution(workflow_execution_id=workflow_run_id) + + assert len(result) == 1 + assert result[0].index == 1 + + +class TestToDbModel: + def test_converts_domain_to_db_model(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + repo = _make_repo(db_session_with_containers, account, app_id) + + domain_model = WorkflowNodeExecution( + id="test-id", + workflow_id="test-workflow-id", + node_execution_id="test-node-execution-id", + workflow_execution_id="test-workflow-run-id", + index=1, + predecessor_node_id="test-predecessor-id", + node_id="test-node-id", + node_type=BuiltinNodeTypes.START, + title="Test Node", + inputs={"input_key": "input_value"}, + process_data={"process_key": "process_value"}, + outputs={"output_key": "output_value"}, + status=WorkflowNodeExecutionStatus.RUNNING, + error=None, + elapsed_time=1.5, + metadata={ + WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS: 100, + WorkflowNodeExecutionMetadataKey.TOTAL_PRICE: Decimal("0.0"), + }, + created_at=datetime.now(), + finished_at=None, + ) + + db_model = repo._to_db_model(domain_model) + + assert isinstance(db_model, WorkflowNodeExecutionModel) + assert db_model.id == domain_model.id + assert db_model.tenant_id == account.current_tenant_id + assert db_model.app_id == app_id + assert db_model.workflow_id == domain_model.workflow_id + assert db_model.triggered_from == WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN + assert db_model.workflow_run_id == domain_model.workflow_execution_id + assert db_model.index == domain_model.index + assert db_model.predecessor_node_id == domain_model.predecessor_node_id + assert db_model.node_execution_id == domain_model.node_execution_id + assert db_model.node_id == domain_model.node_id + assert db_model.node_type == domain_model.node_type + assert db_model.title == domain_model.title + assert db_model.inputs_dict == domain_model.inputs + assert db_model.process_data_dict == domain_model.process_data + assert db_model.outputs_dict == domain_model.outputs + assert db_model.execution_metadata_dict == jsonable_encoder(domain_model.metadata) + assert db_model.status == domain_model.status + assert db_model.error == domain_model.error + assert db_model.elapsed_time == domain_model.elapsed_time + assert db_model.created_at == domain_model.created_at + assert db_model.created_by_role == CreatorUserRole.ACCOUNT + assert db_model.created_by == account.id + assert db_model.finished_at == domain_model.finished_at + + +class TestToDomainModel: + def test_converts_db_to_domain_model(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + repo = _make_repo(db_session_with_containers, account, app_id) + + inputs_dict = {"input_key": "input_value"} + process_data_dict = {"process_key": "process_value"} + outputs_dict = {"output_key": "output_value"} + metadata_dict = {str(WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS): 100} + now = datetime.now() + + db_model = WorkflowNodeExecutionModel() + db_model.id = "test-id" + db_model.tenant_id = account.current_tenant_id + db_model.app_id = app_id + db_model.workflow_id = "test-workflow-id" + db_model.triggered_from = "workflow-run" + db_model.workflow_run_id = "test-workflow-run-id" + db_model.index = 1 + db_model.predecessor_node_id = "test-predecessor-id" + db_model.node_execution_id = "test-node-execution-id" + db_model.node_id = "test-node-id" + db_model.node_type = BuiltinNodeTypes.START + db_model.title = "Test Node" + db_model.inputs = json.dumps(inputs_dict) + db_model.process_data = json.dumps(process_data_dict) + db_model.outputs = json.dumps(outputs_dict) + db_model.status = WorkflowNodeExecutionStatus.RUNNING + db_model.error = None + db_model.elapsed_time = 1.5 + db_model.execution_metadata = json.dumps(metadata_dict) + db_model.created_at = now + db_model.created_by_role = "account" + db_model.created_by = account.id + db_model.finished_at = None + + domain_model = repo._to_domain_model(db_model) + + assert isinstance(domain_model, WorkflowNodeExecution) + assert domain_model.id == "test-id" + assert domain_model.workflow_id == "test-workflow-id" + assert domain_model.workflow_execution_id == "test-workflow-run-id" + assert domain_model.index == 1 + assert domain_model.predecessor_node_id == "test-predecessor-id" + assert domain_model.node_execution_id == "test-node-execution-id" + assert domain_model.node_id == "test-node-id" + assert domain_model.node_type == BuiltinNodeTypes.START + assert domain_model.title == "Test Node" + assert domain_model.inputs == inputs_dict + assert domain_model.process_data == process_data_dict + assert domain_model.outputs == outputs_dict + assert domain_model.status == WorkflowNodeExecutionStatus.RUNNING + assert domain_model.error is None + assert domain_model.elapsed_time == 1.5 + assert domain_model.metadata == {WorkflowNodeExecutionMetadataKey(k): v for k, v in metadata_dict.items()} + assert domain_model.created_at == now + assert domain_model.finished_at is None + + def test_domain_model_without_offload_data(self, db_session_with_containers: Session) -> None: + account = _create_account_with_tenant(db_session_with_containers) + repo = _make_repo(db_session_with_containers, account, str(uuid4())) + + process_data = {"normal": "data"} + db_model = WorkflowNodeExecutionModel() + db_model.id = str(uuid4()) + db_model.tenant_id = account.current_tenant_id + db_model.app_id = str(uuid4()) + db_model.workflow_id = str(uuid4()) + db_model.triggered_from = "workflow-run" + db_model.workflow_run_id = None + db_model.index = 1 + db_model.predecessor_node_id = None + db_model.node_execution_id = str(uuid4()) + db_model.node_id = "test-node-id" + db_model.node_type = "llm" + db_model.title = "Test Node" + db_model.inputs = None + db_model.process_data = json.dumps(process_data) + db_model.outputs = None + db_model.status = "succeeded" + db_model.error = None + db_model.elapsed_time = 1.5 + db_model.execution_metadata = "{}" + db_model.created_at = datetime.now() + db_model.created_by_role = "account" + db_model.created_by = account.id + db_model.finished_at = None + + domain_model = repo._to_domain_model(db_model) + + assert domain_model.process_data == process_data + assert domain_model.process_data_truncated is False + assert domain_model.get_truncated_process_data() is None diff --git a/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py b/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py index c5e9201ee3..d6f0657380 100644 --- a/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py @@ -7,12 +7,12 @@ from datetime import timedelta from uuid import uuid4 import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import Engine, delete from sqlalchemy import exc as sa_exc from sqlalchemy.orm import Session, sessionmaker +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowRun, WorkflowType diff --git a/api/tests/integration_tests/vdb/__mock/__init__.py b/api/tests/test_containers_integration_tests/services/rag_pipeline/__init__.py similarity index 100% rename from api/tests/integration_tests/vdb/__mock/__init__.py rename to api/tests/test_containers_integration_tests/services/rag_pipeline/__init__.py diff --git a/api/tests/test_containers_integration_tests/services/rag_pipeline/test_rag_pipeline_service_db.py b/api/tests/test_containers_integration_tests/services/rag_pipeline/test_rag_pipeline_service_db.py new file mode 100644 index 0000000000..8fc1809a46 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/rag_pipeline/test_rag_pipeline_service_db.py @@ -0,0 +1,255 @@ +""" +Integration tests for RagPipelineService methods that interact with the database. + +Migrated from unit_tests/services/rag_pipeline/test_rag_pipeline_service.py, replacing +db.session.scalar/commit/delete mocker patches with real PostgreSQL operations. + +Covers: +- get_pipeline: Dataset and Pipeline lookups +- update_customized_pipeline_template: find + unique-name check + commit +- delete_customized_pipeline_template: find + delete + commit +""" + +from collections.abc import Generator +from types import SimpleNamespace +from unittest.mock import patch +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import Session, sessionmaker + +from models.dataset import Dataset, Pipeline, PipelineCustomizedTemplate +from models.enums import DataSourceType +from services.entities.knowledge_entities.rag_pipeline_entities import IconInfo, PipelineTemplateInfoEntity +from services.rag_pipeline.rag_pipeline import RagPipelineService + + +class TestRagPipelineServiceGetPipeline: + """Integration tests for RagPipelineService.get_pipeline.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + yield + db_session_with_containers.rollback() + + def _make_service(self, flask_app_with_containers) -> RagPipelineService: + with ( + patch( + "services.rag_pipeline.rag_pipeline.DifyAPIRepositoryFactory.create_api_workflow_node_execution_repository", + return_value=None, + ), + patch( + "services.rag_pipeline.rag_pipeline.DifyAPIRepositoryFactory.create_api_workflow_run_repository", + return_value=None, + ), + ): + session_factory = sessionmaker(bind=flask_app_with_containers.extensions["sqlalchemy"].engine) + return RagPipelineService(session_maker=session_factory) + + def _create_pipeline(self, db_session: Session, tenant_id: str, created_by: str) -> Pipeline: + pipeline = Pipeline( + tenant_id=tenant_id, + name=f"Pipeline {uuid4()}", + description="", + created_by=created_by, + ) + db_session.add(pipeline) + db_session.flush() + return pipeline + + def _create_dataset( + self, db_session: Session, tenant_id: str, created_by: str, pipeline_id: str | None = None + ) -> Dataset: + dataset = Dataset( + tenant_id=tenant_id, + name=f"Dataset {uuid4()}", + data_source_type=DataSourceType.UPLOAD_FILE, + created_by=created_by, + pipeline_id=pipeline_id, + ) + db_session.add(dataset) + db_session.flush() + return dataset + + def test_get_pipeline_raises_when_dataset_not_found( + self, db_session_with_containers: Session, flask_app_with_containers + ) -> None: + """get_pipeline raises ValueError when dataset does not exist.""" + service = self._make_service(flask_app_with_containers) + + with pytest.raises(ValueError, match="Dataset not found"): + service.get_pipeline(tenant_id=str(uuid4()), dataset_id=str(uuid4())) + + def test_get_pipeline_raises_when_pipeline_not_found( + self, db_session_with_containers: Session, flask_app_with_containers + ) -> None: + """get_pipeline raises ValueError when dataset exists but has no linked pipeline.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + dataset = self._create_dataset(db_session_with_containers, tenant_id, created_by, pipeline_id=None) + db_session_with_containers.flush() + + service = self._make_service(flask_app_with_containers) + + with pytest.raises(ValueError, match="(Dataset not found|Pipeline not found)"): + service.get_pipeline(tenant_id=tenant_id, dataset_id=dataset.id) + + def test_get_pipeline_returns_pipeline_when_found( + self, db_session_with_containers: Session, flask_app_with_containers + ) -> None: + """get_pipeline returns the Pipeline when both Dataset and Pipeline exist.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + + pipeline = self._create_pipeline(db_session_with_containers, tenant_id, created_by) + dataset = self._create_dataset(db_session_with_containers, tenant_id, created_by, pipeline_id=pipeline.id) + db_session_with_containers.flush() + + service = self._make_service(flask_app_with_containers) + + result = service.get_pipeline(tenant_id=tenant_id, dataset_id=dataset.id) + + assert result.id == pipeline.id + + +class TestUpdateCustomizedPipelineTemplate: + """Integration tests for RagPipelineService.update_customized_pipeline_template.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + yield + db_session_with_containers.rollback() + + def _create_template( + self, db_session: Session, tenant_id: str, created_by: str, name: str = "Template" + ) -> PipelineCustomizedTemplate: + template = PipelineCustomizedTemplate( + tenant_id=tenant_id, + name=name, + description="Original description", + chunk_structure="fixed_size", + icon={"type": "emoji", "value": "📄"}, + position=1, + yaml_content="{}", + install_count=0, + language="en-US", + created_by=created_by, + ) + db_session.add(template) + db_session.flush() + return template + + def test_update_template_succeeds(self, db_session_with_containers: Session, flask_app_with_containers) -> None: + """update_customized_pipeline_template updates name and description.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + template = self._create_template(db_session_with_containers, tenant_id, created_by) + db_session_with_containers.flush() + + fake_user = SimpleNamespace(id=created_by, current_tenant_id=tenant_id) + + with patch("services.rag_pipeline.rag_pipeline.current_user", fake_user): + info = PipelineTemplateInfoEntity( + name="Updated Name", + description="Updated description", + icon_info=IconInfo(icon="🔥"), + ) + result = RagPipelineService.update_customized_pipeline_template(template.id, info) + + assert result.name == "Updated Name" + assert result.description == "Updated description" + + def test_update_template_raises_when_not_found( + self, db_session_with_containers: Session, flask_app_with_containers + ) -> None: + """update_customized_pipeline_template raises ValueError when template doesn't exist.""" + fake_user = SimpleNamespace(id=str(uuid4()), current_tenant_id=str(uuid4())) + + with patch("services.rag_pipeline.rag_pipeline.current_user", fake_user): + info = PipelineTemplateInfoEntity( + name="New Name", + description="desc", + icon_info=IconInfo(icon="📄"), + ) + with pytest.raises(ValueError, match="Customized pipeline template not found"): + RagPipelineService.update_customized_pipeline_template(str(uuid4()), info) + + def test_update_template_raises_on_duplicate_name( + self, db_session_with_containers: Session, flask_app_with_containers + ) -> None: + """update_customized_pipeline_template raises ValueError when new name already exists.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + template1 = self._create_template(db_session_with_containers, tenant_id, created_by, name="Original") + self._create_template(db_session_with_containers, tenant_id, created_by, name="Duplicate") + db_session_with_containers.flush() + + fake_user = SimpleNamespace(id=created_by, current_tenant_id=tenant_id) + + with patch("services.rag_pipeline.rag_pipeline.current_user", fake_user): + info = PipelineTemplateInfoEntity( + name="Duplicate", + description="desc", + icon_info=IconInfo(icon="📄"), + ) + with pytest.raises(ValueError, match="Template name is already exists"): + RagPipelineService.update_customized_pipeline_template(template1.id, info) + + +class TestDeleteCustomizedPipelineTemplate: + """Integration tests for RagPipelineService.delete_customized_pipeline_template.""" + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + yield + db_session_with_containers.rollback() + + def _create_template(self, db_session: Session, tenant_id: str, created_by: str) -> PipelineCustomizedTemplate: + template = PipelineCustomizedTemplate( + tenant_id=tenant_id, + name=f"Template {uuid4()}", + description="Description", + chunk_structure="fixed_size", + icon={"type": "emoji", "value": "📄"}, + position=1, + yaml_content="{}", + install_count=0, + language="en-US", + created_by=created_by, + ) + db_session.add(template) + db_session.flush() + return template + + def test_delete_template_succeeds(self, db_session_with_containers: Session, flask_app_with_containers) -> None: + """delete_customized_pipeline_template removes the template from the DB.""" + tenant_id = str(uuid4()) + created_by = str(uuid4()) + template = self._create_template(db_session_with_containers, tenant_id, created_by) + template_id = template.id + db_session_with_containers.flush() + + fake_user = SimpleNamespace(id=created_by, current_tenant_id=tenant_id) + + with patch("services.rag_pipeline.rag_pipeline.current_user", fake_user): + RagPipelineService.delete_customized_pipeline_template(template_id) + + # Verify the record is deleted within the same context + from sqlalchemy import select + + from extensions.ext_database import db as ext_db + + remaining = ext_db.session.scalar( + select(PipelineCustomizedTemplate).where(PipelineCustomizedTemplate.id == template_id) + ) + assert remaining is None + + def test_delete_template_raises_when_not_found( + self, db_session_with_containers: Session, flask_app_with_containers + ) -> None: + """delete_customized_pipeline_template raises ValueError when template doesn't exist.""" + fake_user = SimpleNamespace(id=str(uuid4()), current_tenant_id=str(uuid4())) + + with patch("services.rag_pipeline.rag_pipeline.current_user", fake_user): + with pytest.raises(ValueError, match="Customized pipeline template not found"): + RagPipelineService.delete_customized_pipeline_template(str(uuid4())) diff --git a/api/tests/test_containers_integration_tests/services/test_account_service.py b/api/tests/test_containers_integration_tests/services/test_account_service.py index cc9596d15f..9a53ff087c 100644 --- a/api/tests/test_containers_integration_tests/services/test_account_service.py +++ b/api/tests/test_containers_integration_tests/services/test_account_service.py @@ -9,7 +9,7 @@ from werkzeug.exceptions import Unauthorized from configs import dify_config from controllers.console.error import AccountNotFound, NotAllowedCreateWorkspace -from models import AccountStatus, TenantAccountJoin +from models import AccountStatus, TenantAccountJoin, TenantStatus from services.account_service import AccountService, RegisterService, TenantService, TokenPair from services.errors.account import ( AccountAlreadyInTenantError, @@ -2851,7 +2851,7 @@ class TestRegisterService: interface_language="en-US", password=existing_pending_member_password, ) - existing_account.status = "pending" + existing_account.status = AccountStatus.PENDING db_session_with_containers.commit() @@ -2941,7 +2941,7 @@ class TestRegisterService: interface_language="en-US", password=already_in_tenant_password, ) - existing_account.status = "active" + existing_account.status = AccountStatus.ACTIVE db_session_with_containers.commit() @@ -3331,7 +3331,7 @@ class TestRegisterService: TenantService.create_tenant_member(tenant, account, role="normal") # Change tenant status to non-normal - tenant.status = "archive" + tenant.status = TenantStatus.ARCHIVE db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/services/test_agent_service.py b/api/tests/test_containers_integration_tests/services/test_agent_service.py index 4f3c0e4200..00a2f9a59f 100644 --- a/api/tests/test_containers_integration_tests/services/test_agent_service.py +++ b/api/tests/test_containers_integration_tests/services/test_agent_service.py @@ -842,7 +842,6 @@ class TestAgentService: conversation, message = self._create_test_conversation_and_message(db_session_with_containers, app, account) from graphon.file import FileTransferMethod, FileType - from models.enums import CreatorUserRole # Add files to message diff --git a/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py b/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py index 33955d5d84..77ce28b999 100644 --- a/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py +++ b/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py @@ -1,56 +1,104 @@ +from __future__ import annotations + +import base64 import json +from types import SimpleNamespace from unittest.mock import MagicMock, patch +from uuid import uuid4 import pytest import yaml from faker import Faker -from models.model import App, AppModelConfig +from core.trigger.constants import ( + TRIGGER_PLUGIN_NODE_TYPE, + TRIGGER_SCHEDULE_NODE_TYPE, + TRIGGER_WEBHOOK_NODE_TYPE, +) +from extensions.ext_redis import redis_client +from graphon.enums import BuiltinNodeTypes +from models import Account, AppMode +from models.model import AppModelConfig, IconType +from services import app_dsl_service from services.account_service import AccountService, TenantService -from services.app_dsl_service import AppDslService, ImportMode, ImportStatus +from services.app_dsl_service import ( + CHECK_DEPENDENCIES_REDIS_KEY_PREFIX, + CURRENT_DSL_VERSION, + DSL_MAX_SIZE, + IMPORT_INFO_REDIS_EXPIRY, + IMPORT_INFO_REDIS_KEY_PREFIX, + AppDslService, + CheckDependenciesPendingData, + ImportMode, + ImportStatus, + PendingData, + _check_version_compatibility, +) from services.app_service import AppService from tests.test_containers_integration_tests.helpers import generate_valid_password +_DEFAULT_TENANT_ID = "00000000-0000-0000-0000-000000000001" +_DEFAULT_ACCOUNT_ID = "00000000-0000-0000-0000-000000000002" + + +def _account_mock(*, tenant_id: str = _DEFAULT_TENANT_ID, account_id: str = _DEFAULT_ACCOUNT_ID) -> MagicMock: + account = MagicMock(spec=Account) + account.current_tenant_id = tenant_id + account.id = account_id + return account + + +def _yaml_dump(data: dict) -> str: + return yaml.safe_dump(data, allow_unicode=True) + + +def _workflow_yaml(*, version: str = CURRENT_DSL_VERSION) -> str: + return _yaml_dump( + { + "version": version, + "kind": "app", + "app": {"name": "My App", "mode": AppMode.WORKFLOW.value}, + "workflow": {"graph": {"nodes": []}, "features": {}}, + } + ) + + +def _pending_yaml_content(version: str = "99.0.0") -> bytes: + return (f'version: "{version}"\nkind: app\napp:\n name: Loop Test\n mode: workflow\n').encode() + class TestAppDslService: """Integration tests for AppDslService using testcontainers.""" + @pytest.fixture + def app(self, flask_app_with_containers): + return flask_app_with_containers + @pytest.fixture def mock_external_service_dependencies(self): """Mock setup for external service dependencies.""" with ( patch("services.app_dsl_service.WorkflowService") as mock_workflow_service, patch("services.app_dsl_service.DependenciesAnalysisService") as mock_dependencies_service, - patch("services.app_dsl_service.WorkflowDraftVariableService") as mock_draft_variable_service, - patch("services.app_dsl_service.ssrf_proxy") as mock_ssrf_proxy, - patch("services.app_dsl_service.redis_client") as mock_redis_client, patch("services.app_dsl_service.app_was_created") as mock_app_was_created, - patch("services.app_dsl_service.app_model_config_was_updated") as mock_app_model_config_was_updated, patch("services.app_service.ModelManager.for_tenant") as mock_model_manager, patch("services.app_service.FeatureService") as mock_feature_service, patch("services.app_service.EnterpriseService") as mock_enterprise_service, ): - # Setup default mock returns mock_workflow_service.return_value.get_draft_workflow.return_value = None mock_workflow_service.return_value.sync_draft_workflow.return_value = MagicMock() mock_dependencies_service.generate_latest_dependencies.return_value = [] mock_dependencies_service.get_leaked_dependencies.return_value = [] mock_dependencies_service.generate_dependencies.return_value = [] - mock_draft_variable_service.return_value.delete_workflow_variables.return_value = None - mock_ssrf_proxy.get.return_value.content = b"test content" - mock_ssrf_proxy.get.return_value.raise_for_status.return_value = None - mock_redis_client.setex.return_value = None - mock_redis_client.get.return_value = None - mock_redis_client.delete.return_value = None mock_app_was_created.send.return_value = None - mock_app_model_config_was_updated.send.return_value = None - # Mock ModelManager for app service mock_model_instance = mock_model_manager.return_value mock_model_instance.get_default_model_instance.return_value = None - mock_model_instance.get_default_provider_model_name.return_value = ("openai", "gpt-3.5-turbo") + mock_model_instance.get_default_provider_model_name.return_value = ( + "openai", + "gpt-3.5-turbo", + ) - # Mock FeatureService and EnterpriseService mock_feature_service.get_system_features.return_value.webapp_auth.enabled = False mock_enterprise_service.WebAppAuth.update_app_access_mode.return_value = None mock_enterprise_service.WebAppAuth.cleanup_webapp.return_value = None @@ -58,34 +106,16 @@ class TestAppDslService: yield { "workflow_service": mock_workflow_service, "dependencies_service": mock_dependencies_service, - "draft_variable_service": mock_draft_variable_service, - "ssrf_proxy": mock_ssrf_proxy, - "redis_client": mock_redis_client, "app_was_created": mock_app_was_created, - "app_model_config_was_updated": mock_app_model_config_was_updated, "model_manager": mock_model_manager, "feature_service": mock_feature_service, "enterprise_service": mock_enterprise_service, } def _create_test_app_and_account(self, db_session_with_containers, mock_external_service_dependencies): - """ - Helper method to create a test app and account for testing. - - Args: - db_session_with_containers: Database session from testcontainers infrastructure - mock_external_service_dependencies: Mock dependencies - - Returns: - tuple: (app, account) - Created app and account instances - """ fake = Faker() - - # Setup mocks for account creation with patch("services.account_service.FeatureService") as mock_account_feature_service: mock_account_feature_service.get_system_features.return_value.is_allow_register = True - - # Create account and tenant first account = AccountService.create_account( email=fake.email(), name=fake.name(), @@ -94,8 +124,6 @@ class TestAppDslService: ) TenantService.create_owner_tenant_if_not_exist(account, name=fake.company()) tenant = account.current_tenant - - # Setup app creation arguments app_args = { "name": fake.company(), "description": fake.text(max_nb_chars=100), @@ -106,17 +134,11 @@ class TestAppDslService: "api_rph": 100, "api_rpm": 10, } - - # Create app app_service = AppService() app = app_service.create_app(tenant.id, app_args, account) - return app, account - def _create_simple_yaml_content(self, app_name="Test App", app_mode="chat"): - """ - Helper method to create simple YAML content for testing. - """ + def _create_simple_yaml_content(self, app_name: str = "Test App", app_mode: str = "chat") -> str: yaml_data = { "version": "0.3.0", "kind": "app", @@ -145,88 +167,739 @@ class TestAppDslService: } return yaml.dump(yaml_data, allow_unicode=True) - def test_import_app_missing_yaml_content(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test app import with missing YAML content. - """ - fake = Faker() - app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + # ── Version Compatibility ───────────────────────────────────────── - # Import app without YAML content - dsl_service = AppDslService(db_session_with_containers) - result = dsl_service.import_app( - account=account, - import_mode=ImportMode.YAML_CONTENT, - name="Missing Content App", - ) + def test_check_version_compatibility_invalid_version_returns_failed(self): + assert _check_version_compatibility("not-a-version") == ImportStatus.FAILED - # Verify import failed - assert result.status == ImportStatus.FAILED - assert result.app_id is None - assert "yaml_content is required" in result.error - assert result.imported_dsl_version == "" + def test_check_version_compatibility_newer_version_returns_pending(self): + assert _check_version_compatibility("99.0.0") == ImportStatus.PENDING - # Verify no app was created in database - apps_count = db_session_with_containers.query(App).where(App.tenant_id == account.current_tenant_id).count() - assert apps_count == 1 # Only the original test app + def test_check_version_compatibility_major_older_returns_pending(self, monkeypatch): + monkeypatch.setattr(app_dsl_service, "CURRENT_DSL_VERSION", "1.0.0") + assert _check_version_compatibility("0.9.9") == ImportStatus.PENDING - def test_import_app_missing_yaml_url(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test app import with missing YAML URL. - """ - fake = Faker() - app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + def test_check_version_compatibility_minor_older_returns_completed_with_warnings( + self, + ): + assert _check_version_compatibility("0.5.0") == ImportStatus.COMPLETED_WITH_WARNINGS - # Import app without YAML URL - dsl_service = AppDslService(db_session_with_containers) - result = dsl_service.import_app( - account=account, - import_mode=ImportMode.YAML_URL, - name="Missing URL App", - ) + def test_check_version_compatibility_equal_returns_completed(self): + assert _check_version_compatibility(CURRENT_DSL_VERSION) == ImportStatus.COMPLETED - # Verify import failed - assert result.status == ImportStatus.FAILED - assert result.app_id is None - assert "yaml_url is required" in result.error - assert result.imported_dsl_version == "" + # ── Import: Validation ──────────────────────────────────────────── - # Verify no app was created in database - apps_count = db_session_with_containers.query(App).where(App.tenant_id == account.current_tenant_id).count() - assert apps_count == 1 # Only the original test app - - def test_import_app_invalid_import_mode(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test app import with invalid import mode. - """ - fake = Faker() - app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) - - # Create YAML content - yaml_content = self._create_simple_yaml_content(fake.company(), "chat") - - # Import app with invalid mode should raise ValueError - dsl_service = AppDslService(db_session_with_containers) - with pytest.raises(ValueError, match="Invalid import_mode: invalid-mode"): - dsl_service.import_app( - account=account, + def test_import_app_invalid_import_mode_raises_value_error(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + with pytest.raises(ValueError, match="Invalid import_mode"): + service.import_app( + account=_account_mock(), import_mode="invalid-mode", - yaml_content=yaml_content, - name="Invalid Mode App", + yaml_content="version: '0.1.0'", ) - # Verify no app was created in database - apps_count = db_session_with_containers.query(App).where(App.tenant_id == account.current_tenant_id).count() - assert apps_count == 1 # Only the original test app + def test_import_app_missing_yaml_content(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=None, + ) + assert result.status == ImportStatus.FAILED + assert "yaml_content is required" in result.error - def test_export_dsl_chat_app_success(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test successful DSL export for chat app. - """ - fake = Faker() + def test_import_app_missing_yaml_url(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_URL, + yaml_url=None, + ) + assert result.status == ImportStatus.FAILED + assert "yaml_url is required" in result.error + + def test_import_app_yaml_not_mapping_returns_failed(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content="[]", + ) + assert result.status == ImportStatus.FAILED + assert "content must be a mapping" in result.error + + def test_import_app_version_not_str_returns_failed(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + yaml_content = _yaml_dump({"version": 1, "kind": "app", "app": {"name": "x", "mode": "workflow"}}) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=yaml_content, + ) + assert result.status == ImportStatus.FAILED + assert "Invalid version type" in result.error + + def test_import_app_missing_app_data_returns_failed(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_yaml_dump({"version": "0.6.0", "kind": "app"}), + ) + assert result.status == ImportStatus.FAILED + assert "Missing app data" in result.error + + def test_import_app_yaml_error_returns_failed(self, db_session_with_containers, monkeypatch): + def bad_safe_load(_content: str): + raise yaml.YAMLError("bad") + + monkeypatch.setattr(app_dsl_service.yaml, "safe_load", bad_safe_load) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content="x: y", + ) + assert result.status == ImportStatus.FAILED + assert result.error.startswith("Invalid YAML format:") + + def test_import_app_unexpected_error_returns_failed(self, db_session_with_containers, monkeypatch): + monkeypatch.setattr( + AppDslService, + "_create_or_update_app", + lambda *_args, **_kwargs: (_ for _ in ()).throw(ValueError("oops")), + ) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_workflow_yaml(), + ) + assert result.status == ImportStatus.FAILED + assert result.error == "oops" + + # ── Import: YAML URL ────────────────────────────────────────────── + + def test_import_app_yaml_url_fetch_error_returns_failed(self, db_session_with_containers, monkeypatch): + monkeypatch.setattr( + app_dsl_service.ssrf_proxy, + "get", + lambda _url, **_kw: (_ for _ in ()).throw(RuntimeError("boom")), + ) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_URL, + yaml_url="https://example.com/a.yml", + ) + assert result.status == ImportStatus.FAILED + assert "Error fetching YAML from URL: boom" in result.error + + def test_import_app_yaml_url_empty_content_returns_failed(self, db_session_with_containers, monkeypatch): + response = MagicMock() + response.content = b"" + response.raise_for_status.return_value = None + monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", lambda _url, **_kw: response) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_URL, + yaml_url="https://example.com/a.yml", + ) + assert result.status == ImportStatus.FAILED + assert "Empty content" in result.error + + def test_import_app_yaml_url_file_too_large_returns_failed(self, db_session_with_containers, monkeypatch): + response = MagicMock() + response.content = b"x" * (DSL_MAX_SIZE + 1) + response.raise_for_status.return_value = None + monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", lambda _url, **_kw: response) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_URL, + yaml_url="https://example.com/a.yml", + ) + assert result.status == ImportStatus.FAILED + assert "File size exceeds" in result.error + + def test_import_app_yaml_url_user_attachments_keeps_original_url(self, db_session_with_containers, monkeypatch): + yaml_url = "https://github.com/user-attachments/files/24290802/loop-test.yml" + yaml_bytes = _pending_yaml_content() + + requested_urls: list[str] = [] + + def fake_get(url: str, **kwargs): + requested_urls.append(url) + response = MagicMock() + response.content = yaml_bytes + response.raise_for_status.return_value = None + return response + + monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_URL, + yaml_url=yaml_url, + ) + + assert result.status == ImportStatus.PENDING + assert result.imported_dsl_version == "99.0.0" + assert requested_urls == [yaml_url] + + def test_import_app_yaml_url_github_blob_rewrites_to_raw(self, db_session_with_containers, monkeypatch): + yaml_url = "https://github.com/acme/repo/blob/main/app.yml" + raw_url = "https://raw.githubusercontent.com/acme/repo/main/app.yml" + yaml_bytes = _pending_yaml_content() + + requested_urls: list[str] = [] + + def fake_get(url: str, **kwargs): + requested_urls.append(url) + assert url == raw_url + response = MagicMock() + response.content = yaml_bytes + response.raise_for_status.return_value = None + return response + + monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_URL, + yaml_url=yaml_url, + ) + + assert result.status == ImportStatus.PENDING + assert requested_urls == [raw_url] + + # ── Import: App ID checks ──────────────────────────────────────── + + def test_import_app_app_id_not_found_returns_failed(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_workflow_yaml(), + app_id=str(uuid4()), + ) + assert result.status == ImportStatus.FAILED + assert result.error == "App not found" + + def test_import_app_overwrite_only_allows_workflow_and_advanced_chat( + self, db_session_with_containers, mock_external_service_dependencies + ): + app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + assert app.mode == "chat" + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=account, + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_workflow_yaml(), + app_id=app.id, + ) + assert result.status == ImportStatus.FAILED + assert "Only workflow or advanced chat apps" in result.error + + # ── Import: Flow ────────────────────────────────────────────────── + + def test_import_app_pending_stores_import_info_in_redis(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_workflow_yaml(version="99.0.0"), + name="n", + description="d", + icon_type="emoji", + icon="i", + icon_background="#000000", + ) + assert result.status == ImportStatus.PENDING + assert result.imported_dsl_version == "99.0.0" + + redis_key = f"{IMPORT_INFO_REDIS_KEY_PREFIX}{result.id}" + stored = redis_client.get(redis_key) + assert stored is not None + + def test_import_app_completed_uses_declared_dependencies( + self, db_session_with_containers, mock_external_service_dependencies + ): + _, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + + dependencies_payload = [ + { + "type": "package", + "value": { + "plugin_unique_identifier": "langgenius/google", + "version": "1.0.0", + }, + } + ] + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=account, + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_yaml_dump( + { + "version": CURRENT_DSL_VERSION, + "kind": "app", + "app": {"name": "My App", "mode": AppMode.WORKFLOW.value}, + "workflow": {"graph": {"nodes": []}, "features": {}}, + "dependencies": dependencies_payload, + } + ), + ) + + assert result.status == ImportStatus.COMPLETED + assert result.app_id is not None + + @pytest.mark.parametrize("has_workflow", [True, False]) + def test_import_app_legacy_versions_extract_dependencies( + self, db_session_with_containers, monkeypatch, has_workflow: bool + ): + monkeypatch.setattr( + AppDslService, + "_extract_dependencies_from_workflow_graph", + lambda *_args, **_kwargs: ["from-workflow"], + ) + monkeypatch.setattr( + AppDslService, + "_extract_dependencies_from_model_config", + lambda *_args, **_kwargs: ["from-model-config"], + ) + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "generate_latest_dependencies", + lambda deps: [SimpleNamespace(model_dump=lambda: {"dep": deps[0]})], + ) + + created_app = SimpleNamespace( + id=str(uuid4()), + mode=AppMode.WORKFLOW.value, + tenant_id=_DEFAULT_TENANT_ID, + ) + monkeypatch.setattr( + AppDslService, + "_create_or_update_app", + lambda *_args, **_kwargs: created_app, + ) + + draft_var_service = MagicMock() + monkeypatch.setattr( + app_dsl_service, + "WorkflowDraftVariableService", + lambda *args, **kwargs: draft_var_service, + ) + + data: dict = { + "version": "0.1.5", + "kind": "app", + "app": {"name": "Legacy", "mode": AppMode.WORKFLOW.value}, + } + if has_workflow: + data["workflow"] = {"graph": {"nodes": []}, "features": {}} + else: + data["model_config"] = {"model": {"provider": "openai"}} + + service = AppDslService(db_session_with_containers) + result = service.import_app( + account=_account_mock(), + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_yaml_dump(data), + ) + assert result.status == ImportStatus.COMPLETED_WITH_WARNINGS + draft_var_service.delete_app_workflow_variables.assert_called_once_with(app_id=created_app.id) + + # ── Confirm Import ──────────────────────────────────────────────── + + def test_confirm_import_expired_returns_failed(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + result = service.confirm_import(import_id=str(uuid4()), account=_account_mock()) + assert result.status == ImportStatus.FAILED + assert "expired" in result.error + + def test_confirm_import_success_deletes_redis_key(self, db_session_with_containers, monkeypatch): + import_id = str(uuid4()) + redis_key = f"{IMPORT_INFO_REDIS_KEY_PREFIX}{import_id}" + + pending = PendingData( + import_mode=ImportMode.YAML_CONTENT, + yaml_content=_workflow_yaml(), + name="name", + description="desc", + icon_type="emoji", + icon="🤖", + icon_background="#fff", + app_id=None, + ) + redis_client.setex(redis_key, IMPORT_INFO_REDIS_EXPIRY, pending.model_dump_json()) + + created_app = SimpleNamespace( + id=str(uuid4()), + mode=AppMode.WORKFLOW.value, + tenant_id=_DEFAULT_TENANT_ID, + ) + monkeypatch.setattr( + AppDslService, + "_create_or_update_app", + lambda *_args, **_kwargs: created_app, + ) + + service = AppDslService(db_session_with_containers) + result = service.confirm_import(import_id=import_id, account=_account_mock()) + assert result.status == ImportStatus.COMPLETED + assert result.app_id == created_app.id + assert redis_client.get(redis_key) is None + + def test_confirm_import_invalid_pending_data_type_returns_failed(self, db_session_with_containers): + import_id = str(uuid4()) + redis_key = f"{IMPORT_INFO_REDIS_KEY_PREFIX}{import_id}" + redis_client.setex(redis_key, IMPORT_INFO_REDIS_EXPIRY, "123") + + service = AppDslService(db_session_with_containers) + result = service.confirm_import(import_id=import_id, account=_account_mock()) + assert result.status == ImportStatus.FAILED + assert "validation error" in result.error + + def test_confirm_import_exception_returns_failed(self, db_session_with_containers): + import_id = str(uuid4()) + redis_key = f"{IMPORT_INFO_REDIS_KEY_PREFIX}{import_id}" + redis_client.setex(redis_key, IMPORT_INFO_REDIS_EXPIRY, "not-valid-json") + + service = AppDslService(db_session_with_containers) + result = service.confirm_import(import_id=import_id, account=_account_mock()) + assert result.status == ImportStatus.FAILED + + # ── Check Dependencies ──────────────────────────────────────────── + + def test_check_dependencies_returns_empty_when_no_redis_data(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + app_model = SimpleNamespace(id=str(uuid4()), tenant_id=_DEFAULT_TENANT_ID) + result = service.check_dependencies(app_model=app_model) + assert result.leaked_dependencies == [] + + def test_check_dependencies_calls_analysis_service(self, db_session_with_containers, monkeypatch): + app_id = str(uuid4()) + pending = CheckDependenciesPendingData(dependencies=[], app_id=app_id) + redis_client.setex( + f"{CHECK_DEPENDENCIES_REDIS_KEY_PREFIX}{app_id}", + IMPORT_INFO_REDIS_EXPIRY, + pending.model_dump_json(), + ) + + dep = app_dsl_service.PluginDependency.model_validate( + { + "type": "package", + "value": { + "plugin_unique_identifier": "acme/foo", + "version": "1.0.0", + }, + } + ) + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "get_leaked_dependencies", + lambda *, tenant_id, dependencies: [dep], + ) + + service = AppDslService(db_session_with_containers) + result = service.check_dependencies(app_model=SimpleNamespace(id=app_id, tenant_id=_DEFAULT_TENANT_ID)) + assert len(result.leaked_dependencies) == 1 + + def test_check_dependencies_with_real_app(self, db_session_with_containers, mock_external_service_dependencies): + app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + + mock_dependencies_json = '{"app_id": "' + app.id + '", "dependencies": []}' + redis_client.setex( + f"{CHECK_DEPENDENCIES_REDIS_KEY_PREFIX}{app.id}", + IMPORT_INFO_REDIS_EXPIRY, + mock_dependencies_json, + ) + + dsl_service = AppDslService(db_session_with_containers) + result = dsl_service.check_dependencies(app_model=app) + assert result.leaked_dependencies == [] + + # ── Create/Update App ───────────────────────────────────────────── + + def test_create_or_update_app_missing_mode_raises(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + with pytest.raises(ValueError, match="loss app mode"): + service._create_or_update_app(app=None, data={"app": {}}, account=_account_mock()) + + def test_create_or_update_app_existing_app_updates_fields(self, db_session_with_containers, monkeypatch): + fixed_now = object() + monkeypatch.setattr(app_dsl_service, "naive_utc_now", lambda: fixed_now) + + workflow_service = MagicMock() + workflow_service.get_draft_workflow.return_value = None + monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) + monkeypatch.setattr( + app_dsl_service.variable_factory, + "build_environment_variable_from_mapping", + lambda _m: SimpleNamespace(kind="env"), + ) + monkeypatch.setattr( + app_dsl_service.variable_factory, + "build_conversation_variable_from_mapping", + lambda _m: SimpleNamespace(kind="conv"), + ) + + app = SimpleNamespace( + id=str(uuid4()), + tenant_id=_DEFAULT_TENANT_ID, + mode=AppMode.WORKFLOW.value, + name="old", + description="old-desc", + icon_type=IconType.EMOJI, + icon="old-icon", + icon_background="#111111", + updated_by=None, + updated_at=None, + app_model_config=None, + ) + service = AppDslService(db_session_with_containers) + updated = service._create_or_update_app( + app=app, + data={ + "app": { + "mode": AppMode.WORKFLOW.value, + "name": "yaml-name", + "icon_type": IconType.IMAGE, + "icon": "X", + }, + "workflow": {"graph": {"nodes": []}, "features": {}}, + }, + account=_account_mock(), + name="override-name", + description=None, + icon_background="#222222", + ) + assert updated is app + assert app.name == "override-name" + assert app.icon_type == IconType.IMAGE + assert app.icon == "X" + assert app.icon_background == "#222222" + assert app.updated_at is fixed_now + + def test_create_or_update_app_new_app_requires_tenant(self, db_session_with_containers): + account = _account_mock() + account.current_tenant_id = None + service = AppDslService(db_session_with_containers) + with pytest.raises(ValueError, match="Current tenant is not set"): + service._create_or_update_app( + app=None, + data={"app": {"mode": AppMode.WORKFLOW.value, "name": "n"}}, + account=account, + ) + + def test_create_or_update_app_creates_workflow_app_and_saves_dependencies( + self, db_session_with_containers, mock_external_service_dependencies + ): + _, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + + mock_wf_svc = mock_external_service_dependencies["workflow_service"] + mock_wf_svc.return_value.get_draft_workflow.return_value = MagicMock(unique_hash="uh") + + service = AppDslService(db_session_with_containers) + deps = [ + app_dsl_service.PluginDependency.model_validate( + { + "type": "package", + "value": { + "plugin_unique_identifier": "acme/foo", + "version": "1.0.0", + }, + } + ) + ] + data = { + "app": {"mode": AppMode.WORKFLOW.value, "name": "n"}, + "workflow": { + "graph": {"nodes": []}, + "features": {}, + }, + } + + app = service._create_or_update_app(app=None, data=data, account=account, dependencies=deps) + + assert app.tenant_id == account.current_tenant_id + mock_external_service_dependencies["app_was_created"].send.assert_called_once() + mock_wf_svc.return_value.sync_draft_workflow.assert_called_once() + + stored = redis_client.get(f"{CHECK_DEPENDENCIES_REDIS_KEY_PREFIX}{app.id}") + assert stored is not None + + def test_create_or_update_app_workflow_missing_workflow_data_raises(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + with pytest.raises(ValueError, match="Missing workflow data"): + service._create_or_update_app( + app=SimpleNamespace( + id=str(uuid4()), + tenant_id=_DEFAULT_TENANT_ID, + mode=AppMode.WORKFLOW.value, + name="n", + description="d", + icon_background="#fff", + app_model_config=None, + ), + data={"app": {"mode": AppMode.WORKFLOW.value}}, + account=_account_mock(), + ) + + def test_create_or_update_app_chat_requires_model_config(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + with pytest.raises(ValueError, match="Missing model_config"): + service._create_or_update_app( + app=SimpleNamespace( + id=str(uuid4()), + tenant_id=_DEFAULT_TENANT_ID, + mode=AppMode.CHAT.value, + name="n", + description="d", + icon_background="#fff", + app_model_config=None, + ), + data={"app": {"mode": AppMode.CHAT.value}}, + account=_account_mock(), + ) + + def test_create_or_update_app_chat_creates_model_config_and_sends_event( + self, db_session_with_containers, mock_external_service_dependencies + ): + app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + app.app_model_config_id = None + db_session_with_containers.commit() + + service = AppDslService(db_session_with_containers) + service._create_or_update_app( + app=app, + data={ + "app": {"mode": AppMode.CHAT.value}, + "model_config": {"model": {"provider": "openai"}}, + }, + account=account, + ) + + db_session_with_containers.expire_all() + assert app.app_model_config_id is not None + + def test_create_or_update_app_invalid_mode_raises(self, db_session_with_containers): + service = AppDslService(db_session_with_containers) + with pytest.raises(ValueError, match="Invalid app mode"): + service._create_or_update_app( + app=SimpleNamespace( + id=str(uuid4()), + tenant_id=_DEFAULT_TENANT_ID, + mode=AppMode.RAG_PIPELINE.value, + name="n", + description="d", + icon_background="#fff", + app_model_config=None, + ), + data={"app": {"mode": AppMode.RAG_PIPELINE.value}}, + account=_account_mock(), + ) + + # ── Export ───────────────────────────────────────────────────────── + + def test_export_dsl_delegates_by_mode(self, monkeypatch): + workflow_calls: list[bool] = [] + model_calls: list[bool] = [] + monkeypatch.setattr( + AppDslService, + "_append_workflow_export_data", + lambda **_kwargs: workflow_calls.append(True), + ) + monkeypatch.setattr( + AppDslService, + "_append_model_config_export_data", + lambda *_args, **_kwargs: model_calls.append(True), + ) + + workflow_app = SimpleNamespace( + mode=AppMode.WORKFLOW.value, + tenant_id=_DEFAULT_TENANT_ID, + name="n", + icon="i", + icon_type="emoji", + icon_background="#fff", + description="d", + use_icon_as_answer_icon=False, + app_model_config=None, + ) + AppDslService.export_dsl(workflow_app) + assert workflow_calls == [True] + + chat_app = SimpleNamespace( + mode=AppMode.CHAT.value, + tenant_id=_DEFAULT_TENANT_ID, + name="n", + icon="i", + icon_type="emoji", + icon_background="#fff", + description="d", + use_icon_as_answer_icon=False, + app_model_config=SimpleNamespace(to_dict=lambda: {"agent_mode": {"tools": []}}), + ) + AppDslService.export_dsl(chat_app) + assert model_calls == [True] + + def test_export_dsl_preserves_icon_and_icon_type(self, monkeypatch): + monkeypatch.setattr( + AppDslService, + "_append_workflow_export_data", + lambda **_kwargs: None, + ) + + emoji_app = SimpleNamespace( + mode=AppMode.WORKFLOW.value, + tenant_id=_DEFAULT_TENANT_ID, + name="Emoji App", + icon="🎨", + icon_type=IconType.EMOJI, + icon_background="#FF5733", + description="App with emoji icon", + use_icon_as_answer_icon=True, + app_model_config=None, + ) + yaml_output = AppDslService.export_dsl(emoji_app) + data = yaml.safe_load(yaml_output) + assert data["app"]["icon"] == "🎨" + assert data["app"]["icon_type"] == "emoji" + assert data["app"]["icon_background"] == "#FF5733" + + image_app = SimpleNamespace( + mode=AppMode.WORKFLOW.value, + tenant_id=_DEFAULT_TENANT_ID, + name="Image App", + icon="https://example.com/icon.png", + icon_type=IconType.IMAGE, + icon_background="#FFEAD5", + description="App with image icon", + use_icon_as_answer_icon=False, + app_model_config=None, + ) + yaml_output = AppDslService.export_dsl(image_app) + data = yaml.safe_load(yaml_output) + assert data["app"]["icon"] == "https://example.com/icon.png" + assert data["app"]["icon_type"] == "image" + assert data["app"]["icon_background"] == "#FFEAD5" + + def test_export_dsl_chat_app_success(self, db_session_with_containers, mock_external_service_dependencies): app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) - # Create model config for the app model_config = AppModelConfig( app_id=app.id, provider="openai", @@ -247,53 +920,38 @@ class TestAppDslService: created_by=account.id, updated_by=account.id, ) - model_config.id = fake.uuid4() - - # Set the app_model_config_id to link the config + model_config.id = str(uuid4()) app.app_model_config_id = model_config.id db_session_with_containers.add(model_config) db_session_with_containers.commit() - # Export DSL exported_dsl = AppDslService.export_dsl(app, include_secret=False) - - # Parse exported YAML exported_data = yaml.safe_load(exported_dsl) - # Verify exported data structure assert exported_data["kind"] == "app" assert exported_data["app"]["name"] == app.name assert exported_data["app"]["mode"] == app.mode - assert exported_data["app"]["icon"] == app.icon - assert exported_data["app"]["icon_background"] == app.icon_background - assert exported_data["app"]["description"] == app.description - - # Verify model config was exported assert "model_config" in exported_data - # The exported model_config structure may be different from the database structure - # Check that the model config exists and has the expected content - assert exported_data["model_config"] is not None - - # Verify dependencies were exported assert "dependencies" in exported_data - assert isinstance(exported_data["dependencies"], list) def test_export_dsl_workflow_app_success(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test successful DSL export for workflow app. - """ - fake = Faker() app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) - - # Update app to workflow mode app.mode = "workflow" db_session_with_containers.commit() - # Mock workflow service to return a workflow mock_workflow = MagicMock() mock_workflow.to_dict.return_value = { - "graph": {"nodes": [{"id": "start", "type": "start", "data": {"type": "start"}}], "edges": []}, + "graph": { + "nodes": [ + { + "id": "start", + "type": "start", + "data": {"type": "start"}, + } + ], + "edges": [], + }, "features": {}, "environment_variables": [], "conversation_variables": [], @@ -302,54 +960,40 @@ class TestAppDslService: "workflow_service" ].return_value.get_draft_workflow.return_value = mock_workflow - # Export DSL exported_dsl = AppDslService.export_dsl(app, include_secret=False) - - # Parse exported YAML exported_data = yaml.safe_load(exported_dsl) - # Verify exported data structure assert exported_data["kind"] == "app" - assert exported_data["app"]["name"] == app.name assert exported_data["app"]["mode"] == "workflow" - - # Verify workflow was exported assert "workflow" in exported_data - assert "graph" in exported_data["workflow"] - assert "nodes" in exported_data["workflow"]["graph"] - - # Verify dependencies were exported assert "dependencies" in exported_data - assert isinstance(exported_data["dependencies"], list) - - # Verify workflow service was called - mock_external_service_dependencies["workflow_service"].return_value.get_draft_workflow.assert_called_once_with( - app, None - ) def test_export_dsl_with_workflow_id_success(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test successful DSL export with specific workflow ID. - """ - fake = Faker() app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) - - # Update app to workflow mode app.mode = "workflow" db_session_with_containers.commit() - # Mock workflow service to return a workflow when specific workflow_id is provided mock_workflow = MagicMock() mock_workflow.to_dict.return_value = { - "graph": {"nodes": [{"id": "start", "type": "start", "data": {"type": "start"}}], "edges": []}, + "graph": { + "nodes": [ + { + "id": "start", + "type": "start", + "data": {"type": "start"}, + } + ], + "edges": [], + }, "features": {}, "environment_variables": [], "conversation_variables": [], } - # Mock the get_draft_workflow method to return different workflows based on workflow_id - def mock_get_draft_workflow(app_model, workflow_id=None): - if workflow_id == "specific-workflow-id": + workflow_id = str(uuid4()) + + def mock_get_draft_workflow(app_model, wf_id=None): + if wf_id == workflow_id: return mock_workflow return None @@ -357,78 +1001,351 @@ class TestAppDslService: "workflow_service" ].return_value.get_draft_workflow.side_effect = mock_get_draft_workflow - # Export DSL with specific workflow ID - exported_dsl = AppDslService.export_dsl(app, include_secret=False, workflow_id="specific-workflow-id") - - # Parse exported YAML + exported_dsl = AppDslService.export_dsl(app, include_secret=False, workflow_id=workflow_id) exported_data = yaml.safe_load(exported_dsl) - # Verify exported data structure assert exported_data["kind"] == "app" - assert exported_data["app"]["name"] == app.name - assert exported_data["app"]["mode"] == "workflow" - - # Verify workflow was exported assert "workflow" in exported_data - assert "graph" in exported_data["workflow"] - assert "nodes" in exported_data["workflow"]["graph"] - - # Verify dependencies were exported - assert "dependencies" in exported_data - assert isinstance(exported_data["dependencies"], list) - - # Verify workflow service was called with specific workflow ID - mock_external_service_dependencies["workflow_service"].return_value.get_draft_workflow.assert_called_once_with( - app, "specific-workflow-id" - ) def test_export_dsl_with_invalid_workflow_id_raises_error( self, db_session_with_containers, mock_external_service_dependencies ): - """ - Test that export_dsl raises error when invalid workflow ID is provided. - """ - fake = Faker() app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) - - # Update app to workflow mode app.mode = "workflow" db_session_with_containers.commit() - # Mock workflow service to return None when invalid workflow ID is provided mock_external_service_dependencies["workflow_service"].return_value.get_draft_workflow.return_value = None - # Export DSL with invalid workflow ID should raise ValueError - with pytest.raises(ValueError, match="Missing draft workflow configuration, please check."): - AppDslService.export_dsl(app, include_secret=False, workflow_id="invalid-workflow-id") + with pytest.raises( + ValueError, + match="Missing draft workflow configuration, please check.", + ): + AppDslService.export_dsl(app, include_secret=False, workflow_id=str(uuid4())) - # Verify workflow service was called with the invalid workflow ID - mock_external_service_dependencies["workflow_service"].return_value.get_draft_workflow.assert_called_once_with( - app, "invalid-workflow-id" + # ── Workflow Export Data ─────────────────────────────────────────── + + def test_append_workflow_export_data_filters_and_overrides(self, monkeypatch): + workflow_dict = { + "graph": { + "nodes": [ + { + "data": { + "type": BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL, + "dataset_ids": ["d1", "d2"], + } + }, + { + "data": { + "type": BuiltinNodeTypes.TOOL, + "credential_id": "secret", + } + }, + { + "data": { + "type": BuiltinNodeTypes.AGENT, + "agent_parameters": {"tools": {"value": [{"credential_id": "secret"}]}}, + } + }, + { + "data": { + "type": TRIGGER_SCHEDULE_NODE_TYPE, + "config": {"x": 1}, + } + }, + { + "data": { + "type": TRIGGER_WEBHOOK_NODE_TYPE, + "webhook_url": "x", + "webhook_debug_url": "y", + } + }, + { + "data": { + "type": TRIGGER_PLUGIN_NODE_TYPE, + "subscription_id": "s", + } + }, + ] + } + } + + workflow = SimpleNamespace(to_dict=lambda *, include_secret: workflow_dict) + workflow_service = MagicMock() + workflow_service.get_draft_workflow.return_value = workflow + monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) + + monkeypatch.setattr( + AppDslService, + "encrypt_dataset_id", + lambda *, dataset_id, tenant_id: f"enc:{tenant_id}:{dataset_id}", + ) + monkeypatch.setattr( + app_dsl_service.TriggerScheduleNode, + "get_default_config", + lambda: {"config": {"default": True}}, + ) + monkeypatch.setattr( + AppDslService, + "_extract_dependencies_from_workflow", + lambda *_args, **_kwargs: ["dep-1"], + ) + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "generate_dependencies", + lambda *, tenant_id, dependencies: [ + SimpleNamespace( + model_dump=lambda: { + "tenant": tenant_id, + "dep": dependencies[0], + } + ) + ], + ) + monkeypatch.setattr(app_dsl_service, "jsonable_encoder", lambda x: x) + + export_data: dict = {} + AppDslService._append_workflow_export_data( + export_data=export_data, + app_model=SimpleNamespace(tenant_id=_DEFAULT_TENANT_ID), + include_secret=False, + workflow_id=None, ) - def test_check_dependencies_success(self, db_session_with_containers, mock_external_service_dependencies): - """ - Test successful dependency checking. - """ - fake = Faker() - app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies) + nodes = export_data["workflow"]["graph"]["nodes"] + assert nodes[0]["data"]["dataset_ids"] == [ + f"enc:{_DEFAULT_TENANT_ID}:d1", + f"enc:{_DEFAULT_TENANT_ID}:d2", + ] + assert "credential_id" not in nodes[1]["data"] + assert "credential_id" not in nodes[2]["data"]["agent_parameters"]["tools"]["value"][0] + assert nodes[3]["data"]["config"] == {"default": True} + assert nodes[4]["data"]["webhook_url"] == "" + assert nodes[4]["data"]["webhook_debug_url"] == "" + assert nodes[5]["data"]["subscription_id"] == "" + assert export_data["dependencies"] == [{"tenant": _DEFAULT_TENANT_ID, "dep": "dep-1"}] - # Mock Redis to return dependencies - mock_dependencies_json = '{"app_id": "' + app.id + '", "dependencies": []}' - mock_external_service_dependencies["redis_client"].get.return_value = mock_dependencies_json + def test_append_workflow_export_data_missing_workflow_raises(self, monkeypatch): + workflow_service = MagicMock() + workflow_service.get_draft_workflow.return_value = None + monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) - # Check dependencies - dsl_service = AppDslService(db_session_with_containers) - result = dsl_service.check_dependencies(app_model=app) + with pytest.raises(ValueError, match="Missing draft workflow configuration"): + AppDslService._append_workflow_export_data( + export_data={}, + app_model=SimpleNamespace(tenant_id=_DEFAULT_TENANT_ID), + include_secret=False, + workflow_id=None, + ) - # Verify result - assert result.leaked_dependencies == [] + # ── Model Config Export Data ────────────────────────────────────── - # Verify Redis was queried - mock_external_service_dependencies["redis_client"].get.assert_called_once_with( - f"app_check_dependencies:{app.id}" + def test_append_model_config_export_data_filters_credential_id(self, monkeypatch): + monkeypatch.setattr( + AppDslService, + "_extract_dependencies_from_model_config", + lambda *_args, **_kwargs: ["dep-1"], + ) + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "generate_dependencies", + lambda *, tenant_id, dependencies: [ + SimpleNamespace( + model_dump=lambda: { + "tenant": tenant_id, + "dep": dependencies[0], + } + ) + ], + ) + monkeypatch.setattr(app_dsl_service, "jsonable_encoder", lambda x: x) + + app_model_config = SimpleNamespace(to_dict=lambda: {"agent_mode": {"tools": [{"credential_id": "secret"}]}}) + app_model = SimpleNamespace(tenant_id=_DEFAULT_TENANT_ID, app_model_config=app_model_config) + export_data: dict = {} + + AppDslService._append_model_config_export_data(export_data, app_model) + assert export_data["model_config"]["agent_mode"]["tools"] == [{}] + assert export_data["dependencies"] == [{"tenant": _DEFAULT_TENANT_ID, "dep": "dep-1"}] + + def test_append_model_config_export_data_requires_app_config(self): + with pytest.raises(ValueError, match="Missing app configuration"): + AppDslService._append_model_config_export_data({}, SimpleNamespace(app_model_config=None)) + + # ── Dependency Extraction ───────────────────────────────────────── + + def test_extract_dependencies_from_workflow_graph_covers_all_node_types(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "analyze_tool_dependency", + lambda provider_id: f"tool:{provider_id}", + ) + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "analyze_model_provider_dependency", + lambda provider: f"model:{provider}", ) - # Verify dependencies service was called - mock_external_service_dependencies["dependencies_service"].get_leaked_dependencies.assert_called_once() + monkeypatch.setattr( + app_dsl_service.ToolNodeData, + "model_validate", + lambda _d: SimpleNamespace(provider_id="p1"), + ) + monkeypatch.setattr( + app_dsl_service.LLMNodeData, + "model_validate", + lambda _d: SimpleNamespace(model=SimpleNamespace(provider="m1")), + ) + monkeypatch.setattr( + app_dsl_service.QuestionClassifierNodeData, + "model_validate", + lambda _d: SimpleNamespace(model=SimpleNamespace(provider="m2")), + ) + monkeypatch.setattr( + app_dsl_service.ParameterExtractorNodeData, + "model_validate", + lambda _d: SimpleNamespace(model=SimpleNamespace(provider="m3")), + ) + + def kr_validate(_d): + return SimpleNamespace( + retrieval_mode="multiple", + multiple_retrieval_config=SimpleNamespace( + reranking_mode="weighted_score", + weights=SimpleNamespace(vector_setting=SimpleNamespace(embedding_provider_name="m4")), + reranking_model=None, + ), + single_retrieval_config=None, + ) + + monkeypatch.setattr( + app_dsl_service.KnowledgeRetrievalNodeData, + "model_validate", + kr_validate, + ) + + graph = { + "nodes": [ + {"data": {"type": BuiltinNodeTypes.TOOL}}, + {"data": {"type": BuiltinNodeTypes.LLM}}, + {"data": {"type": BuiltinNodeTypes.QUESTION_CLASSIFIER}}, + {"data": {"type": BuiltinNodeTypes.PARAMETER_EXTRACTOR}}, + {"data": {"type": BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL}}, + {"data": {"type": "unknown"}}, + ] + } + + deps = AppDslService._extract_dependencies_from_workflow_graph(graph) + assert deps == [ + "tool:p1", + "model:m1", + "model:m2", + "model:m3", + "model:m4", + ] + + def test_extract_dependencies_from_workflow_graph_handles_exceptions(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.ToolNodeData, + "model_validate", + lambda _d: (_ for _ in ()).throw(ValueError("bad")), + ) + deps = AppDslService._extract_dependencies_from_workflow_graph( + {"nodes": [{"data": {"type": BuiltinNodeTypes.TOOL}}]} + ) + assert deps == [] + + def test_extract_dependencies_from_model_config_parses_providers(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "analyze_model_provider_dependency", + lambda provider: f"model:{provider}", + ) + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "analyze_tool_dependency", + lambda provider_id: f"tool:{provider_id}", + ) + + deps = AppDslService._extract_dependencies_from_model_config( + { + "model": {"provider": "p1"}, + "dataset_configs": { + "datasets": {"datasets": [{"reranking_model": {"reranking_provider_name": {"provider": "p2"}}}]} + }, + "agent_mode": {"tools": [{"provider_id": "t1"}]}, + } + ) + assert deps == ["model:p1", "model:p2", "tool:t1"] + + def test_extract_dependencies_from_model_config_handles_exceptions(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "analyze_model_provider_dependency", + lambda _p: (_ for _ in ()).throw(ValueError("bad")), + ) + deps = AppDslService._extract_dependencies_from_model_config({"model": {"provider": "p1"}}) + assert deps == [] + + # ── Leaked Dependencies ─────────────────────────────────────────── + + def test_get_leaked_dependencies_empty_returns_empty(self): + assert AppDslService.get_leaked_dependencies(_DEFAULT_TENANT_ID, []) == [] + + def test_get_leaked_dependencies_delegates(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.DependenciesAnalysisService, + "get_leaked_dependencies", + lambda *, tenant_id, dependencies: [SimpleNamespace(tenant_id=tenant_id, deps=dependencies)], + ) + res = AppDslService.get_leaked_dependencies(_DEFAULT_TENANT_ID, [SimpleNamespace(id="x")]) + assert len(res) == 1 + + # ── Encryption/Decryption ───────────────────────────────────────── + + def test_encrypt_decrypt_dataset_id_respects_config(self, monkeypatch): + tenant_id = _DEFAULT_TENANT_ID + dataset_uuid = "00000000-0000-0000-0000-000000000000" + + monkeypatch.setattr( + app_dsl_service.dify_config, + "DSL_EXPORT_ENCRYPT_DATASET_ID", + False, + ) + assert AppDslService.encrypt_dataset_id(dataset_id=dataset_uuid, tenant_id=tenant_id) == dataset_uuid + + monkeypatch.setattr( + app_dsl_service.dify_config, + "DSL_EXPORT_ENCRYPT_DATASET_ID", + True, + ) + encrypted = AppDslService.encrypt_dataset_id(dataset_id=dataset_uuid, tenant_id=tenant_id) + assert encrypted != dataset_uuid + assert base64.b64decode(encrypted.encode()) + assert AppDslService.decrypt_dataset_id(encrypted_data=encrypted, tenant_id=tenant_id) == dataset_uuid + + def test_decrypt_dataset_id_returns_plain_uuid_unchanged(self): + value = "00000000-0000-0000-0000-000000000000" + assert AppDslService.decrypt_dataset_id(encrypted_data=value, tenant_id=_DEFAULT_TENANT_ID) == value + + def test_decrypt_dataset_id_returns_none_on_invalid_data(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.dify_config, + "DSL_EXPORT_ENCRYPT_DATASET_ID", + True, + ) + assert AppDslService.decrypt_dataset_id(encrypted_data="not-base64", tenant_id=_DEFAULT_TENANT_ID) is None + + def test_decrypt_dataset_id_returns_none_when_decrypted_is_not_uuid(self, monkeypatch): + monkeypatch.setattr( + app_dsl_service.dify_config, + "DSL_EXPORT_ENCRYPT_DATASET_ID", + True, + ) + encrypted = AppDslService.encrypt_dataset_id(dataset_id="not-a-uuid", tenant_id=_DEFAULT_TENANT_ID) + assert AppDslService.decrypt_dataset_id(encrypted_data=encrypted, tenant_id=_DEFAULT_TENANT_ID) is None + + # ── Utility ─────────────────────────────────────────────────────── + + def test_is_valid_uuid_handles_bad_inputs(self): + assert AppDslService._is_valid_uuid("00000000-0000-0000-0000-000000000000") is True + assert AppDslService._is_valid_uuid("nope") is False diff --git a/api/tests/test_containers_integration_tests/services/test_audio_service_db.py b/api/tests/test_containers_integration_tests/services/test_audio_service_db.py new file mode 100644 index 0000000000..2593b53fe8 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_audio_service_db.py @@ -0,0 +1,211 @@ +""" +Integration tests for AudioService.transcript_tts message-ID path. + +Migrated from unit_tests/services/test_audio_service.py, replacing +db.session.get mock patches with real Message rows persisted in PostgreSQL. + +Covers: +- transcript_tts with valid message_id that resolves to a real Message +- transcript_tts returns None for invalid (non-UUID) message_id +- transcript_tts returns None when message_id is a valid UUID but no row exists +- transcript_tts returns None when message exists but has an empty answer +""" + +from collections.abc import Generator +from decimal import Decimal +from unittest.mock import MagicMock, patch +from uuid import uuid4 + +import pytest +from sqlalchemy import delete, select +from sqlalchemy.orm import Session + +from core.app.entities.app_invoke_entities import InvokeFrom +from models.account import TenantAccountJoin +from models.enums import ConversationFromSource, MessageStatus +from models.model import App, AppMode, Conversation, Message +from services.audio_service import AudioService +from tests.test_containers_integration_tests.controllers.console.helpers import ( + create_console_account_and_tenant, + create_console_app, +) + + +def _create_conversation(db_session: Session, app: App, account_id: str) -> Conversation: + """Create a Conversation row via flush() so the rollback-based teardown can remove it.""" + conversation = Conversation( + app_id=app.id, + app_model_config_id=None, + model_provider=None, + model_id="", + override_model_configs=None, + mode=app.mode, + name=f"Conversation {uuid4()}", + summary="", + inputs={}, + introduction="", + system_instruction="", + system_instruction_tokens=0, + status="normal", + invoke_from=InvokeFrom.WEB_APP.value, + from_source=ConversationFromSource.CONSOLE, + from_end_user_id=None, + from_account_id=account_id, + dialogue_count=0, + is_deleted=False, + ) + db_session.add(conversation) + db_session.flush() + return conversation + + +def _create_message( + db_session: Session, + app: App, + conversation: Conversation, + account_id: str, + *, + answer: str = "Message answer text", + status: MessageStatus | str = MessageStatus.NORMAL, +) -> Message: + """Create a Message row via flush() so the rollback-based teardown can remove it.""" + message = Message( + app_id=app.id, + model_provider=None, + model_id="", + override_model_configs=None, + conversation_id=conversation.id, + inputs={}, + query="Test query", + message={"messages": [{"role": "user", "content": "Test query"}]}, + message_tokens=0, + message_unit_price=Decimal(0), + message_price_unit=Decimal("0.001"), + answer=answer, + answer_tokens=0, + answer_unit_price=Decimal(0), + answer_price_unit=Decimal("0.001"), + parent_message_id=None, + provider_response_latency=0, + total_price=Decimal(0), + currency="USD", + status=status, + invoke_from=InvokeFrom.WEB_APP.value, + from_source=ConversationFromSource.CONSOLE, + from_end_user_id=None, + from_account_id=account_id, + ) + db_session.add(message) + db_session.flush() + return message + + +class TestAudioServiceTranscriptTTSMessageLookup: + """Integration tests for AudioService.transcript_tts message-ID lookup via real DB.""" + + @pytest.fixture(autouse=True) + def _setup_cleanup(self, db_session_with_containers: Session) -> Generator[None, None, None]: + """Track rows created by shared helpers that commit, then clean up after the test. + + The shared console helpers (create_console_account_and_tenant, create_console_app) + commit their inserts so the rows survive a simple rollback. This fixture records + the app/account/tenant created per test and explicitly deletes them after the test + so the DB does not accumulate state across tests. Conversation/Message rows are + created via flush() only, so the trailing rollback removes them. + """ + self._committed_rows: list = [] + yield + db_session_with_containers.rollback() + for entity in reversed(self._committed_rows): + db_session_with_containers.execute(delete(type(entity)).where(type(entity).id == entity.id)) + db_session_with_containers.commit() + + def _setup_app_and_account(self, db_session: Session) -> tuple[App, str, str]: + """Create committed app/account/tenant using shared helpers and track them for cleanup.""" + account, tenant = create_console_account_and_tenant(db_session) + app = create_console_app(db_session, tenant_id=tenant.id, account_id=account.id, mode=AppMode.CHAT) + + # Track rows in the order they must be deleted (FK-safe: app and join before account/tenant) + self._committed_rows.append(app) + join = db_session.scalar( + select(TenantAccountJoin).where( + TenantAccountJoin.account_id == account.id, + TenantAccountJoin.tenant_id == tenant.id, + ) + ) + if join is not None: + self._committed_rows.append(join) + self._committed_rows.extend([account, tenant]) + return app, account.id, tenant.id + + def test_transcript_tts_with_message_id_success(self, db_session_with_containers: Session) -> None: + """transcript_tts invokes TTS with the message answer when message_id resolves to a real row.""" + app, account_id, _ = self._setup_app_and_account(db_session_with_containers) + conversation = _create_conversation(db_session_with_containers, app, account_id) + message = _create_message( + db_session_with_containers, + app, + conversation, + account_id, + answer="Hello from message", + ) + + mock_model_instance = MagicMock() + mock_model_instance.invoke_tts.return_value = b"audio from message" + mock_model_manager = MagicMock() + mock_model_manager.get_default_model_instance.return_value = mock_model_instance + + with patch("services.audio_service.ModelManager.for_tenant", return_value=mock_model_manager): + result = AudioService.transcript_tts( + app_model=app, + message_id=message.id, + voice="en-US-Neural", + ) + + assert result == b"audio from message" + mock_model_instance.invoke_tts.assert_called_once_with( + content_text="Hello from message", + voice="en-US-Neural", + ) + + def test_transcript_tts_returns_none_for_invalid_message_id(self, db_session_with_containers: Session) -> None: + """transcript_tts returns None immediately when message_id is not a valid UUID.""" + app, _, _ = self._setup_app_and_account(db_session_with_containers) + + result = AudioService.transcript_tts( + app_model=app, + message_id="invalid-uuid", + ) + + assert result is None + + def test_transcript_tts_returns_none_for_nonexistent_message(self, db_session_with_containers: Session) -> None: + """transcript_tts returns None when message_id is a valid UUID but no Message row exists.""" + app, _, _ = self._setup_app_and_account(db_session_with_containers) + + result = AudioService.transcript_tts( + app_model=app, + message_id=str(uuid4()), + ) + + assert result is None + + def test_transcript_tts_returns_none_for_empty_message_answer(self, db_session_with_containers: Session) -> None: + """transcript_tts returns None when the resolved message has an empty answer.""" + app, account_id, _ = self._setup_app_and_account(db_session_with_containers) + conversation = _create_conversation(db_session_with_containers, app, account_id) + message = _create_message( + db_session_with_containers, + app, + conversation, + account_id, + answer="", + status=MessageStatus.NORMAL, + ) + + result = AudioService.transcript_tts( + app_model=app, + message_id=message.id, + ) + + assert result is None diff --git a/api/tests/test_containers_integration_tests/services/test_billing_service.py b/api/tests/test_containers_integration_tests/services/test_billing_service.py index 76708b36b1..8092c7ad75 100644 --- a/api/tests/test_containers_integration_tests/services/test_billing_service.py +++ b/api/tests/test_containers_integration_tests/services/test_billing_service.py @@ -1,9 +1,13 @@ import json +from collections.abc import Generator from unittest.mock import patch +from uuid import uuid4 import pytest +from sqlalchemy.orm import Session from extensions.ext_redis import redis_client +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from services.billing_service import BillingService @@ -363,3 +367,62 @@ class TestBillingServiceGetPlanBulkWithCache: assert ttl_1_new <= 600 assert ttl_2 > 0 assert ttl_2 <= 600 + + +class TestBillingServiceIsTenantOwnerOrAdmin: + """ + Integration tests for BillingService.is_tenant_owner_or_admin. + + Verifies that non-privileged roles (EDITOR, DATASET_OPERATOR) raise ValueError + when checked against real TenantAccountJoin rows in PostgreSQL. + """ + + @pytest.fixture(autouse=True) + def _auto_rollback(self, db_session_with_containers: Session) -> Generator[None, None, None]: + yield + db_session_with_containers.rollback() + + def _create_account_with_tenant_role(self, db_session: Session, role: TenantAccountRole) -> tuple[Account, Tenant]: + tenant = Tenant(name=f"Tenant {uuid4()}") + db_session.add(tenant) + db_session.flush() + + account = Account( + name=f"Account {uuid4()}", + email=f"billing_{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) + db_session.add(account) + db_session.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session.add(join) + db_session.flush() + + # Wire up in-memory reference so current_tenant_id resolves + account._current_tenant = tenant + return account, tenant + + def test_is_tenant_owner_or_admin_editor_role_raises_error(self, db_session_with_containers: Session) -> None: + """is_tenant_owner_or_admin raises ValueError for EDITOR role.""" + account, _ = self._create_account_with_tenant_role(db_session_with_containers, TenantAccountRole.EDITOR) + + with pytest.raises(ValueError, match="Only team owner or team admin can perform this action"): + BillingService.is_tenant_owner_or_admin(account) + + def test_is_tenant_owner_or_admin_dataset_operator_raises_error(self, db_session_with_containers: Session) -> None: + """is_tenant_owner_or_admin raises ValueError for DATASET_OPERATOR role.""" + account, _ = self._create_account_with_tenant_role( + db_session_with_containers, TenantAccountRole.DATASET_OPERATOR + ) + + with pytest.raises(ValueError, match="Only team owner or team admin can perform this action"): + BillingService.is_tenant_owner_or_admin(account) diff --git a/api/tests/test_containers_integration_tests/services/test_conversation_service.py b/api/tests/test_containers_integration_tests/services/test_conversation_service.py index 6180d98b1e..98c38f2b5f 100644 --- a/api/tests/test_containers_integration_tests/services/test_conversation_service.py +++ b/api/tests/test_containers_integration_tests/services/test_conversation_service.py @@ -637,6 +637,40 @@ class TestConversationServiceSummarization: assert conversation.name == new_name assert conversation.updated_at == mock_time + @patch("services.conversation_service.LLMGenerator.generate_conversation_name") + def test_rename_with_auto_generate(self, mock_llm_generator, db_session_with_containers): + """ + Test rename delegates to auto_generate_name when auto_generate is True. + + When auto_generate is True, the service should call auto_generate_name + which uses an LLM to create a descriptive conversation title. + """ + # Arrange + app_model, user = ConversationServiceIntegrationTestDataFactory.create_app_and_account( + db_session_with_containers + ) + conversation = ConversationServiceIntegrationTestDataFactory.create_conversation( + db_session_with_containers, app_model, user + ) + ConversationServiceIntegrationTestDataFactory.create_message( + db_session_with_containers, app_model, conversation, user + ) + generated_name = "Auto Generated Name" + mock_llm_generator.return_value = generated_name + + # Act + result = ConversationService.rename( + app_model=app_model, + conversation_id=conversation.id, + user=user, + name=None, + auto_generate=True, + ) + + # Assert + assert result == conversation + assert conversation.name == generated_name + class TestConversationServiceMessageAnnotation: """ @@ -1066,3 +1100,32 @@ class TestConversationServiceExport: not_deleted = db_session_with_containers.scalar(select(Conversation).where(Conversation.id == conversation.id)) assert not_deleted is not None mock_delete_task.delay.assert_not_called() + + @patch("services.conversation_service.delete_conversation_related_data") + def test_delete_handles_exception_and_rollback(self, mock_delete_task, db_session_with_containers): + """ + Test that delete propagates exceptions and does not trigger the cleanup task. + + When a DB error occurs during deletion, the service must rollback the + transaction and re-raise the exception without scheduling async cleanup. + """ + # Arrange + app_model, user = ConversationServiceIntegrationTestDataFactory.create_app_and_account( + db_session_with_containers + ) + conversation = ConversationServiceIntegrationTestDataFactory.create_conversation( + db_session_with_containers, app_model, user + ) + conversation_id = conversation.id + + # Act — force an error during the delete to exercise the rollback path + with patch("services.conversation_service.db.session.delete", side_effect=Exception("DB error")): + with pytest.raises(Exception, match="DB error"): + ConversationService.delete(app_model=app_model, conversation_id=conversation_id, user=user) + + # Assert — async cleanup must NOT have been scheduled + mock_delete_task.delay.assert_not_called() + + # Conversation is still present because the deletion was never committed + still_there = db_session_with_containers.scalar(select(Conversation).where(Conversation.id == conversation_id)) + assert still_there is not None diff --git a/api/tests/test_containers_integration_tests/services/test_conversation_service_variables.py b/api/tests/test_containers_integration_tests/services/test_conversation_service_variables.py new file mode 100644 index 0000000000..0b7bd9ca64 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_conversation_service_variables.py @@ -0,0 +1,524 @@ +from __future__ import annotations + +from datetime import datetime, timedelta +from unittest.mock import patch +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import sessionmaker + +from core.app.entities.app_invoke_entities import InvokeFrom +from extensions.ext_database import db +from graphon.variables import FloatVariable, IntegerVariable, StringVariable +from models.account import Account, Tenant, TenantAccountJoin +from models.enums import ConversationFromSource +from models.model import App, Conversation, EndUser +from models.workflow import ConversationVariable +from services.conversation_service import ConversationService +from services.errors.conversation import ( + ConversationVariableNotExistsError, + ConversationVariableTypeMismatchError, + LastConversationNotExistsError, +) + + +class ConversationServiceVariableIntegrationFactory: + @staticmethod + def create_app_and_account(db_session_with_containers): + tenant = Tenant(name=f"Tenant {uuid4()}") + db_session_with_containers.add(tenant) + db_session_with_containers.flush() + + account = Account( + name=f"Account {uuid4()}", + email=f"conversation-variable-{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) + db_session_with_containers.add(account) + db_session_with_containers.flush() + + tenant_join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role="owner", + current=True, + ) + db_session_with_containers.add(tenant_join) + db_session_with_containers.flush() + + app = App( + tenant_id=tenant.id, + name=f"App {uuid4()}", + description="", + mode="chat", + icon_type="emoji", + icon="bot", + icon_background="#FFFFFF", + enable_site=False, + enable_api=True, + api_rpm=100, + api_rph=100, + is_demo=False, + is_public=False, + is_universal=False, + created_by=account.id, + updated_by=account.id, + ) + db_session_with_containers.add(app) + db_session_with_containers.commit() + + return app, account + + @staticmethod + def create_end_user(db_session_with_containers, app: App): + end_user = EndUser( + tenant_id=app.tenant_id, + app_id=app.id, + type=InvokeFrom.SERVICE_API.value, + external_user_id=f"external-{uuid4()}", + name=f"End User {uuid4()}", + is_anonymous=False, + session_id=f"session-{uuid4()}", + ) + db_session_with_containers.add(end_user) + db_session_with_containers.commit() + return end_user + + @staticmethod + def create_conversation( + db_session_with_containers, + app: App, + user: Account | EndUser, + *, + name: str | None = None, + invoke_from: InvokeFrom = InvokeFrom.WEB_APP, + created_at: datetime | None = None, + updated_at: datetime | None = None, + ) -> Conversation: + conversation = Conversation( + app_id=app.id, + app_model_config_id=None, + model_provider=None, + model_id="", + override_model_configs=None, + mode=app.mode, + name=name or f"Conversation {uuid4()}", + summary="", + inputs={}, + introduction="", + system_instruction="", + system_instruction_tokens=0, + status="normal", + invoke_from=invoke_from.value, + from_source=ConversationFromSource.API if isinstance(user, EndUser) else ConversationFromSource.CONSOLE, + from_end_user_id=user.id if isinstance(user, EndUser) else None, + from_account_id=user.id if isinstance(user, Account) else None, + dialogue_count=0, + is_deleted=False, + ) + conversation.inputs = {} + if created_at is not None: + conversation.created_at = created_at + if updated_at is not None: + conversation.updated_at = updated_at + + db_session_with_containers.add(conversation) + db_session_with_containers.commit() + return conversation + + @staticmethod + def create_variable( + db_session_with_containers, + *, + app: App, + conversation: Conversation, + variable: StringVariable | FloatVariable | IntegerVariable, + created_at: datetime | None = None, + ) -> ConversationVariable: + row = ConversationVariable.from_variable(app_id=app.id, conversation_id=conversation.id, variable=variable) + if created_at is not None: + row.created_at = created_at + row.updated_at = created_at + + db_session_with_containers.add(row) + db_session_with_containers.commit() + return row + + +@pytest.fixture +def real_conversation_service_session_factory(flask_app_with_containers): + del flask_app_with_containers + real_session_maker = sessionmaker(bind=db.engine, expire_on_commit=False) + + with ( + patch("services.conversation_service.session_factory.create_session", side_effect=lambda: real_session_maker()), + patch("services.conversation_service.session_factory.get_session_maker", return_value=real_session_maker), + ): + yield + + +class TestConversationServiceVariables: + def test_get_conversational_variable_success( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + older_time = datetime(2024, 1, 1, 12, 0, 0) + newer_time = older_time + timedelta(minutes=5) + + first_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="topic", value="billing"), + created_at=older_time, + ) + second_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="priority", value="high"), + created_at=newer_time, + ) + + result = ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=10, + last_id=None, + ) + + assert [item["id"] for item in result.data] == [first_variable.id, second_variable.id] + assert [item["name"] for item in result.data] == ["topic", "priority"] + assert result.limit == 10 + assert result.has_more is False + + def test_get_conversational_variable_with_last_id( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + base_time = datetime(2024, 1, 1, 9, 0, 0) + + first_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="topic", value="billing"), + created_at=base_time, + ) + second_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="priority", value="high"), + created_at=base_time + timedelta(minutes=1), + ) + third_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="owner", value="alice"), + created_at=base_time + timedelta(minutes=2), + ) + + result = ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=10, + last_id=first_variable.id, + ) + + assert [item["id"] for item in result.data] == [second_variable.id, third_variable.id] + assert result.has_more is False + + def test_get_conversational_variable_last_id_not_found_raises_error( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + + with pytest.raises(ConversationVariableNotExistsError): + ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=10, + last_id=str(uuid4()), + ) + + def test_get_conversational_variable_sets_has_more( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + + for index in range(3): + factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name=f"var_{index}", value=f"value_{index}"), + created_at=datetime(2024, 1, 1, 10, 0, index), + ) + + result = ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=2, + last_id=None, + ) + + assert len(result.data) == 2 + assert result.has_more is True + + def test_update_conversation_variable_success( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + existing = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="topic", value="billing"), + ) + updated_at = datetime(2024, 1, 1, 15, 0, 0) + + with patch("services.conversation_service.naive_utc_now", return_value=updated_at): + result = ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=existing.id, + user=account, + new_value="support", + ) + + db_session_with_containers.expire_all() + persisted = db_session_with_containers.get(ConversationVariable, (existing.id, conversation.id)) + + assert persisted is not None + assert persisted.to_variable().value == "support" + assert result["id"] == existing.id + assert result["value"] == "support" + assert result["updated_at"] == updated_at + + def test_update_conversation_variable_not_found_raises_error( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + + with pytest.raises(ConversationVariableNotExistsError): + ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=str(uuid4()), + user=account, + new_value="support", + ) + + def test_update_conversation_variable_type_mismatch_raises_error( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + existing = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=FloatVariable(id=str(uuid4()), name="score", value=1.5), + ) + + with pytest.raises(ConversationVariableTypeMismatchError, match="expects float"): + ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=existing.id, + user=account, + new_value="wrong-type", + ) + + def test_update_conversation_variable_integer_number_compatibility( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + existing = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=IntegerVariable(id=str(uuid4()), name="attempts", value=1), + ) + + result = ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=existing.id, + user=account, + new_value=42, + ) + + db_session_with_containers.expire_all() + persisted = db_session_with_containers.get(ConversationVariable, (existing.id, conversation.id)) + + assert persisted is not None + assert persisted.to_variable().value == 42 + assert result["value"] == 42 + + +class TestConversationServicePaginationWithContainers: + def test_pagination_by_last_id_raises_error_when_last_id_missing(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + + with pytest.raises(LastConversationNotExistsError): + ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=str(uuid4()), + limit=20, + invoke_from=InvokeFrom.WEB_APP, + ) + + def test_pagination_by_last_id_with_default_desc_updated_at(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + base_time = datetime(2024, 1, 1, 8, 0, 0) + newest = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Newest", + updated_at=base_time + timedelta(minutes=2), + ) + middle = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Middle", + updated_at=base_time + timedelta(minutes=1), + ) + oldest = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Oldest", + updated_at=base_time, + ) + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=middle.id, + limit=10, + invoke_from=InvokeFrom.WEB_APP, + ) + + assert newest.id != middle.id + assert [conversation.id for conversation in result.data] == [oldest.id] + + def test_pagination_by_last_id_with_name_sort(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + alpha = factory.create_conversation(db_session_with_containers, app, account, name="Alpha") + beta = factory.create_conversation(db_session_with_containers, app, account, name="Beta") + gamma = factory.create_conversation(db_session_with_containers, app, account, name="Gamma") + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=beta.id, + limit=10, + invoke_from=InvokeFrom.WEB_APP, + sort_by="name", + ) + + assert alpha.id != beta.id + assert [conversation.id for conversation in result.data] == [gamma.id] + + def test_pagination_filters_to_end_user_api_source(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + end_user = factory.create_end_user(db_session_with_containers, app) + account_conversation = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Console Conversation", + invoke_from=InvokeFrom.WEB_APP, + ) + end_user_conversation = factory.create_conversation( + db_session_with_containers, + app, + end_user, + name="API Conversation", + invoke_from=InvokeFrom.SERVICE_API, + ) + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=end_user, + last_id=None, + limit=20, + invoke_from=InvokeFrom.SERVICE_API, + ) + + assert account_conversation.id != end_user_conversation.id + assert [conversation.id for conversation in result.data] == [end_user_conversation.id] + + def test_pagination_filters_to_account_console_source(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + end_user = factory.create_end_user(db_session_with_containers, app) + account_conversation = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Console Conversation", + invoke_from=InvokeFrom.WEB_APP, + ) + factory.create_conversation( + db_session_with_containers, + app, + end_user, + name="API Conversation", + invoke_from=InvokeFrom.SERVICE_API, + ) + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=None, + limit=20, + invoke_from=InvokeFrom.WEB_APP, + ) + + assert [conversation.id for conversation in result.data] == [account_conversation.id] diff --git a/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py b/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py index fb0adbbcc2..02ab3f8314 100644 --- a/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py +++ b/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py @@ -3,10 +3,10 @@ from uuid import uuid4 import pytest -from graphon.variables import StringVariable from sqlalchemy.orm import sessionmaker from extensions.ext_database import db +from graphon.variables import StringVariable from models.workflow import ConversationVariable from services.conversation_variable_updater import ConversationVariableNotFoundError, ConversationVariableUpdater diff --git a/api/tests/test_containers_integration_tests/services/test_dataset_service.py b/api/tests/test_containers_integration_tests/services/test_dataset_service.py index f9bfa570cb..0de3c64c4f 100644 --- a/api/tests/test_containers_integration_tests/services/test_dataset_service.py +++ b/api/tests/test_containers_integration_tests/services/test_dataset_service.py @@ -9,11 +9,11 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from core.rag.retrieval.retrieval_methods import RetrievalMethod +from graphon.model_runtime.entities.model_entities import ModelType from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.dataset import Dataset, DatasetPermissionEnum, Document, ExternalKnowledgeBindings, Pipeline from models.enums import DatasetRuntimeMode, DataSourceType, DocumentCreatedFrom, IndexingStatus diff --git a/api/tests/test_containers_integration_tests/services/test_dataset_service_permissions.py b/api/tests/test_containers_integration_tests/services/test_dataset_service_permissions.py new file mode 100644 index 0000000000..1b4179c9c7 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_dataset_service_permissions.py @@ -0,0 +1,613 @@ +"""Testcontainers integration tests for DatasetService permission and lifecycle SQL paths.""" + +from datetime import datetime +from types import SimpleNamespace +from unittest.mock import patch +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import Session +from werkzeug.exceptions import NotFound + +from core.rag.index_processor.constant.index_type import IndexTechniqueType +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models.dataset import ( + AppDatasetJoin, + Dataset, + DatasetAutoDisableLog, + DatasetCollectionBinding, + DatasetPermission, + DatasetPermissionEnum, +) +from models.enums import DataSourceType +from services.dataset_service import DatasetCollectionBindingService, DatasetPermissionService, DatasetService +from services.errors.account import NoPermissionError + + +class DatasetPermissionIntegrationFactory: + @staticmethod + def create_account_with_tenant( + db_session_with_containers: Session, + role: TenantAccountRole = TenantAccountRole.OWNER, + ) -> tuple[Account, Tenant]: + account = Account( + email=f"{uuid4()}@example.com", + name=f"user-{uuid4()}", + interface_language="en-US", + status="active", + ) + tenant = Tenant(name=f"tenant-{uuid4()}", status="normal") + db_session_with_containers.add_all([account, tenant]) + db_session_with_containers.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.role = role + account._current_tenant = tenant + return account, tenant + + @staticmethod + def create_account_in_tenant( + db_session_with_containers: Session, + tenant: Tenant, + role: TenantAccountRole = TenantAccountRole.EDITOR, + ) -> Account: + account = Account( + email=f"{uuid4()}@example.com", + name=f"user-{uuid4()}", + interface_language="en-US", + status="active", + ) + db_session_with_containers.add(account) + db_session_with_containers.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.role = role + account._current_tenant = tenant + return account + + @staticmethod + def create_dataset( + db_session_with_containers: Session, + *, + tenant_id: str, + created_by: str, + name: str | None = None, + permission: DatasetPermissionEnum = DatasetPermissionEnum.ONLY_ME, + indexing_technique: str | None = IndexTechniqueType.HIGH_QUALITY, + enable_api: bool = True, + ) -> Dataset: + dataset = Dataset( + tenant_id=tenant_id, + name=name or f"dataset-{uuid4()}", + description="desc", + data_source_type=DataSourceType.UPLOAD_FILE, + indexing_technique=indexing_technique, + created_by=created_by, + provider="vendor", + permission=permission, + retrieval_model={"top_k": 2}, + ) + dataset.enable_api = enable_api + db_session_with_containers.add(dataset) + db_session_with_containers.commit() + return dataset + + @staticmethod + def create_dataset_permission( + db_session_with_containers: Session, + *, + dataset_id: str, + tenant_id: str, + account_id: str, + ) -> DatasetPermission: + permission = DatasetPermission( + dataset_id=dataset_id, + tenant_id=tenant_id, + account_id=account_id, + has_permission=True, + ) + db_session_with_containers.add(permission) + db_session_with_containers.commit() + return permission + + @staticmethod + def create_app_dataset_join( + db_session_with_containers: Session, + *, + dataset_id: str, + ) -> AppDatasetJoin: + join = AppDatasetJoin( + app_id=str(uuid4()), + dataset_id=dataset_id, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + return join + + @staticmethod + def create_collection_binding( + db_session_with_containers: Session, + *, + provider_name: str, + model_name: str, + collection_type: str = "dataset", + ) -> DatasetCollectionBinding: + binding = DatasetCollectionBinding( + provider_name=provider_name, + model_name=model_name, + collection_name=f"collection_{uuid4().hex}", + type=collection_type, + ) + db_session_with_containers.add(binding) + db_session_with_containers.commit() + return binding + + @staticmethod + def create_auto_disable_log( + db_session_with_containers: Session, + *, + tenant_id: str, + dataset_id: str, + document_id: str, + ) -> DatasetAutoDisableLog: + log = DatasetAutoDisableLog( + tenant_id=tenant_id, + dataset_id=dataset_id, + document_id=document_id, + ) + db_session_with_containers.add(log) + db_session_with_containers.commit() + return log + + +class TestDatasetServicePermissionsAndLifecycle: + def test_delete_dataset_returns_false_when_dataset_is_missing(self, db_session_with_containers: Session): + owner, _tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + + result = DatasetService.delete_dataset(str(uuid4()), user=owner) + + assert result is False + + def test_delete_dataset_checks_permission_and_deletes_dataset(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + + with patch("services.dataset_service.dataset_was_deleted.send") as send_deleted_signal: + result = DatasetService.delete_dataset(dataset.id, user=owner) + + assert result is True + assert db_session_with_containers.get(Dataset, dataset.id) is None + send_deleted_signal.assert_called_once_with(dataset) + + def test_dataset_use_check_returns_true_when_join_exists(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + DatasetPermissionIntegrationFactory.create_app_dataset_join( + db_session_with_containers, + dataset_id=dataset.id, + ) + + assert DatasetService.dataset_use_check(dataset.id) is True + + def test_dataset_use_check_returns_false_when_join_missing(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + + assert DatasetService.dataset_use_check(dataset.id) is False + + def test_check_dataset_permission_rejects_cross_tenant_access(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + outsider, _other_tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant( + db_session_with_containers + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_permission(dataset, outsider) + + def test_check_dataset_permission_rejects_only_me_dataset_for_non_creator( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.ONLY_ME, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_permission(dataset, member) + + def test_check_dataset_permission_rejects_partial_team_user_without_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_permission(dataset, member) + + def test_check_dataset_permission_allows_partial_team_creator(self, db_session_with_containers: Session): + creator, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant( + db_session_with_containers, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=creator.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + + DatasetService.check_dataset_permission(dataset, creator) + + def test_check_dataset_permission_allows_partial_team_member_with_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member.id, + ) + + DatasetService.check_dataset_permission(dataset, member) + + def test_check_dataset_operator_permission_rejects_only_me_for_non_creator( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + operator = DatasetPermissionIntegrationFactory.create_account_in_tenant( + db_session_with_containers, + tenant, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.ONLY_ME, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_operator_permission(user=operator, dataset=dataset) + + def test_check_dataset_operator_permission_rejects_partial_team_without_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + operator = DatasetPermissionIntegrationFactory.create_account_in_tenant( + db_session_with_containers, + tenant, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_operator_permission(user=operator, dataset=dataset) + + def test_check_dataset_operator_permission_allows_partial_team_with_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + operator = DatasetPermissionIntegrationFactory.create_account_in_tenant( + db_session_with_containers, + tenant, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=operator.id, + ) + + DatasetService.check_dataset_operator_permission(user=operator, dataset=dataset) + + def test_update_dataset_api_status_raises_not_found_for_missing_dataset(self, flask_app_with_containers): + with flask_app_with_containers.app_context(): + with pytest.raises(NotFound, match="Dataset not found"): + DatasetService.update_dataset_api_status(str(uuid4()), True) + + def test_update_dataset_api_status_requires_current_user_id(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + enable_api=False, + ) + + with patch("services.dataset_service.current_user", SimpleNamespace(id=None)): + with pytest.raises(ValueError, match="Current user or current user id not found"): + DatasetService.update_dataset_api_status(dataset.id, True) + + def test_update_dataset_api_status_updates_fields_and_commits(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + enable_api=False, + ) + now = datetime(2026, 4, 14, 18, 0, 0) + + with ( + patch("services.dataset_service.current_user", owner), + patch("services.dataset_service.naive_utc_now", return_value=now), + ): + DatasetService.update_dataset_api_status(dataset.id, True) + + db_session_with_containers.refresh(dataset) + assert dataset.enable_api is True + assert dataset.updated_by == owner.id + assert dataset.updated_at == now + + def test_get_dataset_auto_disable_logs_returns_empty_when_billing_is_disabled( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + features = SimpleNamespace( + billing=SimpleNamespace(enabled=False, subscription=SimpleNamespace(plan="professional")) + ) + + with ( + patch("services.dataset_service.current_user", owner), + patch("services.dataset_service.FeatureService.get_features", return_value=features), + ): + result = DatasetService.get_dataset_auto_disable_logs(str(uuid4())) + + assert result == {"document_ids": [], "count": 0} + + def test_get_dataset_auto_disable_logs_returns_recent_document_ids(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + DatasetPermissionIntegrationFactory.create_auto_disable_log( + db_session_with_containers, + tenant_id=tenant.id, + dataset_id=dataset.id, + document_id=str(uuid4()), + ) + DatasetPermissionIntegrationFactory.create_auto_disable_log( + db_session_with_containers, + tenant_id=tenant.id, + dataset_id=dataset.id, + document_id=str(uuid4()), + ) + features = SimpleNamespace( + billing=SimpleNamespace(enabled=True, subscription=SimpleNamespace(plan="professional")) + ) + + with ( + patch("services.dataset_service.current_user", owner), + patch("services.dataset_service.FeatureService.get_features", return_value=features), + ): + result = DatasetService.get_dataset_auto_disable_logs(dataset.id) + + assert result["count"] == 2 + assert len(result["document_ids"]) == 2 + + +class TestDatasetCollectionBindingServiceIntegration: + def test_get_dataset_collection_binding_returns_existing_binding(self, db_session_with_containers: Session): + binding = DatasetPermissionIntegrationFactory.create_collection_binding( + db_session_with_containers, + provider_name="provider", + model_name="model", + ) + + result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "model") + + assert result.id == binding.id + + def test_get_dataset_collection_binding_creates_binding_when_missing(self, db_session_with_containers: Session): + result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "missing-model") + + persisted = db_session_with_containers.get(DatasetCollectionBinding, result.id) + assert persisted is not None + assert persisted.provider_name == "provider" + assert persisted.model_name == "missing-model" + assert persisted.type == "dataset" + assert persisted.collection_name + + def test_get_dataset_collection_binding_by_id_and_type_raises_when_missing(self, flask_app_with_containers): + with flask_app_with_containers.app_context(): + with pytest.raises(ValueError, match="Dataset collection binding not found"): + DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type(str(uuid4())) + + def test_get_dataset_collection_binding_by_id_and_type_returns_binding(self, db_session_with_containers: Session): + binding = DatasetPermissionIntegrationFactory.create_collection_binding( + db_session_with_containers, + provider_name="provider", + model_name="model", + ) + + result = DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type(binding.id) + + assert result.id == binding.id + + +class TestDatasetPermissionServiceIntegration: + def test_get_dataset_partial_member_list_returns_scalar_results(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member_a = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + member_b = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member_a.id, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member_b.id, + ) + + result = DatasetPermissionService.get_dataset_partial_member_list(dataset.id) + + assert set(result) == {member_a.id, member_b.id} + + def test_update_partial_member_list_replaces_permissions_and_commits(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member_a = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + member_b = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + stale_member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=stale_member.id, + ) + + DatasetPermissionService.update_partial_member_list( + tenant.id, + dataset.id, + [{"user_id": member_a.id}, {"user_id": member_b.id}], + ) + + permissions = db_session_with_containers.query(DatasetPermission).filter_by(dataset_id=dataset.id).all() + assert {permission.account_id for permission in permissions} == {member_a.id, member_b.id} + + def test_check_permission_requires_dataset_editor(self): + user = SimpleNamespace(is_dataset_editor=False, is_dataset_operator=False) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.ALL_TEAM) + + with pytest.raises(NoPermissionError, match="does not have permission"): + DatasetPermissionService.check_permission(user, dataset, DatasetPermissionEnum.ALL_TEAM, []) + + def test_check_permission_prevents_dataset_operator_from_changing_permission_mode(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.ALL_TEAM) + + with pytest.raises(NoPermissionError, match="cannot change the dataset permissions"): + DatasetPermissionService.check_permission(user, dataset, DatasetPermissionEnum.ONLY_ME, []) + + def test_check_permission_requires_partial_member_list_for_partial_members_mode(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.PARTIAL_TEAM) + + with pytest.raises(ValueError, match="Partial member list is required"): + DatasetPermissionService.check_permission(user, dataset, DatasetPermissionEnum.PARTIAL_TEAM, []) + + def test_check_permission_rejects_dataset_operator_member_list_changes(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.PARTIAL_TEAM) + + with patch.object(DatasetPermissionService, "get_dataset_partial_member_list", return_value=["user-1"]): + with pytest.raises(ValueError, match="cannot change the dataset permissions"): + DatasetPermissionService.check_permission( + user, + dataset, + DatasetPermissionEnum.PARTIAL_TEAM, + [{"user_id": "user-2"}], + ) + + def test_check_permission_allows_dataset_operator_when_member_list_is_unchanged(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.PARTIAL_TEAM) + + with patch.object(DatasetPermissionService, "get_dataset_partial_member_list", return_value=["user-1"]): + DatasetPermissionService.check_permission( + user, + dataset, + DatasetPermissionEnum.PARTIAL_TEAM, + [{"user_id": "user-1"}], + ) + + def test_clear_partial_member_list_deletes_permissions_and_commits(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member.id, + ) + + DatasetPermissionService.clear_partial_member_list(dataset.id) + + remaining = db_session_with_containers.query(DatasetPermission).filter_by(dataset_id=dataset.id).all() + assert remaining == [] diff --git a/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py b/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py index a814466e14..ac0483a45d 100644 --- a/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py +++ b/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py @@ -1,13 +1,21 @@ +import json from unittest.mock import Mock, patch from uuid import uuid4 import pytest -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexTechniqueType -from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole -from models.dataset import Dataset, ExternalKnowledgeBindings +from graphon.model_runtime.entities.model_entities import ModelType +from models.account import ( + Account, + AccountStatus, + Tenant, + TenantAccountJoin, + TenantAccountRole, + TenantStatus, +) +from models.dataset import Dataset, ExternalKnowledgeApis, ExternalKnowledgeBindings from models.enums import DataSourceType from services.dataset_service import DatasetService from services.errors.account import NoPermissionError @@ -25,12 +33,12 @@ class DatasetUpdateTestDataFactory: email=f"{uuid4()}@example.com", name=f"user-{uuid4()}", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() - tenant = Tenant(name=f"tenant-{account.id}", status="normal") + tenant = Tenant(name=f"tenant-{account.id}", status=TenantStatus.NORMAL) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -103,6 +111,34 @@ class DatasetUpdateTestDataFactory: db_session_with_containers.commit() return binding + @staticmethod + def create_external_knowledge_api( + db_session_with_containers: Session, + tenant_id: str, + created_by: str, + api_id: str | None = None, + name: str = "test-api", + ) -> ExternalKnowledgeApis: + """Create a real external knowledge API template for tenant-scoped update validation.""" + external_api = ExternalKnowledgeApis( + tenant_id=tenant_id, + created_by=created_by, + updated_by=created_by, + name=name, + description="test description", + settings=json.dumps( + { + "endpoint": "https://example.com", + "api_key": "test-api-key", + } + ), + ) + if api_id is not None: + external_api.id = api_id + db_session_with_containers.add(external_api) + db_session_with_containers.commit() + return external_api + class TestDatasetServiceUpdateDataset: """ @@ -138,6 +174,11 @@ class TestDatasetServiceUpdateDataset: ) binding_id = binding.id db_session_with_containers.expunge(binding) + external_api = DatasetUpdateTestDataFactory.create_external_knowledge_api( + db_session_with_containers, + tenant_id=tenant.id, + created_by=user.id, + ) update_data = { "name": "new_name", @@ -145,7 +186,7 @@ class TestDatasetServiceUpdateDataset: "external_retrieval_model": "new_model", "permission": "only_me", "external_knowledge_id": "new_knowledge_id", - "external_knowledge_api_id": str(uuid4()), + "external_knowledge_api_id": external_api.id, } result = DatasetService.update_dataset(dataset.id, update_data, user) @@ -218,11 +259,16 @@ class TestDatasetServiceUpdateDataset: created_by=user.id, provider="external", ) + external_api = DatasetUpdateTestDataFactory.create_external_knowledge_api( + db_session_with_containers, + tenant_id=tenant.id, + created_by=user.id, + ) update_data = { "name": "new_name", "external_knowledge_id": "knowledge_id", - "external_knowledge_api_id": str(uuid4()), + "external_knowledge_api_id": external_api.id, } with pytest.raises(ValueError) as context: diff --git a/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py b/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py index c8f04e9215..fe426ae516 100644 --- a/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py +++ b/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py @@ -5,9 +5,9 @@ Testcontainers integration tests for archived workflow run deletion service. from datetime import UTC, datetime, timedelta from uuid import uuid4 -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import select +from graphon.enums import WorkflowExecutionStatus from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowArchiveLog, WorkflowRun from services.retention.workflow_run.delete_archived_workflow_run import ArchivedWorkflowRunDeletion diff --git a/api/tests/test_containers_integration_tests/services/test_feature_service.py b/api/tests/test_containers_integration_tests/services/test_feature_service.py index b3e7dd2a59..315936d721 100644 --- a/api/tests/test_containers_integration_tests/services/test_feature_service.py +++ b/api/tests/test_containers_integration_tests/services/test_feature_service.py @@ -274,6 +274,7 @@ class TestFeatureService: mock_config.ENABLE_EMAIL_CODE_LOGIN = True mock_config.ENABLE_EMAIL_PASSWORD_LOGIN = True mock_config.ENABLE_SOCIAL_OAUTH_LOGIN = False + mock_config.ENABLE_COLLABORATION_MODE = True mock_config.ALLOW_REGISTER = False mock_config.ALLOW_CREATE_WORKSPACE = False mock_config.MAIL_TYPE = "smtp" @@ -298,6 +299,7 @@ class TestFeatureService: # Verify authentication settings assert result.enable_email_code_login is True assert result.enable_email_password_login is False + assert result.enable_collaboration_mode is True assert result.is_allow_register is False assert result.is_allow_create_workspace is False @@ -401,6 +403,7 @@ class TestFeatureService: mock_config.ENABLE_EMAIL_CODE_LOGIN = True mock_config.ENABLE_EMAIL_PASSWORD_LOGIN = True mock_config.ENABLE_SOCIAL_OAUTH_LOGIN = False + mock_config.ENABLE_COLLABORATION_MODE = False mock_config.ALLOW_REGISTER = True mock_config.ALLOW_CREATE_WORKSPACE = True mock_config.MAIL_TYPE = "smtp" @@ -422,6 +425,7 @@ class TestFeatureService: assert result.enable_email_code_login is True assert result.enable_email_password_login is True assert result.enable_social_oauth_login is False + assert result.enable_collaboration_mode is False assert result.is_allow_register is True assert result.is_allow_create_workspace is True assert result.is_email_setup is True diff --git a/api/tests/unit_tests/services/test_hit_testing_service.py b/api/tests/test_containers_integration_tests/services/test_hit_testing_service.py similarity index 51% rename from api/tests/unit_tests/services/test_hit_testing_service.py rename to api/tests/test_containers_integration_tests/services/test_hit_testing_service.py index 80e9729f5b..f332ba05ec 100644 --- a/api/tests/unit_tests/services/test_hit_testing_service.py +++ b/api/tests/test_containers_integration_tests/services/test_hit_testing_service.py @@ -1,239 +1,193 @@ +from __future__ import annotations + import json from typing import Any, cast from unittest.mock import ANY, MagicMock, patch +from uuid import uuid4 import pytest +from sqlalchemy import func, select +from sqlalchemy.orm import Session from core.rag.models.document import Document -from models.dataset import Dataset +from models.dataset import Dataset, DatasetQuery from services.hit_testing_service import HitTestingService -class TestHitTestingService: - """Test suite for HitTestingService""" +def _create_dataset(db_session: Session, *, provider: str = "vendor", **kwargs: Any) -> Dataset: + tenant_id = str(uuid4()) + created_by = str(uuid4()) + ds = Dataset( + tenant_id=kwargs.get("tenant_id", tenant_id), + name=kwargs.get("name", "test-dataset"), + created_by=kwargs.get("created_by", created_by), + provider=provider, + ) + db_session.add(ds) + db_session.commit() + db_session.refresh(ds) + return ds - # ===== Utility Method Tests ===== + +class TestHitTestingService: + # ── Utility methods (pure logic, no DB) ──────────────────────────── def test_escape_query_for_search_should_escape_double_quotes(self): - """Test that escape_query_for_search escapes double quotes correctly""" - # Arrange query = 'test "query" with quotes' - expected = 'test \\"query\\" with quotes' - - # Act result = HitTestingService.escape_query_for_search(query) - - # Assert - assert result == expected + assert result == 'test \\"query\\" with quotes' def test_hit_testing_args_check_should_pass_with_valid_query(self): - """Test that hit_testing_args_check passes with a valid query""" - # Arrange - args = {"query": "valid query"} - - # Act & Assert (should not raise) - HitTestingService.hit_testing_args_check(args) + HitTestingService.hit_testing_args_check({"query": "valid query"}) def test_hit_testing_args_check_should_pass_with_valid_attachments(self): - """Test that hit_testing_args_check passes with valid attachment_ids""" - # Arrange - args = {"attachment_ids": ["id1", "id2"]} - - # Act & Assert (should not raise) - HitTestingService.hit_testing_args_check(args) + HitTestingService.hit_testing_args_check({"attachment_ids": ["id1", "id2"]}) def test_hit_testing_args_check_should_raise_error_when_no_query_or_attachments(self): - """Test that hit_testing_args_check raises ValueError if both query and attachment_ids are missing""" - # Arrange - args = {} - - # Act & Assert - with pytest.raises(ValueError) as exc_info: - HitTestingService.hit_testing_args_check(args) - assert "Query or attachment_ids is required" in str(exc_info.value) + with pytest.raises(ValueError, match="Query or attachment_ids is required"): + HitTestingService.hit_testing_args_check({}) def test_hit_testing_args_check_should_raise_error_when_query_too_long(self): - """Test that hit_testing_args_check raises ValueError if query exceeds 250 characters""" - # Arrange - args = {"query": "a" * 251} - - # Act & Assert - with pytest.raises(ValueError) as exc_info: - HitTestingService.hit_testing_args_check(args) - assert "Query cannot exceed 250 characters" in str(exc_info.value) + with pytest.raises(ValueError, match="Query cannot exceed 250 characters"): + HitTestingService.hit_testing_args_check({"query": "a" * 251}) def test_hit_testing_args_check_should_raise_error_when_attachments_not_list(self): - """Test that hit_testing_args_check raises ValueError if attachment_ids is not a list""" - # Arrange - args = {"attachment_ids": "not a list"} + with pytest.raises(ValueError, match="Attachment_ids must be a list"): + HitTestingService.hit_testing_args_check({"attachment_ids": "not a list"}) - # Act & Assert - with pytest.raises(ValueError) as exc_info: - HitTestingService.hit_testing_args_check(args) - assert "Attachment_ids must be a list" in str(exc_info.value) - - # ===== Response Formatting Tests ===== + # ── Response formatting ──────────────────────────────────────────── @patch("core.rag.datasource.retrieval_service.RetrievalService.format_retrieval_documents") def test_compact_retrieve_response_should_format_correctly(self, mock_format): - """Test that compact_retrieve_response formats the response correctly""" - # Arrange query = "test query" mock_doc = MagicMock(spec=Document) - documents = [mock_doc] mock_record = MagicMock() mock_record.model_dump.return_value = {"content": "formatted content"} mock_format.return_value = [mock_record] - # Act - result = cast(dict[str, Any], HitTestingService.compact_retrieve_response(query, documents)) + result = cast(dict[str, Any], HitTestingService.compact_retrieve_response(query, [mock_doc])) - # Assert assert cast(dict[str, Any], result["query"])["content"] == query assert len(result["records"]) == 1 assert cast(dict[str, Any], result["records"][0])["content"] == "formatted content" - mock_format.assert_called_once_with(documents) + mock_format.assert_called_once_with([mock_doc]) - def test_compact_external_retrieve_response_should_return_records_for_external_provider(self): - """Test that compact_external_retrieve_response returns records when dataset provider is external""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.provider = "external" - query = "test query" + def test_compact_external_retrieve_response_should_return_records_for_external_provider( + self, db_session_with_containers: Session + ): + dataset = _create_dataset(db_session_with_containers, provider="external") documents = [ {"content": "c1", "title": "t1", "score": 0.9, "metadata": {"m1": "v1"}}, {"content": "c2", "title": "t2", "score": 0.8, "metadata": {"m2": "v2"}}, ] - # Act - result = cast(dict[str, Any], HitTestingService.compact_external_retrieve_response(dataset, query, documents)) + result = cast( + dict[str, Any], HitTestingService.compact_external_retrieve_response(dataset, "test query", documents) + ) - # Assert - assert cast(dict[str, Any], result["query"])["content"] == query + assert cast(dict[str, Any], result["query"])["content"] == "test query" assert len(result["records"]) == 2 assert cast(dict[str, Any], result["records"][0])["content"] == "c1" assert cast(dict[str, Any], result["records"][1])["title"] == "t2" - def test_compact_external_retrieve_response_should_return_empty_for_non_external_provider(self): - """Test that compact_external_retrieve_response returns empty records for non-external provider""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.provider = "not_external" - query = "test query" - documents = [{"content": "c1"}] + def test_compact_external_retrieve_response_should_return_empty_for_non_external_provider( + self, db_session_with_containers: Session + ): + dataset = _create_dataset(db_session_with_containers, provider="vendor") - # Act - result = cast(dict[str, Any], HitTestingService.compact_external_retrieve_response(dataset, query, documents)) + result = cast( + dict[str, Any], + HitTestingService.compact_external_retrieve_response(dataset, "test query", [{"content": "c1"}]), + ) - # Assert - assert cast(dict[str, Any], result["query"])["content"] == query + assert cast(dict[str, Any], result["query"])["content"] == "test query" assert result["records"] == [] - # ===== External Retrieve Tests ===== + # ── External retrieve (real DB) ──────────────────────────────────── @patch("core.rag.datasource.retrieval_service.RetrievalService.external_retrieve") - @patch("extensions.ext_database.db.session.add") - @patch("extensions.ext_database.db.session.commit") - def test_external_retrieve_should_succeed_for_external_provider(self, mock_commit, mock_add, mock_ext_retrieve): - """Test that external_retrieve successfully retrieves from external provider and commits query""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.id = "dataset_id" - dataset.provider = "external" - query = 'test "query"' + def test_external_retrieve_should_succeed_for_external_provider( + self, mock_ext_retrieve, db_session_with_containers: Session + ): + dataset = _create_dataset(db_session_with_containers, provider="external") + account_id = str(uuid4()) account = MagicMock() - account.id = "account_id" - + account.id = account_id mock_ext_retrieve.return_value = [{"content": "ext content", "score": 1.0}] - # Act + before_count = db_session_with_containers.scalar(select(func.count()).select_from(DatasetQuery)) or 0 + result = cast( dict[str, Any], HitTestingService.external_retrieve( dataset=dataset, - query=query, + query='test "query"', account=account, external_retrieval_model={"model": "test"}, metadata_filtering_conditions={"key": "val"}, ), ) - # Assert - assert cast(dict[str, Any], result["query"])["content"] == query + assert cast(dict[str, Any], result["query"])["content"] == 'test "query"' assert cast(dict[str, Any], result["records"][0])["content"] == "ext content" - - # Verify call to RetrievalService.external_retrieve with escaped query mock_ext_retrieve.assert_called_once_with( - dataset_id="dataset_id", + dataset_id=dataset.id, query='test \\"query\\"', external_retrieval_model={"model": "test"}, metadata_filtering_conditions={"key": "val"}, ) - # Verify DatasetQuery record was added and committed - mock_add.assert_called_once() - mock_commit.assert_called_once() + db_session_with_containers.expire_all() + after_count = db_session_with_containers.scalar(select(func.count()).select_from(DatasetQuery)) or 0 + assert after_count == before_count + 1 - def test_external_retrieve_should_return_empty_for_non_external_provider(self): - """Test that external_retrieve returns empty results immediately if provider is not external""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.provider = "not_external" - query = "test query" + def test_external_retrieve_should_return_empty_for_non_external_provider(self, db_session_with_containers: Session): + dataset = _create_dataset(db_session_with_containers, provider="vendor") account = MagicMock() - # Act - result = cast(dict[str, Any], HitTestingService.external_retrieve(dataset, query, account)) + result = cast(dict[str, Any], HitTestingService.external_retrieve(dataset, "test query", account)) - # Assert - assert cast(dict[str, Any], result["query"])["content"] == query + assert cast(dict[str, Any], result["query"])["content"] == "test query" assert result["records"] == [] - # ===== Retrieve Tests ===== + # ── Retrieve (real DB) ───────────────────────────────────────────── @patch("core.rag.datasource.retrieval_service.RetrievalService.retrieve") - @patch("extensions.ext_database.db.session.add") - @patch("extensions.ext_database.db.session.commit") - def test_retrieve_should_use_default_model_when_none_provided(self, mock_commit, mock_add, mock_retrieve): - """Test that retrieve uses default model when retrieval_model is not provided""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.id = "dataset_id" + def test_retrieve_should_use_default_model_when_none_provided( + self, mock_retrieve, db_session_with_containers: Session + ): + dataset = _create_dataset(db_session_with_containers) dataset.retrieval_model = None - query = "test query" account = MagicMock() - account.id = "account_id" - + account.id = str(uuid4()) mock_retrieve.return_value = [] - # Act + before_count = db_session_with_containers.scalar(select(func.count()).select_from(DatasetQuery)) or 0 + result = cast( dict[str, Any], HitTestingService.retrieve( - dataset=dataset, query=query, account=account, retrieval_model=None, external_retrieval_model={} + dataset=dataset, query="test query", account=account, retrieval_model=None, external_retrieval_model={} ), ) - # Assert - assert cast(dict[str, Any], result["query"])["content"] == query + assert cast(dict[str, Any], result["query"])["content"] == "test query" mock_retrieve.assert_called_once() - # Verify top_k from default_retrieval_model (4) assert mock_retrieve.call_args.kwargs["top_k"] == 4 - mock_commit.assert_called_once() + + db_session_with_containers.expire_all() + after_count = db_session_with_containers.scalar(select(func.count()).select_from(DatasetQuery)) or 0 + assert after_count == before_count + 1 @patch("core.rag.datasource.retrieval_service.RetrievalService.retrieve") @patch("core.rag.retrieval.dataset_retrieval.DatasetRetrieval.get_metadata_filter_condition") - @patch("extensions.ext_database.db.session.add") - @patch("extensions.ext_database.db.session.commit") - def test_retrieve_should_handle_metadata_filtering(self, mock_commit, mock_add, mock_get_meta, mock_retrieve): - """Test that retrieve correctly calls metadata filtering when conditions are present""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.id = "dataset_id" - query = "test query" + def test_retrieve_should_handle_metadata_filtering( + self, mock_get_meta, mock_retrieve, db_session_with_containers: Session + ): + dataset = _create_dataset(db_session_with_containers) account = MagicMock() - account.id = "account_id" + account.id = str(uuid4()) retrieval_model = { "search_method": "semantic_search", @@ -242,29 +196,27 @@ class TestHitTestingService: "reranking_enable": False, "score_threshold_enabled": False, } - - # Mock metadata filtering response - mock_get_meta.return_value = ({"dataset_id": ["doc_id1"]}, "condition_string") + mock_get_meta.return_value = ({dataset.id: ["doc_id1"]}, "condition_string") mock_retrieve.return_value = [] - # Act HitTestingService.retrieve( - dataset=dataset, query=query, account=account, retrieval_model=retrieval_model, external_retrieval_model={} + dataset=dataset, + query="test query", + account=account, + retrieval_model=retrieval_model, + external_retrieval_model={}, ) - # Assert mock_get_meta.assert_called_once() mock_retrieve.assert_called_once() assert mock_retrieve.call_args.kwargs["document_ids_filter"] == ["doc_id1"] @patch("core.rag.datasource.retrieval_service.RetrievalService.retrieve") @patch("core.rag.retrieval.dataset_retrieval.DatasetRetrieval.get_metadata_filter_condition") - def test_retrieve_should_return_empty_if_metadata_filtering_fails(self, mock_get_meta, mock_retrieve): - """Test that retrieve returns empty response if metadata filtering returns condition but no document IDs""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.id = "dataset_id" - query = "test query" + def test_retrieve_should_return_empty_if_metadata_filtering_fails( + self, mock_get_meta, mock_retrieve, db_session_with_containers: Session + ): + dataset = _create_dataset(db_session_with_containers) account = MagicMock() retrieval_model = { @@ -274,37 +226,27 @@ class TestHitTestingService: "reranking_enable": False, "score_threshold_enabled": False, } - - # Mock metadata filtering response: condition returned but no IDs mock_get_meta.return_value = ({}, "condition_string") - # Act result = cast( dict[str, Any], HitTestingService.retrieve( dataset=dataset, - query=query, + query="test query", account=account, retrieval_model=retrieval_model, external_retrieval_model={}, ), ) - # Assert assert result["records"] == [] mock_retrieve.assert_not_called() @patch("core.rag.datasource.retrieval_service.RetrievalService.retrieve") - @patch("extensions.ext_database.db.session.add") - @patch("extensions.ext_database.db.session.commit") - def test_retrieve_should_handle_attachments(self, mock_commit, mock_add, mock_retrieve): - """Test that retrieve handles attachment_ids and adds them to DatasetQuery""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.id = "dataset_id" - query = "test query" + def test_retrieve_should_handle_attachments(self, mock_retrieve, db_session_with_containers: Session): + dataset = _create_dataset(db_session_with_containers) account = MagicMock() - account.id = "account_id" + account.id = str(uuid4()) attachment_ids = ["att1", "att2"] retrieval_model = { @@ -315,21 +257,19 @@ class TestHitTestingService: } mock_retrieve.return_value = [] - # Act HitTestingService.retrieve( dataset=dataset, - query=query, + query="test query", account=account, retrieval_model=retrieval_model, external_retrieval_model={}, attachment_ids=attachment_ids, ) - # Assert mock_retrieve.assert_called_once_with( retrieval_method=ANY, - dataset_id="dataset_id", - query=query, + dataset_id=dataset.id, + query="test query", attachment_ids=attachment_ids, top_k=4, score_threshold=0.0, @@ -338,26 +278,27 @@ class TestHitTestingService: weights=None, document_ids_filter=None, ) - # Verify DatasetQuery record (there should be 2 queries: 1 text, 2 images) - # The content is json.dumps([{"content_type": "text_query", ...}, {"content_type": "image_query", ...}]) - called_query = mock_add.call_args[0][0] - query_content = json.loads(called_query.content) + + # Verify DatasetQuery was persisted with correct content structure + db_session_with_containers.expire_all() + latest = db_session_with_containers.scalar( + select(DatasetQuery) + .where(DatasetQuery.dataset_id == dataset.id) + .order_by(DatasetQuery.created_at.desc()) + .limit(1) + ) + assert latest is not None + query_content = json.loads(latest.content) assert len(query_content) == 3 # 1 text + 2 images assert query_content[0]["content_type"] == "text_query" assert query_content[1]["content_type"] == "image_query" assert query_content[1]["content"] == "att1" @patch("core.rag.datasource.retrieval_service.RetrievalService.retrieve") - @patch("extensions.ext_database.db.session.add") - @patch("extensions.ext_database.db.session.commit") - def test_retrieve_should_handle_reranking_and_threshold(self, mock_commit, mock_add, mock_retrieve): - """Test that retrieve passes reranking and threshold parameters correctly""" - # Arrange - dataset = MagicMock(spec=Dataset) - dataset.id = "dataset_id" - query = "test query" + def test_retrieve_should_handle_reranking_and_threshold(self, mock_retrieve, db_session_with_containers: Session): + dataset = _create_dataset(db_session_with_containers) account = MagicMock() - account.id = "account_id" + account.id = str(uuid4()) retrieval_model = { "search_method": "hybrid_search", @@ -371,12 +312,14 @@ class TestHitTestingService: } mock_retrieve.return_value = [] - # Act HitTestingService.retrieve( - dataset=dataset, query=query, account=account, retrieval_model=retrieval_model, external_retrieval_model={} + dataset=dataset, + query="test query", + account=account, + retrieval_model=retrieval_model, + external_retrieval_model={}, ) - # Assert mock_retrieve.assert_called_once() kwargs = mock_retrieve.call_args.kwargs assert kwargs["score_threshold"] == 0.5 diff --git a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py index c46b8fba0b..18c5320d0a 100644 --- a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py +++ b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py @@ -3,8 +3,6 @@ import uuid from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes -from graphon.nodes.human_input.entities import HumanInputNodeData from core.workflow.human_input_compat import ( EmailDeliveryConfig, @@ -12,6 +10,8 @@ from core.workflow.human_input_compat import ( EmailRecipients, ExternalRecipient, ) +from graphon.enums import BuiltinNodeTypes +from graphon.nodes.human_input.entities import HumanInputNodeData from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.model import App, AppMode from models.workflow import Workflow, WorkflowType diff --git a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py index 0f252515f7..21a54e909e 100644 --- a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py +++ b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py @@ -5,7 +5,6 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest -from graphon.runtime import VariablePool from sqlalchemy.engine import Engine from configs import dify_config @@ -16,6 +15,7 @@ from core.workflow.human_input_compat import ( ExternalRecipient, MemberRecipient, ) +from graphon.runtime import VariablePool from models.account import Account, TenantAccountJoin from services import human_input_delivery_test_service as service_module from services.human_input_delivery_test_service import ( diff --git a/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py b/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py index 2340dd2a03..cd63d3ad6c 100644 --- a/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py +++ b/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py @@ -8,11 +8,11 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker -from graphon.file import FileType from sqlalchemy.orm import Session from enums.cloud_plan import CloudPlan from extensions.ext_redis import redis_client +from graphon.file import FileType from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.enums import ( ConversationFromSource, diff --git a/api/tests/test_containers_integration_tests/services/test_model_provider_service.py b/api/tests/test_containers_integration_tests/services/test_model_provider_service.py index ba926bf675..8955a3b5f2 100644 --- a/api/tests/test_containers_integration_tests/services/test_model_provider_service.py +++ b/api/tests/test_containers_integration_tests/services/test_model_provider_service.py @@ -2,10 +2,10 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker -from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType from sqlalchemy.orm import Session from core.entities.model_entities import ModelStatus +from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType from models import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.provider import Provider, ProviderModel, ProviderModelSetting, ProviderType from services.model_provider_service import ModelProviderService @@ -405,11 +405,10 @@ class TestModelProviderService: mock_provider_manager = mock_external_service_dependencies["provider_manager"].return_value # Create mock models + from core.entities.model_entities import ModelWithProviderEntity, SimpleModelProviderEntity from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.provider_entities import ProviderEntity - from core.entities.model_entities import ModelWithProviderEntity, SimpleModelProviderEntity - # Create real model objects instead of mocks provider_entity_1 = SimpleModelProviderEntity( ProviderEntity( @@ -644,9 +643,8 @@ class TestModelProviderService: mock_provider_manager = mock_external_service_dependencies["provider_manager"].return_value # Create mock default model response - from graphon.model_runtime.entities.common_entities import I18nObject - from core.entities.model_entities import DefaultModelEntity, DefaultModelProviderEntity + from graphon.model_runtime.entities.common_entities import I18nObject mock_default_model = DefaultModelEntity( model="gpt-3.5-turbo", diff --git a/api/tests/test_containers_integration_tests/services/test_ops_service.py b/api/tests/test_containers_integration_tests/services/test_ops_service.py new file mode 100644 index 0000000000..e2e1a228b2 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_ops_service.py @@ -0,0 +1,363 @@ +from __future__ import annotations + +import uuid +from unittest.mock import patch + +import pytest +from faker import Faker +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.ops.entities.config_entity import TracingProviderEnum +from models.model import TraceAppConfig +from services.account_service import AccountService, TenantService +from services.app_service import AppService +from services.ops_service import OpsService +from tests.test_containers_integration_tests.helpers import generate_valid_password + + +class TestOpsService: + @pytest.fixture + def mock_external_service_dependencies(self): + with ( + patch("services.app_service.FeatureService") as mock_feature_service, + patch("services.app_service.EnterpriseService") as mock_enterprise_service, + patch("services.app_service.ModelManager.for_tenant") as mock_model_manager, + patch("services.account_service.FeatureService") as mock_account_feature_service, + ): + mock_feature_service.get_system_features.return_value.webapp_auth.enabled = False + mock_enterprise_service.WebAppAuth.update_app_access_mode.return_value = None + mock_enterprise_service.WebAppAuth.cleanup_webapp.return_value = None + mock_account_feature_service.get_system_features.return_value.is_allow_register = True + mock_model_instance = mock_model_manager.return_value + mock_model_instance.get_default_model_instance.return_value = None + mock_model_instance.get_default_provider_model_name.return_value = ("openai", "gpt-3.5-turbo") + yield { + "feature_service": mock_feature_service, + "enterprise_service": mock_enterprise_service, + "model_manager": mock_model_manager, + "account_feature_service": mock_account_feature_service, + } + + @pytest.fixture + def mock_ops_trace_manager(self): + with patch("services.ops_service.OpsTraceManager") as mock: + yield mock + + def _create_app(self, db_session_with_containers: Session, mock_external_service_dependencies): + fake = Faker() + account = AccountService.create_account( + email=fake.email(), + name=fake.name(), + interface_language="en-US", + password=generate_valid_password(fake), + ) + TenantService.create_owner_tenant_if_not_exist(account, name=fake.company()) + tenant = account.current_tenant + app_service = AppService() + app = app_service.create_app( + tenant.id, + { + "name": fake.company(), + "description": fake.text(max_nb_chars=100), + "mode": "chat", + "icon_type": "emoji", + "icon": "🤖", + "icon_background": "#FF6B6B", + }, + account, + ) + return app, account + + _SENTINEL = object() + + def _insert_trace_config( + self, + db_session: Session, + app_id: str, + provider: str, + tracing_config: dict | None | object = _SENTINEL, + ) -> TraceAppConfig: + trace_config = TraceAppConfig( + app_id=app_id, + tracing_provider=provider, + tracing_config=tracing_config if tracing_config is not self._SENTINEL else {"some": "config"}, + ) + db_session.add(trace_config) + db_session.commit() + return trace_config + + # ── get_tracing_app_config ───────────────────────────────────────── + + def test_get_tracing_app_config_no_config(self, db_session_with_containers: Session, mock_ops_trace_manager): + result = OpsService.get_tracing_app_config(str(uuid.uuid4()), "arize") + assert result is None + + def test_get_tracing_app_config_no_app(self, db_session_with_containers: Session, mock_ops_trace_manager): + fake_app_id = str(uuid.uuid4()) + self._insert_trace_config(db_session_with_containers, fake_app_id, "arize") + result = OpsService.get_tracing_app_config(fake_app_id, "arize") + assert result is None + + def test_get_tracing_app_config_none_config( + self, db_session_with_containers: Session, mock_external_service_dependencies, mock_ops_trace_manager + ): + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, "arize", tracing_config=None) + + with pytest.raises(ValueError, match="Tracing config cannot be None."): + OpsService.get_tracing_app_config(app.id, "arize") + + @pytest.mark.parametrize( + ("provider", "default_url"), + [ + ("arize", "https://app.arize.com/"), + ("phoenix", "https://app.phoenix.arize.com/projects/"), + ("langsmith", "https://smith.langchain.com/"), + ("opik", "https://www.comet.com/opik/"), + ("weave", "https://wandb.ai/"), + ("aliyun", "https://arms.console.aliyun.com/"), + ("tencent", "https://console.cloud.tencent.com/apm"), + ("mlflow", "http://localhost:5000/"), + ("databricks", "https://www.databricks.com/"), + ], + ) + def test_get_tracing_app_config_providers_exception( + self, db_session_with_containers: Session, mock_external_service_dependencies, provider, default_url + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.decrypt_tracing_config.return_value = {} + mock_otm.obfuscated_decrypt_token.return_value = {} + mock_otm.get_trace_config_project_url.side_effect = Exception("error") + mock_otm.get_trace_config_project_key.side_effect = Exception("error") + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, provider) + + result = OpsService.get_tracing_app_config(app.id, provider) + + assert result is not None + assert result["tracing_config"]["project_url"] == default_url + + @pytest.mark.parametrize( + "provider", + ["arize", "phoenix", "langsmith", "opik", "weave", "aliyun", "tencent", "mlflow", "databricks"], + ) + def test_get_tracing_app_config_providers_success( + self, db_session_with_containers: Session, mock_external_service_dependencies, provider + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.decrypt_tracing_config.return_value = {} + mock_otm.obfuscated_decrypt_token.return_value = {"project_url": "success_url"} + mock_otm.get_trace_config_project_url.return_value = "success_url" + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, provider) + + result = OpsService.get_tracing_app_config(app.id, provider) + + assert result is not None + assert result["tracing_config"]["project_url"] == "success_url" + + def test_get_tracing_app_config_langfuse_success( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.decrypt_tracing_config.return_value = {"host": "https://api.langfuse.com"} + mock_otm.obfuscated_decrypt_token.return_value = {"host": "https://api.langfuse.com"} + mock_otm.get_trace_config_project_key.return_value = "key" + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, "langfuse") + + result = OpsService.get_tracing_app_config(app.id, "langfuse") + + assert result is not None + assert result["tracing_config"]["project_url"] == "https://api.langfuse.com/project/key" + + def test_get_tracing_app_config_langfuse_exception( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.decrypt_tracing_config.return_value = {"host": "https://api.langfuse.com"} + mock_otm.obfuscated_decrypt_token.return_value = {"host": "https://api.langfuse.com"} + mock_otm.get_trace_config_project_key.side_effect = Exception("error") + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, "langfuse") + + result = OpsService.get_tracing_app_config(app.id, "langfuse") + + assert result is not None + assert result["tracing_config"]["project_url"] == "https://api.langfuse.com/" + + # ── create_tracing_app_config ────────────────────────────────────── + + def test_create_tracing_app_config_invalid_provider(self, db_session_with_containers: Session): + result = OpsService.create_tracing_app_config(str(uuid.uuid4()), "invalid_provider", {}) + assert result == {"error": "Invalid tracing provider: invalid_provider"} + + def test_create_tracing_app_config_invalid_credentials( + self, db_session_with_containers: Session, mock_ops_trace_manager + ): + mock_ops_trace_manager.check_trace_config_is_effective.return_value = False + result = OpsService.create_tracing_app_config( + str(uuid.uuid4()), TracingProviderEnum.LANGFUSE, {"public_key": "p", "secret_key": "s"} + ) + assert result == {"error": "Invalid Credentials"} + + @pytest.mark.parametrize( + ("provider", "config"), + [ + (TracingProviderEnum.ARIZE, {}), + (TracingProviderEnum.LANGFUSE, {"public_key": "p", "secret_key": "s"}), + (TracingProviderEnum.LANGSMITH, {"api_key": "k", "project": "p"}), + (TracingProviderEnum.ALIYUN, {"license_key": "k", "endpoint": "https://aliyun.com"}), + ], + ) + def test_create_tracing_app_config_project_url_exception( + self, db_session_with_containers: Session, mock_external_service_dependencies, provider, config + ): + # Existing config causes the service to return None before reaching the DB insert + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.check_trace_config_is_effective.return_value = True + mock_otm.get_trace_config_project_url.side_effect = Exception("error") + mock_otm.get_trace_config_project_key.side_effect = Exception("error") + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, str(provider)) + + result = OpsService.create_tracing_app_config(app.id, provider, config) + + assert result is None + + def test_create_tracing_app_config_langfuse_success( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.check_trace_config_is_effective.return_value = True + mock_otm.get_trace_config_project_key.return_value = "key" + mock_otm.encrypt_tracing_config.return_value = {} + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + result = OpsService.create_tracing_app_config( + app.id, + TracingProviderEnum.LANGFUSE, + {"public_key": "p", "secret_key": "s", "host": "https://api.langfuse.com"}, + ) + + assert result == {"result": "success"} + + def test_create_tracing_app_config_already_exists( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.check_trace_config_is_effective.return_value = True + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, str(TracingProviderEnum.ARIZE)) + + result = OpsService.create_tracing_app_config(app.id, TracingProviderEnum.ARIZE, {}) + + assert result is None + + def test_create_tracing_app_config_no_app(self, db_session_with_containers: Session, mock_ops_trace_manager): + mock_ops_trace_manager.check_trace_config_is_effective.return_value = True + result = OpsService.create_tracing_app_config(str(uuid.uuid4()), TracingProviderEnum.ARIZE, {}) + assert result is None + + def test_create_tracing_app_config_with_empty_other_keys( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + # "project" is in other_keys for Arize; providing "" triggers default substitution + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.check_trace_config_is_effective.return_value = True + mock_otm.get_trace_config_project_url.side_effect = Exception("no url") + mock_otm.encrypt_tracing_config.return_value = {} + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + result = OpsService.create_tracing_app_config(app.id, TracingProviderEnum.ARIZE, {"project": ""}) + + assert result == {"result": "success"} + + def test_create_tracing_app_config_success( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.check_trace_config_is_effective.return_value = True + mock_otm.get_trace_config_project_url.return_value = "http://project_url" + mock_otm.encrypt_tracing_config.return_value = {"encrypted": "config"} + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + result = OpsService.create_tracing_app_config(app.id, TracingProviderEnum.ARIZE, {}) + + assert result == {"result": "success"} + + # ── update_tracing_app_config ────────────────────────────────────── + + def test_update_tracing_app_config_invalid_provider(self, db_session_with_containers: Session): + with pytest.raises(ValueError, match="Invalid tracing provider: invalid_provider"): + OpsService.update_tracing_app_config(str(uuid.uuid4()), "invalid_provider", {}) + + def test_update_tracing_app_config_no_config(self, db_session_with_containers: Session, mock_ops_trace_manager): + result = OpsService.update_tracing_app_config(str(uuid.uuid4()), TracingProviderEnum.ARIZE, {}) + assert result is None + + def test_update_tracing_app_config_no_app(self, db_session_with_containers: Session, mock_ops_trace_manager): + fake_app_id = str(uuid.uuid4()) + self._insert_trace_config(db_session_with_containers, fake_app_id, str(TracingProviderEnum.ARIZE)) + mock_ops_trace_manager.encrypt_tracing_config.return_value = {} + result = OpsService.update_tracing_app_config(fake_app_id, TracingProviderEnum.ARIZE, {}) + assert result is None + + def test_update_tracing_app_config_invalid_credentials( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.encrypt_tracing_config.return_value = {} + mock_otm.decrypt_tracing_config.return_value = {} + mock_otm.check_trace_config_is_effective.return_value = False + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, str(TracingProviderEnum.ARIZE)) + + with pytest.raises(ValueError, match="Invalid Credentials"): + OpsService.update_tracing_app_config(app.id, TracingProviderEnum.ARIZE, {}) + + def test_update_tracing_app_config_success( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + with patch("services.ops_service.OpsTraceManager") as mock_otm: + mock_otm.encrypt_tracing_config.return_value = {"updated": "config"} + mock_otm.decrypt_tracing_config.return_value = {} + mock_otm.check_trace_config_is_effective.return_value = True + + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, str(TracingProviderEnum.ARIZE)) + + result = OpsService.update_tracing_app_config(app.id, TracingProviderEnum.ARIZE, {}) + + assert result is not None + assert result["app_id"] == app.id + + # ── delete_tracing_app_config ────────────────────────────────────── + + def test_delete_tracing_app_config_no_config(self, db_session_with_containers: Session): + result = OpsService.delete_tracing_app_config(str(uuid.uuid4()), "arize") + assert result is None + + def test_delete_tracing_app_config_success( + self, db_session_with_containers: Session, mock_external_service_dependencies + ): + app, _ = self._create_app(db_session_with_containers, mock_external_service_dependencies) + self._insert_trace_config(db_session_with_containers, app.id, "arize") + + result = OpsService.delete_tracing_app_config(app.id, "arize") + + assert result is True + remaining = db_session_with_containers.scalar( + select(TraceAppConfig) + .where(TraceAppConfig.app_id == app.id, TraceAppConfig.tracing_provider == "arize") + .limit(1) + ) + assert remaining is None diff --git a/api/tests/test_containers_integration_tests/services/test_recommended_app_service.py b/api/tests/test_containers_integration_tests/services/test_recommended_app_service.py new file mode 100644 index 0000000000..ccc4188dbf --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_recommended_app_service.py @@ -0,0 +1,388 @@ +from __future__ import annotations + +import uuid +from types import SimpleNamespace +from typing import Any, cast +from unittest.mock import MagicMock, patch + +import pytest +from sqlalchemy import select +from sqlalchemy.orm import Session + +from models.model import AccountTrialAppRecord, TrialApp +from services import recommended_app_service as service_module +from services.recommended_app_service import RecommendedAppService + +# ── Helpers ──────────────────────────────────────────────────────────── + + +def _apps_response( + recommended_apps: list[dict] | None = None, + categories: list[str] | None = None, +) -> dict: + if recommended_apps is None: + recommended_apps = [ + {"id": "app-1", "name": "Test App 1", "description": "d1", "category": "productivity"}, + {"id": "app-2", "name": "Test App 2", "description": "d2", "category": "communication"}, + ] + if categories is None: + categories = ["productivity", "communication", "utilities"] + return {"recommended_apps": recommended_apps, "categories": categories} + + +def _app_detail( + app_id: str = "app-123", + name: str = "Test App", + description: str = "Test description", + **kwargs: Any, +) -> dict: + detail: dict[str, Any] = { + "id": app_id, + "name": name, + "description": description, + "category": kwargs.get("category", "productivity"), + "icon": kwargs.get("icon", "🚀"), + "model_config": kwargs.get("model_config", {}), + } + detail.update(kwargs) + return detail + + +def _recommendation_detail(result: dict[str, Any] | None) -> dict[str, Any] | None: + return cast("dict[str, Any] | None", result) + + +def _mock_factory_for_apps( + monkeypatch: pytest.MonkeyPatch, + *, + mode: str, + result: dict[str, Any], + fallback_result: dict[str, Any] | None = None, +) -> tuple[MagicMock, MagicMock]: + retrieval_instance = MagicMock() + retrieval_instance.get_recommended_apps_and_categories.return_value = result + retrieval_factory = MagicMock(return_value=retrieval_instance) + monkeypatch.setattr(service_module.dify_config, "HOSTED_FETCH_APP_TEMPLATES_MODE", mode, raising=False) + monkeypatch.setattr( + service_module.RecommendAppRetrievalFactory, + "get_recommend_app_factory", + MagicMock(return_value=retrieval_factory), + ) + builtin_instance = MagicMock() + if fallback_result is not None: + builtin_instance.fetch_recommended_apps_from_builtin.return_value = fallback_result + monkeypatch.setattr( + service_module.RecommendAppRetrievalFactory, + "get_buildin_recommend_app_retrieval", + MagicMock(return_value=builtin_instance), + ) + return retrieval_instance, builtin_instance + + +# ── Pure logic tests: get_recommended_apps_and_categories ────────────── + + +class TestRecommendedAppServiceGetApps: + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_success_with_apps(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" + expected = _apps_response() + + mock_instance = MagicMock() + mock_instance.get_recommended_apps_and_categories.return_value = expected + mock_factory = MagicMock(return_value=mock_instance) + mock_factory_class.get_recommend_app_factory.return_value = mock_factory + + result = RecommendedAppService.get_recommended_apps_and_categories("en-US") + + assert result == expected + assert len(result["recommended_apps"]) == 2 + assert len(result["categories"]) == 3 + mock_factory_class.get_recommend_app_factory.assert_called_once_with("remote") + mock_instance.get_recommended_apps_and_categories.assert_called_once_with("en-US") + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_fallback_to_builtin_when_empty(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" + empty_response = {"recommended_apps": [], "categories": []} + builtin_response = _apps_response( + recommended_apps=[{"id": "builtin-1", "name": "Builtin App", "category": "default"}] + ) + + mock_remote_instance = MagicMock() + mock_remote_instance.get_recommended_apps_and_categories.return_value = empty_response + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_remote_instance) + + mock_builtin_instance = MagicMock() + mock_builtin_instance.fetch_recommended_apps_from_builtin.return_value = builtin_response + mock_factory_class.get_buildin_recommend_app_retrieval.return_value = mock_builtin_instance + + result = RecommendedAppService.get_recommended_apps_and_categories("zh-CN") + + assert result == builtin_response + assert result["recommended_apps"][0]["id"] == "builtin-1" + mock_builtin_instance.fetch_recommended_apps_from_builtin.assert_called_once_with("en-US") + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_fallback_when_none_recommended_apps(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "db" + none_response = {"recommended_apps": None, "categories": ["test"]} + builtin_response = _apps_response() + + mock_db_instance = MagicMock() + mock_db_instance.get_recommended_apps_and_categories.return_value = none_response + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_db_instance) + + mock_builtin_instance = MagicMock() + mock_builtin_instance.fetch_recommended_apps_from_builtin.return_value = builtin_response + mock_factory_class.get_buildin_recommend_app_retrieval.return_value = mock_builtin_instance + + result = RecommendedAppService.get_recommended_apps_and_categories("en-US") + + assert result == builtin_response + mock_builtin_instance.fetch_recommended_apps_from_builtin.assert_called_once() + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_different_languages(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "builtin" + + for language in ["en-US", "zh-CN", "ja-JP", "fr-FR"]: + lang_response = _apps_response( + recommended_apps=[{"id": f"app-{language}", "name": f"App {language}", "category": "test"}] + ) + mock_instance = MagicMock() + mock_instance.get_recommended_apps_and_categories.return_value = lang_response + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + result = RecommendedAppService.get_recommended_apps_and_categories(language) + + assert result["recommended_apps"][0]["id"] == f"app-{language}" + mock_instance.get_recommended_apps_and_categories.assert_called_with(language) + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_uses_correct_factory_mode(self, mock_config, mock_factory_class): + for mode in ["remote", "builtin", "db"]: + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = mode + response = _apps_response() + mock_instance = MagicMock() + mock_instance.get_recommended_apps_and_categories.return_value = response + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + RecommendedAppService.get_recommended_apps_and_categories("en-US") + + mock_factory_class.get_recommend_app_factory.assert_called_with(mode) + + +# ── Pure logic tests: get_recommend_app_detail ───────────────────────── + + +class TestRecommendedAppServiceGetDetail: + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_success(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" + expected = _app_detail(app_id="app-123", name="Productivity App", description="A great app") + + mock_instance = MagicMock() + mock_instance.get_recommend_app_detail.return_value = expected + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail("app-123")) + + assert result == expected + assert result["id"] == "app-123" + mock_instance.get_recommend_app_detail.assert_called_once_with("app-123") + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_different_modes(self, mock_config, mock_factory_class): + for mode in ["remote", "builtin", "db"]: + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = mode + detail = _app_detail(app_id="test-app", name=f"App from {mode}") + mock_instance = MagicMock() + mock_instance.get_recommend_app_detail.return_value = detail + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail("test-app")) + + assert result["name"] == f"App from {mode}" + mock_factory_class.get_recommend_app_factory.assert_called_with(mode) + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_returns_none_when_not_found(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" + mock_instance = MagicMock() + mock_instance.get_recommend_app_detail.return_value = None + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail("nonexistent")) + + assert result is None + mock_instance.get_recommend_app_detail.assert_called_once_with("nonexistent") + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_returns_empty_dict(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "builtin" + mock_instance = MagicMock() + mock_instance.get_recommend_app_detail.return_value = {} + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail("app-empty")) + + assert result == {} + + @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) + @patch("services.recommended_app_service.dify_config", autospec=True) + def test_complex_model_config(self, mock_config, mock_factory_class): + mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" + complex_config = { + "provider": "openai", + "model": "gpt-4", + "parameters": {"temperature": 0.7, "max_tokens": 2000, "top_p": 1.0}, + } + expected = _app_detail( + app_id="complex-app", + name="Complex App", + model_config=complex_config, + workflows=["workflow-1", "workflow-2"], + tools=["tool-1", "tool-2", "tool-3"], + ) + mock_instance = MagicMock() + mock_instance.get_recommend_app_detail.return_value = expected + mock_factory_class.get_recommend_app_factory.return_value = MagicMock(return_value=mock_instance) + + result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail("complex-app")) + + assert result["model_config"] == complex_config + assert len(result["workflows"]) == 2 + assert len(result["tools"]) == 3 + + +# ── Integration tests: trial app features (real DB) ──────────────────── + + +class TestRecommendedAppServiceTrialFeatures: + def test_get_apps_should_not_query_trial_table_when_disabled( + self, db_session_with_containers: Session, monkeypatch: pytest.MonkeyPatch + ): + expected = {"recommended_apps": [{"app_id": "app-1"}], "categories": ["all"]} + retrieval_instance, builtin_instance = _mock_factory_for_apps(monkeypatch, mode="remote", result=expected) + monkeypatch.setattr( + service_module.FeatureService, + "get_system_features", + MagicMock(return_value=SimpleNamespace(enable_trial_app=False)), + ) + + result = RecommendedAppService.get_recommended_apps_and_categories("en-US") + + assert result == expected + retrieval_instance.get_recommended_apps_and_categories.assert_called_once_with("en-US") + builtin_instance.fetch_recommended_apps_from_builtin.assert_not_called() + + def test_get_apps_should_enrich_can_trial_when_enabled( + self, db_session_with_containers: Session, monkeypatch: pytest.MonkeyPatch + ): + app_id_1 = str(uuid.uuid4()) + app_id_2 = str(uuid.uuid4()) + tenant_id = str(uuid.uuid4()) + + # app_id_1 has a TrialApp record; app_id_2 does not + db_session_with_containers.add(TrialApp(app_id=app_id_1, tenant_id=tenant_id)) + db_session_with_containers.commit() + + remote_result = {"recommended_apps": [], "categories": []} + fallback_result = { + "recommended_apps": [{"app_id": app_id_1}, {"app_id": app_id_2}], + "categories": ["all"], + } + _, builtin_instance = _mock_factory_for_apps( + monkeypatch, mode="remote", result=remote_result, fallback_result=fallback_result + ) + monkeypatch.setattr( + service_module.FeatureService, + "get_system_features", + MagicMock(return_value=SimpleNamespace(enable_trial_app=True)), + ) + + result = RecommendedAppService.get_recommended_apps_and_categories("ja-JP") + + builtin_instance.fetch_recommended_apps_from_builtin.assert_called_once_with("en-US") + assert result["recommended_apps"][0]["can_trial"] is True + assert result["recommended_apps"][1]["can_trial"] is False + + @pytest.mark.parametrize("has_trial_app", [True, False]) + def test_get_detail_should_set_can_trial_when_enabled( + self, + db_session_with_containers: Session, + monkeypatch: pytest.MonkeyPatch, + has_trial_app: bool, + ): + app_id = str(uuid.uuid4()) + tenant_id = str(uuid.uuid4()) + + if has_trial_app: + db_session_with_containers.add(TrialApp(app_id=app_id, tenant_id=tenant_id)) + db_session_with_containers.commit() + + detail = {"id": app_id, "name": "Test App"} + retrieval_instance = MagicMock() + retrieval_instance.get_recommend_app_detail.return_value = detail + retrieval_factory = MagicMock(return_value=retrieval_instance) + monkeypatch.setattr(service_module.dify_config, "HOSTED_FETCH_APP_TEMPLATES_MODE", "remote", raising=False) + monkeypatch.setattr( + service_module.RecommendAppRetrievalFactory, + "get_recommend_app_factory", + MagicMock(return_value=retrieval_factory), + ) + monkeypatch.setattr( + service_module.FeatureService, + "get_system_features", + MagicMock(return_value=SimpleNamespace(enable_trial_app=True)), + ) + + result = cast(dict[str, Any], RecommendedAppService.get_recommend_app_detail(app_id)) + + assert result["id"] == app_id + assert result["can_trial"] is has_trial_app + + def test_add_trial_app_record_increments_count_for_existing(self, db_session_with_containers: Session): + app_id = str(uuid.uuid4()) + account_id = str(uuid.uuid4()) + + db_session_with_containers.add(AccountTrialAppRecord(app_id=app_id, account_id=account_id, count=3)) + db_session_with_containers.commit() + + RecommendedAppService.add_trial_app_record(app_id, account_id) + + db_session_with_containers.expire_all() + record = db_session_with_containers.scalar( + select(AccountTrialAppRecord) + .where(AccountTrialAppRecord.app_id == app_id, AccountTrialAppRecord.account_id == account_id) + .limit(1) + ) + assert record is not None + assert record.count == 4 + + def test_add_trial_app_record_creates_new_record(self, db_session_with_containers: Session): + app_id = str(uuid.uuid4()) + account_id = str(uuid.uuid4()) + + RecommendedAppService.add_trial_app_record(app_id, account_id) + + db_session_with_containers.expire_all() + record = db_session_with_containers.scalar( + select(AccountTrialAppRecord) + .where(AccountTrialAppRecord.app_id == app_id, AccountTrialAppRecord.account_id == account_id) + .limit(1) + ) + assert record is not None + assert record.app_id == app_id + assert record.account_id == account_id + assert record.count == 1 diff --git a/api/tests/test_containers_integration_tests/services/test_schedule_service.py b/api/tests/test_containers_integration_tests/services/test_schedule_service.py new file mode 100644 index 0000000000..87f3306258 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_schedule_service.py @@ -0,0 +1,387 @@ +"""Testcontainers integration tests for schedule service SQL-backed behavior.""" + +from datetime import datetime +from types import SimpleNamespace +from uuid import uuid4 + +import pytest +from sqlalchemy import delete, select +from sqlalchemy.orm import Session + +from core.workflow.nodes.trigger_schedule.entities import ScheduleConfig, SchedulePlanUpdate +from core.workflow.nodes.trigger_schedule.exc import ScheduleNotFoundError +from events.event_handlers.sync_workflow_schedule_when_app_published import sync_schedule_from_workflow +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models.trigger import WorkflowSchedulePlan +from services.errors.account import AccountNotFoundError +from services.trigger.schedule_service import ScheduleService + + +class ScheduleServiceIntegrationFactory: + @staticmethod + def create_account_with_tenant( + db_session_with_containers: Session, + role: TenantAccountRole = TenantAccountRole.OWNER, + ) -> tuple[Account, Tenant]: + account = Account( + email=f"{uuid4()}@example.com", + name=f"user-{uuid4()}", + interface_language="en-US", + status="active", + ) + tenant = Tenant(name=f"tenant-{uuid4()}", status="normal") + db_session_with_containers.add_all([account, tenant]) + db_session_with_containers.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.current_tenant = tenant + return account, tenant + + @staticmethod + def create_schedule_plan( + db_session_with_containers: Session, + *, + tenant_id: str, + app_id: str | None = None, + node_id: str = "start", + cron_expression: str = "30 10 * * *", + timezone: str = "UTC", + next_run_at: datetime | None = None, + ) -> WorkflowSchedulePlan: + schedule = WorkflowSchedulePlan( + tenant_id=tenant_id, + app_id=app_id or str(uuid4()), + node_id=node_id, + cron_expression=cron_expression, + timezone=timezone, + next_run_at=next_run_at, + ) + db_session_with_containers.add(schedule) + db_session_with_containers.commit() + return schedule + + +def _cron_workflow( + *, + node_id: str = "start", + cron_expression: str = "30 10 * * *", + timezone: str = "UTC", +): + return SimpleNamespace( + graph_dict={ + "nodes": [ + { + "id": node_id, + "data": { + "type": "trigger-schedule", + "mode": "cron", + "cron_expression": cron_expression, + "timezone": timezone, + }, + } + ] + } + ) + + +def _no_schedule_workflow(): + return SimpleNamespace( + graph_dict={ + "nodes": [ + { + "id": "node-1", + "data": {"type": "llm"}, + } + ] + } + ) + + +class TestScheduleServiceIntegration: + def test_create_schedule_persists_schedule(self, db_session_with_containers: Session): + account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + expected_next_run = datetime(2026, 1, 1, 10, 30, 0) + config = ScheduleConfig( + node_id="start", + cron_expression="30 10 * * *", + timezone="UTC", + ) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + schedule = ScheduleService.create_schedule( + session=db_session_with_containers, + tenant_id=tenant.id, + app_id=str(uuid4()), + config=config, + ) + + persisted = db_session_with_containers.get(WorkflowSchedulePlan, schedule.id) + assert persisted is not None + assert persisted.tenant_id == tenant.id + assert persisted.node_id == "start" + assert persisted.cron_expression == "30 10 * * *" + assert persisted.timezone == "UTC" + assert persisted.next_run_at == expected_next_run + + def test_update_schedule_updates_fields_and_recomputes_next_run(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + cron_expression="30 10 * * *", + timezone="UTC", + ) + expected_next_run = datetime(2026, 1, 2, 12, 0, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + updated = ScheduleService.update_schedule( + session=db_session_with_containers, + schedule_id=schedule.id, + updates=SchedulePlanUpdate( + cron_expression="0 12 * * *", + timezone="America/New_York", + ), + ) + + db_session_with_containers.refresh(updated) + assert updated.cron_expression == "0 12 * * *" + assert updated.timezone == "America/New_York" + assert updated.next_run_at == expected_next_run + + def test_update_schedule_updates_only_node_id_without_recomputing_time(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + initial_next_run = datetime(2026, 1, 1, 10, 0, 0) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + next_run_at=initial_next_run, + ) + + with pytest.MonkeyPatch.context() as monkeypatch: + calls: list[tuple] = [] + + def _track(*args, **kwargs): + calls.append((args, kwargs)) + return datetime(2026, 1, 9, 10, 0, 0) + + monkeypatch.setattr("services.trigger.schedule_service.calculate_next_run_at", _track) + updated = ScheduleService.update_schedule( + session=db_session_with_containers, + schedule_id=schedule.id, + updates=SchedulePlanUpdate(node_id="node-new"), + ) + + db_session_with_containers.refresh(updated) + assert updated.node_id == "node-new" + assert updated.next_run_at == initial_next_run + assert calls == [] + + def test_update_schedule_not_found_raises(self, db_session_with_containers: Session): + with pytest.raises(ScheduleNotFoundError, match="Schedule not found"): + ScheduleService.update_schedule( + session=db_session_with_containers, + schedule_id=str(uuid4()), + updates=SchedulePlanUpdate(node_id="node-new"), + ) + + def test_delete_schedule_removes_row(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + ) + + ScheduleService.delete_schedule( + session=db_session_with_containers, + schedule_id=schedule.id, + ) + db_session_with_containers.commit() + + assert db_session_with_containers.get(WorkflowSchedulePlan, schedule.id) is None + + def test_delete_schedule_not_found_raises(self, db_session_with_containers: Session): + with pytest.raises(ScheduleNotFoundError, match="Schedule not found"): + ScheduleService.delete_schedule( + session=db_session_with_containers, + schedule_id=str(uuid4()), + ) + + def test_get_tenant_owner_returns_owner_account(self, db_session_with_containers: Session): + owner, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant( + db_session_with_containers, + role=TenantAccountRole.OWNER, + ) + + result = ScheduleService.get_tenant_owner( + session=db_session_with_containers, + tenant_id=tenant.id, + ) + + assert result.id == owner.id + + def test_get_tenant_owner_falls_back_to_admin(self, db_session_with_containers: Session): + admin, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant( + db_session_with_containers, + role=TenantAccountRole.ADMIN, + ) + + result = ScheduleService.get_tenant_owner( + session=db_session_with_containers, + tenant_id=tenant.id, + ) + + assert result.id == admin.id + + def test_get_tenant_owner_raises_when_account_record_missing(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + db_session_with_containers.execute(delete(TenantAccountJoin)) + missing_account_id = str(uuid4()) + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=missing_account_id, + role=TenantAccountRole.OWNER, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + with pytest.raises(AccountNotFoundError, match=missing_account_id): + ScheduleService.get_tenant_owner(session=db_session_with_containers, tenant_id=tenant.id) + + def test_get_tenant_owner_raises_when_no_owner_or_admin_found(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.commit() + + with pytest.raises(AccountNotFoundError, match=tenant.id): + ScheduleService.get_tenant_owner(session=db_session_with_containers, tenant_id=tenant.id) + + def test_update_next_run_at_updates_persisted_value(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + ) + expected_next_run = datetime(2026, 1, 3, 10, 30, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + result = ScheduleService.update_next_run_at( + session=db_session_with_containers, + schedule_id=schedule.id, + ) + + db_session_with_containers.refresh(schedule) + assert result == expected_next_run + assert schedule.next_run_at == expected_next_run + + def test_update_next_run_at_raises_when_schedule_not_found(self, db_session_with_containers: Session): + with pytest.raises(ScheduleNotFoundError, match="Schedule not found"): + ScheduleService.update_next_run_at( + session=db_session_with_containers, + schedule_id=str(uuid4()), + ) + + +class TestSyncScheduleFromWorkflowIntegration: + def test_sync_schedule_create_new(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + expected_next_run = datetime(2026, 1, 4, 10, 30, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + result = sync_schedule_from_workflow( + tenant_id=tenant.id, + app_id=app_id, + workflow=_cron_workflow(), + ) + + assert result is not None + persisted = db_session_with_containers.execute( + select(WorkflowSchedulePlan).where(WorkflowSchedulePlan.app_id == app_id) + ).scalar_one() + assert persisted.node_id == "start" + assert persisted.cron_expression == "30 10 * * *" + assert persisted.timezone == "UTC" + assert persisted.next_run_at == expected_next_run + + def test_sync_schedule_update_existing(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + existing = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + app_id=app_id, + node_id="old-start", + cron_expression="30 10 * * *", + timezone="UTC", + ) + existing_id = existing.id + expected_next_run = datetime(2026, 1, 5, 12, 0, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + result = sync_schedule_from_workflow( + tenant_id=tenant.id, + app_id=app_id, + workflow=_cron_workflow( + node_id="start", + cron_expression="0 12 * * *", + timezone="America/New_York", + ), + ) + + assert result is not None + db_session_with_containers.expire_all() + persisted = db_session_with_containers.get(WorkflowSchedulePlan, existing_id) + assert persisted is not None + assert persisted.node_id == "start" + assert persisted.cron_expression == "0 12 * * *" + assert persisted.timezone == "America/New_York" + assert persisted.next_run_at == expected_next_run + + def test_sync_schedule_remove_when_no_config(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + existing = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + app_id=app_id, + ) + existing_id = existing.id + + result = sync_schedule_from_workflow( + tenant_id=tenant.id, + app_id=app_id, + workflow=_no_schedule_workflow(), + ) + + assert result is None + db_session_with_containers.expire_all() + assert db_session_with_containers.get(WorkflowSchedulePlan, existing_id) is None diff --git a/api/tests/test_containers_integration_tests/services/test_webapp_auth_service.py b/api/tests/test_containers_integration_tests/services/test_webapp_auth_service.py index 4fe65d5803..7825f502f7 100644 --- a/api/tests/test_containers_integration_tests/services/test_webapp_auth_service.py +++ b/api/tests/test_containers_integration_tests/services/test_webapp_auth_service.py @@ -233,11 +233,10 @@ class TestWebAppAuthService: assert result.status == AccountStatus.ACTIVE # Verify database state - - db_session_with_containers.refresh(result) - assert result.id is not None - assert result.password is not None - assert result.password_salt is not None + refreshed = db_session_with_containers.get(Account, result.id) + assert refreshed is not None + assert refreshed.password is not None + assert refreshed.password_salt is not None def test_authenticate_account_not_found( self, db_session_with_containers: Session, mock_external_service_dependencies @@ -414,9 +413,8 @@ class TestWebAppAuthService: assert result.status == AccountStatus.ACTIVE # Verify database state - - db_session_with_containers.refresh(result) - assert result.id is not None + refreshed = db_session_with_containers.get(Account, result.id) + assert refreshed is not None def test_get_user_through_email_not_found( self, db_session_with_containers: Session, mock_external_service_dependencies diff --git a/api/tests/test_containers_integration_tests/services/test_webhook_service_relationships.py b/api/tests/test_containers_integration_tests/services/test_webhook_service_relationships.py new file mode 100644 index 0000000000..ec10c51e04 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_webhook_service_relationships.py @@ -0,0 +1,507 @@ +from __future__ import annotations + +import json +from types import SimpleNamespace +from unittest.mock import MagicMock, patch +from uuid import uuid4 + +import pytest +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models.enums import AppTriggerStatus, AppTriggerType +from models.model import App +from models.trigger import AppTrigger, WorkflowWebhookTrigger +from models.workflow import Workflow +from services.errors.app import QuotaExceededError +from services.trigger.webhook_service import WebhookService + + +class WebhookServiceRelationshipFactory: + @staticmethod + def create_account_and_tenant(db_session_with_containers: Session) -> tuple[Account, Tenant]: + account = Account( + name=f"Account {uuid4()}", + email=f"webhook-{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) + db_session_with_containers.add(account) + db_session_with_containers.commit() + + tenant = Tenant(name=f"Tenant {uuid4()}", plan="basic", status="normal") + db_session_with_containers.add(tenant) + db_session_with_containers.commit() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=TenantAccountRole.OWNER, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.current_tenant = tenant + return account, tenant + + @staticmethod + def create_app(db_session_with_containers: Session, tenant: Tenant, account: Account) -> App: + app = App( + tenant_id=tenant.id, + name=f"Webhook App {uuid4()}", + description="", + mode="workflow", + icon_type="emoji", + icon="bot", + icon_background="#FFFFFF", + enable_site=False, + enable_api=True, + api_rpm=100, + api_rph=100, + is_demo=False, + is_public=False, + is_universal=False, + created_by=account.id, + updated_by=account.id, + ) + db_session_with_containers.add(app) + db_session_with_containers.commit() + return app + + @staticmethod + def create_workflow( + db_session_with_containers: Session, + *, + app: App, + account: Account, + node_ids: list[str], + version: str, + ) -> Workflow: + graph = { + "nodes": [ + { + "id": node_id, + "data": { + "type": TRIGGER_WEBHOOK_NODE_TYPE, + "title": f"Webhook {node_id}", + "method": "post", + "content_type": "application/json", + "headers": [], + "params": [], + "body": [], + "status_code": 200, + "response_body": '{"status": "ok"}', + "timeout": 30, + }, + } + for node_id in node_ids + ], + "edges": [], + } + + workflow = Workflow( + tenant_id=app.tenant_id, + app_id=app.id, + type="workflow", + graph=json.dumps(graph), + features=json.dumps({}), + created_by=account.id, + updated_by=account.id, + environment_variables=[], + conversation_variables=[], + version=version, + ) + db_session_with_containers.add(workflow) + db_session_with_containers.commit() + return workflow + + @staticmethod + def create_webhook_trigger( + db_session_with_containers: Session, + *, + app: App, + account: Account, + node_id: str, + webhook_id: str | None = None, + ) -> WorkflowWebhookTrigger: + webhook_trigger = WorkflowWebhookTrigger( + app_id=app.id, + node_id=node_id, + tenant_id=app.tenant_id, + webhook_id=webhook_id or uuid4().hex[:24], + created_by=account.id, + ) + db_session_with_containers.add(webhook_trigger) + db_session_with_containers.commit() + return webhook_trigger + + @staticmethod + def create_app_trigger( + db_session_with_containers: Session, + *, + app: App, + node_id: str, + status: AppTriggerStatus, + ) -> AppTrigger: + app_trigger = AppTrigger( + tenant_id=app.tenant_id, + app_id=app.id, + node_id=node_id, + trigger_type=AppTriggerType.TRIGGER_WEBHOOK, + provider_name="webhook", + title=f"Webhook {node_id}", + status=status, + ) + db_session_with_containers.add(app_trigger) + db_session_with_containers.commit() + return app_trigger + + +class TestWebhookServiceLookupWithContainers: + def test_get_webhook_trigger_and_workflow_raises_when_app_trigger_missing( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + with pytest.raises(ValueError, match="App trigger not found"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_raises_when_app_trigger_rate_limited( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + factory.create_app_trigger( + db_session_with_containers, app=app, node_id="node-1", status=AppTriggerStatus.RATE_LIMITED + ) + + with pytest.raises(ValueError, match="rate limited"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_raises_when_app_trigger_disabled( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + factory.create_app_trigger( + db_session_with_containers, app=app, node_id="node-1", status=AppTriggerStatus.DISABLED + ) + + with pytest.raises(ValueError, match="disabled"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_raises_when_workflow_missing( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + factory.create_app_trigger( + db_session_with_containers, app=app, node_id="node-1", status=AppTriggerStatus.ENABLED + ) + + with pytest.raises(ValueError, match="Workflow not found"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_returns_debug_draft_workflow( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["published-node"], + version="2026-04-14.001", + ) + draft_workflow = factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["debug-node"], + version=Workflow.VERSION_DRAFT, + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="debug-node" + ) + + got_trigger, got_workflow, got_node_config = WebhookService.get_webhook_trigger_and_workflow( + webhook_trigger.webhook_id, + is_debug=True, + ) + + assert got_trigger.id == webhook_trigger.id + assert got_workflow.id == draft_workflow.id + assert got_node_config["id"] == "debug-node" + + +class TestWebhookServiceTriggerExecutionWithContainers: + def test_trigger_workflow_execution_triggers_async_workflow_successfully( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + end_user = SimpleNamespace(id=str(uuid4())) + webhook_data = {"body": {"value": 1}, "headers": {}, "query_params": {}, "files": {}, "method": "POST"} + + with ( + patch( + "services.trigger.webhook_service.EndUserService.get_or_create_end_user_by_type", + return_value=end_user, + ), + patch("services.trigger.webhook_service.QuotaType.TRIGGER.consume") as mock_consume, + patch("services.trigger.webhook_service.AsyncWorkflowService.trigger_workflow_async") as mock_trigger, + ): + WebhookService.trigger_workflow_execution(webhook_trigger, webhook_data, workflow) + + mock_consume.assert_called_once_with(webhook_trigger.tenant_id) + mock_trigger.assert_called_once() + trigger_args = mock_trigger.call_args.args + assert trigger_args[1] is end_user + assert trigger_args[2].workflow_id == workflow.id + assert trigger_args[2].root_node_id == webhook_trigger.node_id + + def test_trigger_workflow_execution_marks_tenant_rate_limited_when_quota_exceeded( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + with ( + patch( + "services.trigger.webhook_service.EndUserService.get_or_create_end_user_by_type", + return_value=SimpleNamespace(id=str(uuid4())), + ), + patch( + "services.trigger.webhook_service.QuotaType.TRIGGER.consume", + side_effect=QuotaExceededError(feature="trigger", tenant_id=tenant.id, required=1), + ), + patch( + "services.trigger.webhook_service.AppTriggerService.mark_tenant_triggers_rate_limited" + ) as mock_mark_rate_limited, + ): + with pytest.raises(QuotaExceededError): + WebhookService.trigger_workflow_execution( + webhook_trigger, + {"body": {}, "headers": {}, "query_params": {}, "files": {}, "method": "POST"}, + workflow, + ) + + mock_mark_rate_limited.assert_called_once_with(tenant.id) + + def test_trigger_workflow_execution_logs_and_reraises_unexpected_errors( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + with ( + patch( + "services.trigger.webhook_service.EndUserService.get_or_create_end_user_by_type", + side_effect=RuntimeError("boom"), + ), + patch("services.trigger.webhook_service.logger.exception") as mock_logger_exception, + ): + with pytest.raises(RuntimeError, match="boom"): + WebhookService.trigger_workflow_execution( + webhook_trigger, + {"body": {}, "headers": {}, "query_params": {}, "files": {}, "method": "POST"}, + workflow, + ) + + mock_logger_exception.assert_called_once() + + +class TestWebhookServiceRelationshipSyncWithContainers: + def test_sync_webhook_relationships_raises_when_workflow_exceeds_node_limit( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + node_ids = [f"node-{index}" for index in range(WebhookService.MAX_WEBHOOK_NODES_PER_WORKFLOW + 1)] + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=node_ids, version=Workflow.VERSION_DRAFT + ) + + with pytest.raises(ValueError, match="maximum webhook node limit"): + WebhookService.sync_webhook_relationships(app, workflow) + + def test_sync_webhook_relationships_raises_when_lock_not_acquired( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version=Workflow.VERSION_DRAFT + ) + lock = MagicMock() + lock.acquire.return_value = False + + with patch("services.trigger.webhook_service.redis_client.lock", return_value=lock): + with pytest.raises(RuntimeError, match="Failed to acquire lock"): + WebhookService.sync_webhook_relationships(app, workflow) + + def test_sync_webhook_relationships_creates_missing_records_and_deletes_stale_records( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + stale_trigger = factory.create_webhook_trigger( + db_session_with_containers, + app=app, + account=account, + node_id="node-stale", + webhook_id="stale-webhook-id-000001", + ) + stale_trigger_id = stale_trigger.id + workflow = factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["node-new"], + version=Workflow.VERSION_DRAFT, + ) + + with patch( + "services.trigger.webhook_service.WebhookService.generate_webhook_id", return_value="new-webhook-id-000001" + ): + WebhookService.sync_webhook_relationships(app, workflow) + + db_session_with_containers.expire_all() + records = db_session_with_containers.scalars( + select(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.app_id == app.id) + ).all() + + assert [record.node_id for record in records] == ["node-new"] + assert records[0].webhook_id == "new-webhook-id-000001" + assert db_session_with_containers.get(WorkflowWebhookTrigger, stale_trigger_id) is None + + def test_sync_webhook_relationships_sets_redis_cache_for_new_record( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["node-cache"], + version=Workflow.VERSION_DRAFT, + ) + cache_key = f"{WebhookService.__WEBHOOK_NODE_CACHE_KEY__}:{app.id}:node-cache" + + with patch( + "services.trigger.webhook_service.WebhookService.generate_webhook_id", return_value="cache-webhook-id-00001" + ): + WebhookService.sync_webhook_relationships(app, workflow) + + cached_payload = WebhookServiceRelationshipFactory._read_cache(cache_key) + assert cached_payload is not None + assert cached_payload["node_id"] == "node-cache" + assert cached_payload["webhook_id"] == "cache-webhook-id-00001" + + def test_sync_webhook_relationships_logs_when_lock_release_fails( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=[], version=Workflow.VERSION_DRAFT + ) + lock = MagicMock() + lock.acquire.return_value = True + lock.release.side_effect = RuntimeError("release failed") + + with ( + patch("services.trigger.webhook_service.redis_client.lock", return_value=lock), + patch("services.trigger.webhook_service.logger.exception") as mock_logger_exception, + ): + WebhookService.sync_webhook_relationships(app, workflow) + + mock_logger_exception.assert_called_once() + + +def _read_cache(cache_key: str) -> dict[str, str] | None: + from extensions.ext_redis import redis_client + + cached = redis_client.get(cache_key) + if not cached: + return None + if isinstance(cached, bytes): + cached = cached.decode("utf-8") + return json.loads(cached) + + +WebhookServiceRelationshipFactory._read_cache = staticmethod(_read_cache) diff --git a/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py b/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py index 749c6fff5b..1e57b5603d 100644 --- a/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py +++ b/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py @@ -8,9 +8,9 @@ from unittest.mock import patch import pytest from faker import Faker -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import Session +from graphon.enums import WorkflowExecutionStatus from models import EndUser, Workflow, WorkflowAppLog, WorkflowArchiveLog, WorkflowRun from models.enums import AppTriggerType, CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowAppLogCreatedFrom diff --git a/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py b/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py index 0c281c8c33..86cf2327c7 100644 --- a/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py +++ b/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py @@ -1,9 +1,9 @@ import pytest from faker import Faker -from graphon.variables.segments import StringSegment from sqlalchemy.orm import Session from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID +from graphon.variables.segments import StringSegment from models import App, Workflow from models.enums import DraftVariableType from models.workflow import WorkflowDraftVariable diff --git a/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py b/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py index d3e765055a..af83adaae0 100644 --- a/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py +++ b/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py @@ -1,3 +1,5 @@ +import inspect +import json from unittest.mock import patch import pytest @@ -6,6 +8,8 @@ from pydantic import TypeAdapter, ValidationError from sqlalchemy.orm import Session from core.tools.entities.tool_entities import ApiProviderSchemaType +from core.tools.errors import ApiToolProviderNotFoundError +from core.tools.tool_label_manager import ToolLabelManager from models import Account, Tenant from models.tools import ApiToolProvider from services.tools.api_tools_manage_service import ApiToolManageService @@ -590,30 +594,204 @@ class TestApiToolManageService: with pytest.raises(ValueError, match="you have not added provider"): ApiToolManageService.delete_api_tool_provider(account.id, tenant.id, "nonexistent") - def test_update_api_tool_provider_not_found( + def test_update_api_tool_provider_success( self, flask_req_ctx_with_containers, db_session_with_containers: Session, mock_external_service_dependencies ): - """Test update raises ValueError when original provider not found.""" fake = Faker() + + # Firmware fix for cache.delete() in update flow + mock_encrypter = mock_external_service_dependencies["encrypter"] + from unittest.mock import MagicMock + + mock_cache = MagicMock() + mock_cache.delete.return_value = None + mock_encrypter.return_value = (mock_encrypter, mock_cache) + + # Get fake account and tenant account, tenant = self._create_test_account_and_tenant( db_session_with_containers, mock_external_service_dependencies ) - with pytest.raises(ValueError, match="does not exists"): - ApiToolManageService.update_api_tool_provider( + # original provider name + original_name = "original-provider" + + # Create original provider + _ = ApiToolManageService.create_api_tool_provider( + user_id=account.id, + tenant_id=tenant.id, + provider_name=original_name, + icon={"type": "emoji", "value": "🔧"}, + credentials={"auth_type": "none"}, + schema_type=ApiProviderSchemaType.OPENAPI, + schema=self._create_test_openapi_schema(), + privacy_policy="", + custom_disclaimer="", + labels=["old-label"], + ) + + # new provide name and new labels for update + new_name = "updated-provider" + new_labels = ["new-label-1", "new-label-2"] + + # Reset mock history so assertions focus on update path only + mock_external_service_dependencies["encrypter"].reset_mock() + mock_external_service_dependencies["provider_controller"].from_db.reset_mock() + mock_external_service_dependencies["tool_label_manager"].update_tool_labels.reset_mock() + + # Act: Update the provider with new values + result = ApiToolManageService.update_api_tool_provider( + user_id=account.id, + tenant_id=tenant.id, + # new provider name - changed 1 + provider_name=new_name, + original_provider=original_name, + # new icon - changed 2 + icon={"type": "emoji", "value": "🚀"}, + credentials={"auth_type": "none"}, + _schema_type=ApiProviderSchemaType.OPENAPI, + schema=self._create_test_openapi_schema(), + # new privacy policy - changed 3 + privacy_policy="https://new-policy.com", + # new custom disclaimer - changed 4 + custom_disclaimer="New disclaimer", + # new labels - changed 5 (However, we will not verify this, not this layer responsibility.) + labels=new_labels, + ) + + # Assert: Verify the result + assert result == {"result": "success"} + + # Get the updated provider from the database + updated_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == new_name) + .first() + ) + + # Verify the provider was updated successfully + assert updated_provider is not None + + # Manually refresh to keep object detachment + db_session_with_containers.refresh(updated_provider) + # Verify all the updated fields + # - changed 1 + assert updated_provider.name == new_name + # - changed 2 + icon_data = json.loads(updated_provider.icon) + assert icon_data["type"] == "emoji" + assert icon_data["value"] == "🚀" + # - changed 3 + assert updated_provider.privacy_policy == "https://new-policy.com" + # - changed 4 + assert updated_provider.custom_disclaimer == "New disclaimer" + + # Verify old provider name no longer exists after rename + original_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == original_name) + .first() + ) + assert original_provider is None + + # Verify update flow calls critical collaborators + mock_external_service_dependencies["provider_controller"].from_db.assert_called_once() + mock_external_service_dependencies["encrypter"].assert_called_once() + mock_cache.delete.assert_called_once() + + # Deeply verify on session propagation of labels update logics: + # Since in refactoring, we pass session down to label manager to keep atomicity. + # The assertion here is to verify this. + sig = inspect.signature(ToolLabelManager.update_tool_labels) + args, kwargs = mock_external_service_dependencies["tool_label_manager"].update_tool_labels.call_args + bound_args = sig.bind(*args, **kwargs) + passed_session = bound_args.arguments.get("session") + # Ensure the type: Session + assert isinstance(passed_session, Session), f"Expected Session object, got {type(passed_session)}" + assert passed_session is not None, ( + "Atomicity Failure: Session cannot be passed to Label Manager in update_api_tool_provider" + ) + + def test_update_api_tool_provider_not_found( + self, flask_req_ctx_with_containers, db_session_with_containers: Session, mock_external_service_dependencies + ): + """ + Test update raises ValueError when original provider not found. + + This test verifies: + - Proper error when trying to update a non-existing original provider + - No accidental upsert/new provider creation + - No external dependency invocation on early failure path + """ + # Arrange: Create test account and tenant + account, tenant = self._create_test_account_and_tenant( + db_session_with_containers, mock_external_service_dependencies + ) + + # Keep an existing provider in DB to ensure unrelated data remains unchanged + existing_provider_name = "existing-provider" + _ = ApiToolManageService.create_api_tool_provider( + user_id=account.id, + tenant_id=tenant.id, + provider_name=existing_provider_name, + icon={"type": "emoji", "value": "🔧"}, + credentials={"auth_type": "none"}, + schema_type=ApiProviderSchemaType.OPENAPI, + schema=self._create_test_openapi_schema(), + privacy_policy="https://existing-policy.com", + custom_disclaimer="Existing disclaimer", + labels=["existing-label"], + ) + + # Reset mock history so assertions focus on update failure path only + mock_external_service_dependencies["tool_label_manager"].update_tool_labels.reset_mock() + mock_external_service_dependencies["encrypter"].reset_mock() + mock_external_service_dependencies["provider_controller"].from_db.reset_mock() + + # Act & Assert: Verify update fails with clear error message + target_new_name = "new-provider-name" + missing_original_name = "missing-original-provider" + with pytest.raises(ApiToolProviderNotFoundError) as exc_info: + _ = ApiToolManageService.update_api_tool_provider( user_id=account.id, tenant_id=tenant.id, - provider_name="new-name", - original_provider="nonexistent", - icon={}, + provider_name=target_new_name, + original_provider=missing_original_name, + icon={"type": "emoji", "value": "🚀"}, credentials={"auth_type": "none"}, _schema_type=ApiProviderSchemaType.OPENAPI, schema=self._create_test_openapi_schema(), - privacy_policy=None, - custom_disclaimer="", - labels=[], + privacy_policy="https://new-policy.com", + custom_disclaimer="New disclaimer", + labels=["new-label"], ) + error = exc_info.value + assert error.provider_name == missing_original_name + assert error.tenant_id == tenant.id + assert error.error_code == "api_tool_provider_not_found" + + # Assert: Existing provider should remain unchanged + existing_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == existing_provider_name) + .first() + ) + assert existing_provider is not None + assert existing_provider.name == existing_provider_name + + # Assert: No new provider should be created + unexpected_new_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == target_new_name) + .first() + ) + assert unexpected_new_provider is None + + # Assert: Early failure should skip all downstream external interactions + mock_external_service_dependencies["tool_label_manager"].update_tool_labels.assert_not_called() + mock_external_service_dependencies["encrypter"].assert_not_called() + mock_external_service_dependencies["provider_controller"].from_db.assert_not_called() + def test_update_api_tool_provider_missing_auth_type( self, flask_req_ctx_with_containers, db_session_with_containers: Session, mock_external_service_dependencies ): diff --git a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py index ce2fd2eeb1..ce5c2bd162 100644 --- a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py +++ b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py @@ -5,9 +5,6 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.message_entities import PromptMessageRole -from graphon.variables.input_entities import VariableEntity, VariableEntityType from sqlalchemy.orm import Session from core.app.app_config.entities import ( @@ -21,6 +18,9 @@ from core.app.app_config.entities import ( PromptTemplateEntity, ) from core.prompt.utils.prompt_template_parser import PromptTemplateParser +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.message_entities import PromptMessageRole +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models import Account, Tenant from models.api_based_extension import APIBasedExtension, APIBasedExtensionPoint from models.model import App, AppMode, AppModelConfig diff --git a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py index 7c43bf676b..4dab895135 100644 --- a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py +++ b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py @@ -1,10 +1,10 @@ from datetime import datetime, timedelta from uuid import uuid4 -from graphon.enums import WorkflowNodeExecutionStatus from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowNodeExecutionStatus from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole from models.workflow import WorkflowNodeExecutionModel diff --git a/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py index 4b04c1accb..fcc15aad42 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py @@ -2,6 +2,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -530,22 +531,18 @@ class TestAddDocumentToIndexTask: redis_client.set(indexing_cache_key, "processing", ex=300) # Verify logs exist before processing - existing_logs = ( - db_session_with_containers.query(DatasetAutoDisableLog) - .where(DatasetAutoDisableLog.document_id == document.id) - .all() - ) + existing_logs = db_session_with_containers.scalars( + select(DatasetAutoDisableLog).where(DatasetAutoDisableLog.document_id == document.id) + ).all() assert len(existing_logs) == 2 # Act: Execute the task add_document_to_index_task(document.id) # Assert: Verify auto disable logs were deleted - remaining_logs = ( - db_session_with_containers.query(DatasetAutoDisableLog) - .where(DatasetAutoDisableLog.document_id == document.id) - .all() - ) + remaining_logs = db_session_with_containers.scalars( + select(DatasetAutoDisableLog).where(DatasetAutoDisableLog.document_id == document.id) + ).all() assert len(remaining_logs) == 0 # Verify index processing occurred normally diff --git a/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py b/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py index 6cbbe43137..e29ca7ebab 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py @@ -11,6 +11,7 @@ from unittest.mock import Mock, patch import pytest from faker import Faker +from sqlalchemy import func, select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType @@ -267,11 +268,13 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Ensure all changes are committed # Check that segment is deleted - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None def test_batch_clean_document_task_with_image_files( @@ -319,7 +322,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that segment is deleted - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Verify that the task completed successfully by checking the log output @@ -360,14 +365,14 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None # Verify database cleanup db_session_with_containers.commit() # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None def test_batch_clean_document_task_dataset_not_found( @@ -410,7 +415,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Document should still exist since cleanup failed - existing_document = db_session_with_containers.query(Document).filter_by(id=document_id).first() + existing_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document_id).limit(1) + ) assert existing_document is not None def test_batch_clean_document_task_storage_cleanup_failure( @@ -453,11 +460,13 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that segment is deleted from database - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that upload file is deleted from database - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None def test_batch_clean_document_task_multiple_documents( @@ -510,12 +519,16 @@ class TestBatchCleanDocumentTask: # Check that all segments are deleted for segment_id in segment_ids: - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that all upload files are deleted for file_id in file_ids: - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar( + select(UploadFile).where(UploadFile.id == file_id).limit(1) + ) assert deleted_file is None def test_batch_clean_document_task_different_doc_forms( @@ -564,7 +577,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that segment is deleted - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None except Exception as e: @@ -574,7 +589,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check if the segment still exists (task may have failed before deletion) - existing_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + existing_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) if existing_segment is not None: # If segment still exists, the task failed before deletion # This is acceptable in test environments with external service issues @@ -645,12 +662,16 @@ class TestBatchCleanDocumentTask: # Check that all segments are deleted for segment_id in segment_ids: - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that all upload files are deleted for file_id in file_ids: - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar( + select(UploadFile).where(UploadFile.id == file_id).limit(1) + ) assert deleted_file is None def test_batch_clean_document_task_integration_with_real_database( @@ -699,8 +720,16 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Verify initial state - assert db_session_with_containers.query(DocumentSegment).filter_by(document_id=document.id).count() == 3 - assert db_session_with_containers.query(UploadFile).filter_by(id=upload_file.id).first() is not None + assert ( + db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document.id) + ) + == 3 + ) + assert ( + db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == upload_file.id).limit(1)) + is not None + ) # Store original IDs for verification document_id = document.id @@ -720,13 +749,20 @@ class TestBatchCleanDocumentTask: # Check that all segments are deleted for segment_id in segment_ids: - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None # Verify final database state - assert db_session_with_containers.query(DocumentSegment).filter_by(document_id=document_id).count() == 0 - assert db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() is None + assert ( + db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document_id) + ) + == 0 + ) + assert db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) is None diff --git a/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py index f9ae33b32f..05827112d4 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py @@ -17,6 +17,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete, select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -37,13 +38,13 @@ class TestBatchCreateSegmentToIndexTask: from extensions.ext_redis import redis_client # Clear all test data - db_session_with_containers.query(DocumentSegment).delete() - db_session_with_containers.query(Document).delete() - db_session_with_containers.query(Dataset).delete() - db_session_with_containers.query(UploadFile).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(DocumentSegment)) + db_session_with_containers.execute(delete(Document)) + db_session_with_containers.execute(delete(Dataset)) + db_session_with_containers.execute(delete(UploadFile)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache @@ -292,12 +293,9 @@ class TestBatchCreateSegmentToIndexTask: # Verify results # Check that segments were created - segments = ( - db_session_with_containers.query(DocumentSegment) - .filter_by(document_id=document.id) - .order_by(DocumentSegment.position) - .all() - ) + segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.document_id == document.id).order_by(DocumentSegment.position) + ).all() assert len(segments) == 3 # Verify segment content and metadata @@ -367,11 +365,11 @@ class TestBatchCreateSegmentToIndexTask: # Verify no segments were created (since dataset doesn't exist) - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify no documents were modified - documents = db_session_with_containers.query(Document).all() + documents = db_session_with_containers.scalars(select(Document)).all() assert len(documents) == 0 def test_batch_create_segment_to_index_task_document_not_found( @@ -415,12 +413,14 @@ class TestBatchCreateSegmentToIndexTask: # Verify no segments were created - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify dataset remains unchanged (no segments were added to the dataset) db_session_with_containers.refresh(dataset) - segments_for_dataset = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + segments_for_dataset = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(segments_for_dataset) == 0 def test_batch_create_segment_to_index_task_document_not_available( @@ -516,7 +516,9 @@ class TestBatchCreateSegmentToIndexTask: assert cache_value == b"error" # Verify no segments were created - segments = db_session_with_containers.query(DocumentSegment).filter_by(document_id=document.id).all() + segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.document_id == document.id) + ).all() assert len(segments) == 0 def test_batch_create_segment_to_index_task_upload_file_not_found( @@ -560,7 +562,7 @@ class TestBatchCreateSegmentToIndexTask: # Verify no segments were created - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify document remains unchanged @@ -611,7 +613,7 @@ class TestBatchCreateSegmentToIndexTask: # Verify error handling # Since exception was raised, no segments should be created - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify document remains unchanged @@ -682,12 +684,9 @@ class TestBatchCreateSegmentToIndexTask: # Verify results # Check that new segments were created with correct positions - all_segments = ( - db_session_with_containers.query(DocumentSegment) - .filter_by(document_id=document.id) - .order_by(DocumentSegment.position) - .all() - ) + all_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.document_id == document.id).order_by(DocumentSegment.position) + ).all() assert len(all_segments) == 6 # 3 existing + 3 new # Verify position ordering diff --git a/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py b/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py index 1dd37fbc92..32bc2fc0bd 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py @@ -16,6 +16,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete, select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -52,18 +53,18 @@ class TestCleanDatasetTask: from extensions.ext_redis import redis_client # Clear all test data using the provided session fixture - db_session_with_containers.query(DatasetMetadataBinding).delete() - db_session_with_containers.query(DatasetMetadata).delete() - db_session_with_containers.query(AppDatasetJoin).delete() - db_session_with_containers.query(DatasetQuery).delete() - db_session_with_containers.query(DatasetProcessRule).delete() - db_session_with_containers.query(DocumentSegment).delete() - db_session_with_containers.query(Document).delete() - db_session_with_containers.query(Dataset).delete() - db_session_with_containers.query(UploadFile).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(DatasetMetadataBinding)) + db_session_with_containers.execute(delete(DatasetMetadata)) + db_session_with_containers.execute(delete(AppDatasetJoin)) + db_session_with_containers.execute(delete(DatasetQuery)) + db_session_with_containers.execute(delete(DatasetProcessRule)) + db_session_with_containers.execute(delete(DocumentSegment)) + db_session_with_containers.execute(delete(Document)) + db_session_with_containers.execute(delete(Dataset)) + db_session_with_containers.execute(delete(UploadFile)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache @@ -302,28 +303,40 @@ class TestCleanDatasetTask: # Verify results # Check that dataset-related data was cleaned up - documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + documents = db_session_with_containers.scalars(select(Document).where(Document.dataset_id == dataset.id)).all() assert len(documents) == 0 - segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(segments) == 0 # Check that metadata and bindings were cleaned up - metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(metadata) == 0 - bindings = db_session_with_containers.query(DatasetMetadataBinding).filter_by(dataset_id=dataset.id).all() + bindings = db_session_with_containers.scalars( + select(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset.id) + ).all() assert len(bindings) == 0 # Check that process rules and queries were cleaned up - process_rules = db_session_with_containers.query(DatasetProcessRule).filter_by(dataset_id=dataset.id).all() + process_rules = db_session_with_containers.scalars( + select(DatasetProcessRule).where(DatasetProcessRule.dataset_id == dataset.id) + ).all() assert len(process_rules) == 0 - queries = db_session_with_containers.query(DatasetQuery).filter_by(dataset_id=dataset.id).all() + queries = db_session_with_containers.scalars( + select(DatasetQuery).where(DatasetQuery.dataset_id == dataset.id) + ).all() assert len(queries) == 0 # Check that app dataset joins were cleaned up - app_joins = db_session_with_containers.query(AppDatasetJoin).filter_by(dataset_id=dataset.id).all() + app_joins = db_session_with_containers.scalars( + select(AppDatasetJoin).where(AppDatasetJoin.dataset_id == dataset.id) + ).all() assert len(app_joins) == 0 # Verify index processor was called @@ -414,24 +427,32 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all upload files were deleted - remaining_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ).all() assert len(remaining_files) == 0 # Check that metadata and bindings were cleaned up - remaining_metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + remaining_metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(remaining_metadata) == 0 - remaining_bindings = ( - db_session_with_containers.query(DatasetMetadataBinding).filter_by(dataset_id=dataset.id).all() - ) + remaining_bindings = db_session_with_containers.scalars( + select(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset.id) + ).all() assert len(remaining_bindings) == 0 # Verify index processor was called @@ -485,12 +506,14 @@ class TestCleanDatasetTask: # Check that all data was cleaned up - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 - remaining_segments = ( - db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() - ) + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Recreate data for next test case @@ -538,11 +561,15 @@ class TestCleanDatasetTask: # Verify results - even with vector cleanup failure, documents and segments should be deleted # Check that documents were still deleted despite vector cleanup failure - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that segments were still deleted despite vector cleanup failure - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Verify that index processor was called and failed @@ -622,18 +649,22 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all image files were deleted from database image_file_ids = [f.id for f in image_files] - remaining_image_files = ( - db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(image_file_ids)).all() - ) + remaining_image_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id.in_(image_file_ids)) + ).all() assert len(remaining_image_files) == 0 # Verify that storage.delete was called for each image file @@ -738,24 +769,32 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all upload files were deleted - remaining_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ).all() assert len(remaining_files) == 0 # Check that all metadata and bindings were deleted - remaining_metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + remaining_metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(remaining_metadata) == 0 - remaining_bindings = ( - db_session_with_containers.query(DatasetMetadataBinding).filter_by(dataset_id=dataset.id).all() - ) + remaining_bindings = db_session_with_containers.scalars( + select(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset.id) + ).all() assert len(remaining_bindings) == 0 # Verify performance expectations @@ -826,7 +865,9 @@ class TestCleanDatasetTask: # Check that upload file was still deleted from database despite storage failure # Note: When storage operations fail, the upload file may not be deleted # This demonstrates that the cleanup process continues even with storage errors - remaining_files = db_session_with_containers.query(UploadFile).filter_by(id=upload_file.id).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id == upload_file.id) + ).all() # The upload file should still be deleted from the database even if storage cleanup fails # However, this depends on the specific implementation of clean_dataset_task if len(remaining_files) > 0: @@ -976,19 +1017,27 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all upload files were deleted - remaining_files = db_session_with_containers.query(UploadFile).filter_by(id=upload_file_id).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id == upload_file_id) + ).all() assert len(remaining_files) == 0 # Check that all metadata was deleted - remaining_metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + remaining_metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(remaining_metadata) == 0 # Verify that storage.delete was called diff --git a/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py b/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py index 926c839c8b..fa3ac12cf0 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py @@ -11,6 +11,8 @@ from unittest.mock import Mock, patch import pytest from faker import Faker +from sqlalchemy import ColumnElement, func, select +from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType from models.dataset import Dataset, Document, DocumentSegment @@ -20,6 +22,14 @@ from tasks.clean_notion_document_task import clean_notion_document_task from tests.test_containers_integration_tests.helpers import generate_valid_password +def _count_documents(session: Session, condition: ColumnElement[bool]) -> int: + return session.scalar(select(func.count()).select_from(Document).where(condition)) or 0 + + +def _count_segments(session: Session, condition: ColumnElement[bool]) -> int: + return session.scalar(select(func.count()).select_from(DocumentSegment).where(condition)) or 0 + + class TestCleanNotionDocumentTask: """Integration tests for clean_notion_document_task using testcontainers.""" @@ -145,24 +155,14 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.id.in_(document_ids)).count() == 3 - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(document_ids)) - .count() - == 6 - ) + assert _count_documents(db_session_with_containers, Document.id.in_(document_ids)) == 3 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(document_ids)) == 6 # Execute cleanup task clean_notion_document_task(document_ids, dataset.id) # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(document_ids)) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(document_ids)) == 0 # Verify index processor was called mock_processor = mock_index_processor_factory.return_value.init_index_processor.return_value @@ -322,12 +322,7 @@ class TestCleanNotionDocumentTask: # The task properly handles various index types and document configurations. # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id == document.id) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Reset mock for next iteration mock_index_processor_factory.reset_mock() @@ -410,10 +405,7 @@ class TestCleanNotionDocumentTask: clean_notion_document_task([document.id], dataset.id) # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Note: This test successfully verifies that segments without index_node_ids # are properly deleted from the database. @@ -499,11 +491,8 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.dataset_id == dataset.id).count() == 5 - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() - == 10 - ) + assert _count_documents(db_session_with_containers, Document.dataset_id == dataset.id) == 5 + assert _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == 10 # Clean up only first 3 documents documents_to_clean = [doc.id for doc in documents[:3]] @@ -513,22 +502,12 @@ class TestCleanNotionDocumentTask: clean_notion_document_task(documents_to_clean, dataset.id) # Verify only specified documents' segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(documents_to_clean)) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(documents_to_clean)) == 0 # Verify remaining documents and segments are intact remaining_docs = [doc.id for doc in documents[3:]] - assert db_session_with_containers.query(Document).filter(Document.id.in_(remaining_docs)).count() == 2 - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(remaining_docs)) - .count() - == 4 - ) + assert _count_documents(db_session_with_containers, Document.id.in_(remaining_docs)) == 2 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(remaining_docs)) == 4 # Note: This test successfully verifies partial document cleanup operations. # The database operations work correctly, isolating only the specified documents. @@ -612,19 +591,13 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all segments exist before cleanup - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 4 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 4 # Execute cleanup task clean_notion_document_task([document.id], dataset.id) # Verify all segments are deleted regardless of status - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Note: This test successfully verifies database operations. # IndexProcessor verification would require more sophisticated mocking. @@ -794,12 +767,9 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all data exists before cleanup + assert _count_documents(db_session_with_containers, Document.dataset_id == dataset.id) == num_documents assert ( - db_session_with_containers.query(Document).filter(Document.dataset_id == dataset.id).count() - == num_documents - ) - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() + _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == num_documents * num_segments_per_doc ) @@ -808,10 +778,7 @@ class TestCleanNotionDocumentTask: clean_notion_document_task(all_document_ids, dataset.id) # Verify all segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == 0 # Note: This test successfully verifies bulk document cleanup operations. # The database efficiently handles large-scale deletions. @@ -906,8 +873,8 @@ class TestCleanNotionDocumentTask: # Verify all data exists before cleanup # Note: There may be documents from previous tests, so we check for at least 3 - assert db_session_with_containers.query(Document).count() >= 3 - assert db_session_with_containers.query(DocumentSegment).count() >= 9 + assert db_session_with_containers.scalar(select(func.count()).select_from(Document)) >= 3 + assert db_session_with_containers.scalar(select(func.count()).select_from(DocumentSegment)) >= 9 # Clean up documents from only the first dataset target_dataset = datasets[0] @@ -918,22 +885,12 @@ class TestCleanNotionDocumentTask: clean_notion_document_task([target_document.id], target_dataset.id) # Verify only documents' segments from target dataset are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id == target_document.id) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == target_document.id) == 0 # Verify documents from other datasets remain intact remaining_docs = [doc.id for doc in all_documents[1:]] - assert db_session_with_containers.query(Document).filter(Document.id.in_(remaining_docs)).count() == 2 - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(remaining_docs)) - .count() - == 6 - ) + assert _count_documents(db_session_with_containers, Document.id.in_(remaining_docs)) == 2 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(remaining_docs)) == 6 # Note: This test successfully verifies multi-tenant isolation. # Only documents from the target dataset are affected, maintaining tenant separation. @@ -1028,11 +985,9 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.dataset_id == dataset.id).count() == len( - document_statuses - ) + assert _count_documents(db_session_with_containers, Document.dataset_id == dataset.id) == len(document_statuses) assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() + _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == len(document_statuses) * 2 ) @@ -1041,10 +996,7 @@ class TestCleanNotionDocumentTask: clean_notion_document_task(all_document_ids, dataset.id) # Verify all segments are deleted regardless of status - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == 0 # Note: This test successfully verifies cleanup of documents in various states. # All documents are deleted regardless of their indexing status. @@ -1142,20 +1094,14 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.id == document.id).count() == 1 - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 3 - ) + assert _count_documents(db_session_with_containers, Document.id == document.id) == 1 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 3 # Execute cleanup task clean_notion_document_task([document.id], dataset.id) # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Note: This test successfully verifies cleanup of documents with rich metadata. # The task properly handles complex document structures and metadata fields. diff --git a/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py index 9f8e37fc9e..9084667c31 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py @@ -11,6 +11,7 @@ from uuid import uuid4 import pytest from faker import Faker +from sqlalchemy import delete from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from extensions.ext_redis import redis_client @@ -28,12 +29,12 @@ class TestCreateSegmentToIndexTask: """Clean up database and Redis before each test to ensure isolation.""" # Clear all test data using fixture session - db_session_with_containers.query(DocumentSegment).delete() - db_session_with_containers.query(Document).delete() - db_session_with_containers.query(Dataset).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(DocumentSegment)) + db_session_with_containers.execute(delete(Document)) + db_session_with_containers.execute(delete(Dataset)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache diff --git a/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py b/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py index 13ea94348a..684097851b 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py @@ -6,6 +6,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import select from core.indexing_runner import DocumentIsPausedError from core.rag.index_processor.constant.index_type import IndexTechniqueType @@ -175,7 +176,7 @@ class TestDatasetIndexingTaskIntegration: def _query_document(self, db_session_with_containers, document_id: str) -> Document | None: """Return the latest persisted document state.""" - return db_session_with_containers.query(Document).where(Document.id == document_id).first() + return db_session_with_containers.scalar(select(Document).where(Document.id == document_id).limit(1)) def _assert_documents_parsing(self, db_session_with_containers, document_ids: Sequence[str]) -> None: """Assert all target documents are persisted in parsing status.""" diff --git a/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py index d457b59d58..48fec441c5 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py @@ -11,6 +11,7 @@ from unittest.mock import ANY, Mock, patch import pytest from faker import Faker +from sqlalchemy import select from core.rag.index_processor.constant.index_type import IndexStructureType from models.dataset import Dataset, Document, DocumentSegment @@ -221,7 +222,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor load method was called @@ -322,7 +325,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "update") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor clean and load methods were called @@ -431,7 +436,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify that no index processor load was called since no segments exist @@ -564,7 +571,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to error - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.ERROR assert "Test exception during indexing" in updated_document.error @@ -635,7 +644,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor was initialized with custom index type @@ -711,7 +722,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor was initialized with the document's index type @@ -815,7 +828,9 @@ class TestDealDatasetVectorIndexTask: # Verify all documents were processed for document in documents: - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor load was called multiple times @@ -917,7 +932,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify final document status - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED def test_deal_dataset_vector_index_task_with_disabled_documents( @@ -1027,12 +1044,14 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify only enabled document was processed - updated_enabled_document = db_session_with_containers.query(Document).filter_by(id=enabled_document.id).first() + updated_enabled_document = db_session_with_containers.scalar( + select(Document).where(Document.id == enabled_document.id).limit(1) + ) assert updated_enabled_document.indexing_status == IndexingStatus.COMPLETED # Verify disabled document status remains unchanged - updated_disabled_document = ( - db_session_with_containers.query(Document).filter_by(id=disabled_document.id).first() + updated_disabled_document = db_session_with_containers.scalar( + select(Document).where(Document.id == disabled_document.id).limit(1) ) assert updated_disabled_document.indexing_status == IndexingStatus.COMPLETED # Should not change @@ -1148,12 +1167,14 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify only active document was processed - updated_active_document = db_session_with_containers.query(Document).filter_by(id=active_document.id).first() + updated_active_document = db_session_with_containers.scalar( + select(Document).where(Document.id == active_document.id).limit(1) + ) assert updated_active_document.indexing_status == IndexingStatus.COMPLETED # Verify archived document status remains unchanged - updated_archived_document = ( - db_session_with_containers.query(Document).filter_by(id=archived_document.id).first() + updated_archived_document = db_session_with_containers.scalar( + select(Document).where(Document.id == archived_document.id).limit(1) ) assert updated_archived_document.indexing_status == IndexingStatus.COMPLETED # Should not change @@ -1269,14 +1290,14 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify only completed document was processed - updated_completed_document = ( - db_session_with_containers.query(Document).filter_by(id=completed_document.id).first() + updated_completed_document = db_session_with_containers.scalar( + select(Document).where(Document.id == completed_document.id).limit(1) ) assert updated_completed_document.indexing_status == IndexingStatus.COMPLETED # Verify incomplete document status remains unchanged - updated_incomplete_document = ( - db_session_with_containers.query(Document).filter_by(id=incomplete_document.id).first() + updated_incomplete_document = db_session_with_containers.scalar( + select(Document).where(Document.id == incomplete_document.id).limit(1) ) assert updated_incomplete_document.indexing_status == IndexingStatus.INDEXING # Should not change diff --git a/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py index 3e9a0c8f7f..6e03bd9351 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py @@ -9,6 +9,7 @@ The task is responsible for removing document segments from the search index whe from unittest.mock import MagicMock, patch from faker import Faker +from sqlalchemy import select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -471,9 +472,9 @@ class TestDisableSegmentsFromIndexTask: db_session_with_containers.refresh(segments[1]) # Check that segments are re-enabled after error - updated_segments = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.id.in_(segment_ids)).all() - ) + updated_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.id.in_(segment_ids)) + ).all() for segment in updated_segments: assert segment.enabled is True diff --git a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py index d4021143ef..b6e7e6e5c9 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py @@ -12,10 +12,11 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest +from sqlalchemy import delete, func, select, update from core.indexing_runner import DocumentIsPausedError, IndexingRunner from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType -from models import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models import Account, AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus from models.dataset import Dataset, Document, DocumentSegment from models.enums import DataSourceType, DocumentCreatedFrom, IndexingStatus, SegmentStatus from tasks.document_indexing_sync_task import document_indexing_sync_task @@ -30,12 +31,12 @@ class DocumentIndexingSyncTaskTestDataFactory: email=f"{uuid4()}@example.com", name=f"user-{uuid4()}", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.flush() - tenant = Tenant(name=f"tenant-{account.id}", status="normal") + tenant = Tenant(name=f"tenant-{account.id}", status=TenantStatus.NORMAL) db_session_with_containers.add(tenant) db_session_with_containers.flush() @@ -254,8 +255,8 @@ class TestDocumentIndexingSyncTask: """Test that task raises error when data_source_info is empty.""" # Arrange context = self._create_notion_sync_context(db_session_with_containers, data_source_info=None) - db_session_with_containers.query(Document).where(Document.id == context["document"].id).update( - {"data_source_info": None} + db_session_with_containers.execute( + update(Document).where(Document.id == context["document"].id).values(data_source_info=None) ) db_session_with_containers.commit() @@ -274,8 +275,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.ERROR @@ -294,13 +295,13 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) - remaining_segments = ( - db_session_with_containers.query(DocumentSegment) + remaining_segments = db_session_with_containers.scalar( + select(func.count()) + .select_from(DocumentSegment) .where(DocumentSegment.document_id == context["document"].id) - .count() ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.COMPLETED @@ -319,13 +320,13 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) - remaining_segments = ( - db_session_with_containers.query(DocumentSegment) + remaining_segments = db_session_with_containers.scalar( + select(func.count()) + .select_from(DocumentSegment) .where(DocumentSegment.document_id == context["document"].id) - .count() ) assert updated_document is not None @@ -354,7 +355,7 @@ class TestDocumentIndexingSyncTask: context = self._create_notion_sync_context(db_session_with_containers) def _delete_dataset_before_clean() -> str: - db_session_with_containers.query(Dataset).where(Dataset.id == context["dataset"].id).delete() + db_session_with_containers.execute(delete(Dataset).where(Dataset.id == context["dataset"].id)) db_session_with_containers.commit() return "2024-01-02T00:00:00Z" @@ -367,8 +368,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.PARSING @@ -386,13 +387,13 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) - remaining_segments = ( - db_session_with_containers.query(DocumentSegment) + remaining_segments = db_session_with_containers.scalar( + select(func.count()) + .select_from(DocumentSegment) .where(DocumentSegment.document_id == context["document"].id) - .count() ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.PARSING @@ -410,8 +411,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.PARSING @@ -428,8 +429,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.ERROR diff --git a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py index d94abf2b40..a9a8c0f30c 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py @@ -2,6 +2,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import func, select from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from models import Account, Tenant, TenantAccountJoin, TenantAccountRole @@ -123,13 +124,13 @@ class TestDocumentIndexingUpdateTask: db_session_with_containers.expire_all() # Assert document status updated before reindex - updated = db_session_with_containers.query(Document).where(Document.id == document.id).first() + updated = db_session_with_containers.scalar(select(Document).where(Document.id == document.id).limit(1)) assert updated.indexing_status == IndexingStatus.PARSING assert updated.processing_started_at is not None # Segments should be deleted - remaining = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.document_id == document.id).count() + remaining = db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document.id) ) assert remaining == 0 @@ -167,8 +168,8 @@ class TestDocumentIndexingUpdateTask: mock_external_dependencies["runner_instance"].run.assert_called_once() # Segments should remain (since clean failed before DB delete) - remaining = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.document_id == document.id).count() + remaining = db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document.id) ) assert remaining > 0 diff --git a/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py b/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py index 6a8e186958..39c58987fd 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py @@ -2,6 +2,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import select from core.indexing_runner import DocumentIsPausedError from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -317,7 +318,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were updated to parsing status # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -362,14 +363,14 @@ class TestDuplicateDocumentIndexingTasks: # Verify segments were deleted from database # Re-query segments from database using captured IDs to avoid stale ORM instances for seg_id in segment_ids: - deleted_segment = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.id == seg_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == seg_id).limit(1) ) assert deleted_segment is None # Verify documents were updated to parsing status for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -438,7 +439,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify only existing documents were updated # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in existing_document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -485,7 +486,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were still updated to parsing status before the exception # Re-query documents from database since _duplicate_document_indexing_task close the session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -543,7 +544,7 @@ class TestDuplicateDocumentIndexingTasks: # Assert: Verify error handling # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.ERROR assert updated_document.error is not None assert "batch upload" in updated_document.error.lower() @@ -585,7 +586,7 @@ class TestDuplicateDocumentIndexingTasks: # Assert: Verify error handling # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.ERROR assert updated_document.error is not None assert "limit" in updated_document.error.lower() @@ -649,7 +650,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were processed for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING @patch("tasks.duplicate_document_indexing_task.TenantIsolatedTaskQueue", autospec=True) @@ -692,7 +693,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were processed for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING @patch("tasks.duplicate_document_indexing_task.TenantIsolatedTaskQueue", autospec=True) @@ -736,7 +737,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were processed for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING @patch("tasks.duplicate_document_indexing_task.TenantIsolatedTaskQueue", autospec=True) @@ -851,7 +852,7 @@ class TestDuplicateDocumentIndexingTasks: # Assert for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.is_paused is True assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.display_status == "paused" diff --git a/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py b/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py index c0ddc27286..8343711998 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py @@ -14,6 +14,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete from libs.email_i18n import EmailType from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole @@ -41,9 +42,9 @@ class TestSendEmailCodeLoginMailTask: from extensions.ext_redis import redis_client # Clear all test data - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache diff --git a/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py b/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py index a16f3ff773..328bdbf055 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py @@ -3,9 +3,7 @@ from datetime import UTC, datetime from unittest.mock import patch import pytest -from graphon.enums import WorkflowExecutionStatus -from graphon.nodes.human_input.entities import HumanInputNodeData -from graphon.runtime import GraphRuntimeState, VariablePool +from sqlalchemy import delete from configs import dify_config from core.app.app_config.entities import WorkflowUIBasedAppConfig @@ -20,6 +18,9 @@ from core.workflow.human_input_compat import ( MemberRecipient, ) from extensions.ext_storage import storage +from graphon.enums import WorkflowExecutionStatus +from graphon.nodes.human_input.entities import HumanInputNodeData +from graphon.runtime import GraphRuntimeState, VariablePool from models.account import Account, AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.human_input import HumanInputDelivery, HumanInputForm, HumanInputFormRecipient @@ -30,14 +31,14 @@ from tasks.mail_human_input_delivery_task import dispatch_human_input_email_task @pytest.fixture(autouse=True) def cleanup_database(db_session_with_containers): - db_session_with_containers.query(HumanInputFormRecipient).delete() - db_session_with_containers.query(HumanInputDelivery).delete() - db_session_with_containers.query(HumanInputForm).delete() - db_session_with_containers.query(WorkflowPause).delete() - db_session_with_containers.query(WorkflowRun).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(HumanInputFormRecipient)) + db_session_with_containers.execute(delete(HumanInputDelivery)) + db_session_with_containers.execute(delete(HumanInputForm)) + db_session_with_containers.execute(delete(WorkflowPause)) + db_session_with_containers.execute(delete(WorkflowRun)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py b/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py index 212fbd26cd..d34828c4b1 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py @@ -17,6 +17,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete, select from extensions.ext_redis import redis_client from libs.email_i18n import EmailType @@ -44,9 +45,9 @@ class TestMailInviteMemberTask: def cleanup_database(self, db_session_with_containers): """Clean up database before each test to ensure isolation.""" # Clear all test data - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache @@ -491,10 +492,10 @@ class TestMailInviteMemberTask: assert tenant.name is not None # Verify tenant relationship exists - tenant_join = ( - db_session_with_containers.query(TenantAccountJoin) - .filter_by(tenant_id=tenant.id, account_id=pending_account.id) - .first() + tenant_join = db_session_with_containers.scalar( + select(TenantAccountJoin) + .where(TenantAccountJoin.tenant_id == tenant.id, TenantAccountJoin.account_id == pending_account.id) + .limit(1) ) assert tenant_join is not None assert tenant_join.role == TenantAccountRole.NORMAL diff --git a/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py b/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py index 96cf9cebf5..b43b622870 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py @@ -2,11 +2,12 @@ import uuid from unittest.mock import ANY, call, patch import pytest -from graphon.variables.segments import StringSegment -from graphon.variables.types import SegmentType +from sqlalchemy import delete, func, select from core.db.session_factory import session_factory from extensions.storage.storage_type import StorageType +from graphon.variables.segments import StringSegment +from graphon.variables.types import SegmentType from libs.datetime_utils import naive_utc_now from models import Tenant from models.enums import CreatorUserRole @@ -20,11 +21,11 @@ from tasks.remove_app_and_related_data_task import ( @pytest.fixture(autouse=True) def cleanup_database(db_session_with_containers): - db_session_with_containers.query(WorkflowDraftVariable).delete() - db_session_with_containers.query(WorkflowDraftVariableFile).delete() - db_session_with_containers.query(UploadFile).delete() - db_session_with_containers.query(App).delete() - db_session_with_containers.query(Tenant).delete() + db_session_with_containers.execute(delete(WorkflowDraftVariable)) + db_session_with_containers.execute(delete(WorkflowDraftVariableFile)) + db_session_with_containers.execute(delete(UploadFile)) + db_session_with_containers.execute(delete(App)) + db_session_with_containers.execute(delete(Tenant)) db_session_with_containers.commit() @@ -127,21 +128,21 @@ class TestDeleteDraftVariablesBatch: result = delete_draft_variables_batch(app1.id, batch_size=100) assert result == 150 - app1_remaining = db_session_with_containers.query(WorkflowDraftVariable).where( - WorkflowDraftVariable.app_id == app1.id + app1_remaining_count = db_session_with_containers.scalar( + select(func.count()).select_from(WorkflowDraftVariable).where(WorkflowDraftVariable.app_id == app1.id) ) - app2_remaining = db_session_with_containers.query(WorkflowDraftVariable).where( - WorkflowDraftVariable.app_id == app2.id + app2_remaining_count = db_session_with_containers.scalar( + select(func.count()).select_from(WorkflowDraftVariable).where(WorkflowDraftVariable.app_id == app2.id) ) - assert app1_remaining.count() == 0 - assert app2_remaining.count() == 100 + assert app1_remaining_count == 0 + assert app2_remaining_count == 100 def test_delete_draft_variables_batch_empty_result(self, db_session_with_containers): """Test deletion when no draft variables exist for the app.""" result = delete_draft_variables_batch(str(uuid.uuid4()), 1000) assert result == 0 - assert db_session_with_containers.query(WorkflowDraftVariable).count() == 0 + assert db_session_with_containers.scalar(select(func.count()).select_from(WorkflowDraftVariable)) == 0 @patch("tasks.remove_app_and_related_data_task._delete_draft_variable_offload_data") @patch("tasks.remove_app_and_related_data_task.logger") @@ -190,12 +191,16 @@ class TestDeleteDraftVariableOffloadData: expected_storage_calls = [call(storage_key) for storage_key in upload_file_keys] mock_storage.delete.assert_has_calls(expected_storage_calls, any_order=True) - remaining_var_files = db_session_with_containers.query(WorkflowDraftVariableFile).where( - WorkflowDraftVariableFile.id.in_(file_ids) + remaining_var_files_count = db_session_with_containers.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_(file_ids)) ) - remaining_upload_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)) - assert remaining_var_files.count() == 0 - assert remaining_upload_files.count() == 0 + remaining_upload_files_count = db_session_with_containers.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ) + assert remaining_var_files_count == 0 + assert remaining_upload_files_count == 0 @patch("extensions.ext_storage.storage") @patch("tasks.remove_app_and_related_data_task.logging") @@ -217,9 +222,13 @@ class TestDeleteDraftVariableOffloadData: assert result == 1 mock_logging.exception.assert_called_once_with("Failed to delete storage object %s", storage_keys[0]) - remaining_var_files = db_session_with_containers.query(WorkflowDraftVariableFile).where( - WorkflowDraftVariableFile.id.in_(file_ids) + remaining_var_files_count = db_session_with_containers.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_(file_ids)) ) - remaining_upload_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)) - assert remaining_var_files.count() == 0 - assert remaining_upload_files.count() == 0 + remaining_upload_files_count = db_session_with_containers.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ) + assert remaining_var_files_count == 0 + assert remaining_upload_files_count == 0 diff --git a/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py b/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py index 159ab51304..b00d827e37 100644 --- a/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py +++ b/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py @@ -24,16 +24,16 @@ from dataclasses import dataclass from datetime import timedelta import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus -from sqlalchemy import delete, select +from sqlalchemy import delete, func, select from sqlalchemy.orm import Session, selectinload, sessionmaker from extensions.ext_storage import storage +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus from libs.datetime_utils import naive_utc_now from models import Account from models import WorkflowPause as WorkflowPauseModel -from models.account import Tenant, TenantAccountJoin, TenantAccountRole +from models.account import AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus from models.model import UploadFile from models.workflow import Workflow, WorkflowRun from repositories.sqlalchemy_api_workflow_run_repository import ( @@ -181,7 +181,7 @@ class TestWorkflowPauseIntegration: tenant = Tenant( name="Test Tenant", - status="normal", + status=TenantStatus.NORMAL, ) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -190,7 +190,7 @@ class TestWorkflowPauseIntegration: email="test@example.com", name="Test User", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() @@ -679,9 +679,12 @@ class TestWorkflowPauseIntegration: # Verify only 3 were deleted remaining_count = ( - self.session.query(WorkflowPauseModel) - .filter(WorkflowPauseModel.id.in_([pe.id for pe in pause_entities])) - .count() + self.session.scalar( + select(func.count(WorkflowPauseModel.id)).where( + WorkflowPauseModel.id.in_([pe.id for pe in pause_entities]) + ) + ) + or 0 ) assert remaining_count == 2 @@ -693,7 +696,7 @@ class TestWorkflowPauseIntegration: tenant2 = Tenant( name="Test Tenant 2", - status="normal", + status=TenantStatus.NORMAL, ) self.session.add(tenant2) self.session.commit() @@ -702,7 +705,7 @@ class TestWorkflowPauseIntegration: email="test2@example.com", name="Test User 2", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) self.session.add(account2) self.session.commit() diff --git a/api/tests/test_containers_integration_tests/trigger/conftest.py b/api/tests/test_containers_integration_tests/trigger/conftest.py index e3832fb2ef..272bee9630 100644 --- a/api/tests/test_containers_integration_tests/trigger/conftest.py +++ b/api/tests/test_containers_integration_tests/trigger/conftest.py @@ -11,6 +11,7 @@ from collections.abc import Generator from typing import Any import pytest +from sqlalchemy import delete from sqlalchemy.orm import Session from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole @@ -40,9 +41,9 @@ def tenant_and_account(db_session_with_containers: Session) -> Generator[tuple[T yield tenant, account # Cleanup - db_session_with_containers.query(TenantAccountJoin).filter_by(tenant_id=tenant.id).delete() - db_session_with_containers.query(Account).filter_by(id=account.id).delete() - db_session_with_containers.query(Tenant).filter_by(id=tenant.id).delete() + db_session_with_containers.execute(delete(TenantAccountJoin).where(TenantAccountJoin.tenant_id == tenant.id)) + db_session_with_containers.execute(delete(Account).where(Account.id == account.id)) + db_session_with_containers.execute(delete(Tenant).where(Tenant.id == tenant.id)) db_session_with_containers.commit() @@ -93,14 +94,14 @@ def app_model( ) from models.workflow import Workflow - db_session_with_containers.query(WorkflowTriggerLog).filter_by(app_id=app.id).delete() - db_session_with_containers.query(WorkflowSchedulePlan).filter_by(app_id=app.id).delete() - db_session_with_containers.query(WorkflowWebhookTrigger).filter_by(app_id=app.id).delete() - db_session_with_containers.query(WorkflowPluginTrigger).filter_by(app_id=app.id).delete() - db_session_with_containers.query(AppTrigger).filter_by(app_id=app.id).delete() - db_session_with_containers.query(TriggerSubscription).filter_by(tenant_id=tenant.id).delete() - db_session_with_containers.query(Workflow).filter_by(app_id=app.id).delete() - db_session_with_containers.query(App).filter_by(id=app.id).delete() + db_session_with_containers.execute(delete(WorkflowTriggerLog).where(WorkflowTriggerLog.app_id == app.id)) + db_session_with_containers.execute(delete(WorkflowSchedulePlan).where(WorkflowSchedulePlan.app_id == app.id)) + db_session_with_containers.execute(delete(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.app_id == app.id)) + db_session_with_containers.execute(delete(WorkflowPluginTrigger).where(WorkflowPluginTrigger.app_id == app.id)) + db_session_with_containers.execute(delete(AppTrigger).where(AppTrigger.app_id == app.id)) + db_session_with_containers.execute(delete(TriggerSubscription).where(TriggerSubscription.tenant_id == tenant.id)) + db_session_with_containers.execute(delete(Workflow).where(Workflow.app_id == app.id)) + db_session_with_containers.execute(delete(App).where(App.id == app.id)) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py b/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py index 3514447240..9c20118e27 100644 --- a/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py +++ b/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py @@ -10,7 +10,7 @@ from typing import Any import pytest from flask import Flask, Response from flask.testing import FlaskClient -from graphon.enums import BuiltinNodeTypes +from sqlalchemy import select from sqlalchemy.orm import Session from configs import dify_config @@ -24,6 +24,7 @@ from core.trigger.debug import event_selectors from core.trigger.debug.event_bus import TriggerDebugEventBus from core.trigger.debug.event_selectors import PluginTriggerDebugEventPoller, WebhookTriggerDebugEventPoller from core.trigger.debug.events import PluginTriggerDebugEvent, build_plugin_pool_key +from graphon.enums import BuiltinNodeTypes from libs.datetime_utils import naive_utc_now from models.account import Account, Tenant from models.enums import AppTriggerStatus, AppTriggerType, CreatorUserRole, WorkflowTriggerStatus @@ -227,7 +228,9 @@ def test_webhook_trigger_creates_trigger_log( assert response.status_code == 200 db_session_with_containers.expire_all() - logs = db_session_with_containers.query(WorkflowTriggerLog).filter_by(app_id=app_model.id).all() + logs = db_session_with_containers.scalars( + select(WorkflowTriggerLog).where(WorkflowTriggerLog.app_id == app_model.id) + ).all() assert logs, "Webhook trigger should create trigger log" @@ -611,7 +614,9 @@ def test_schedule_trigger_creates_trigger_log( # Verify WorkflowTriggerLog was created db_session_with_containers.expire_all() - logs = db_session_with_containers.query(WorkflowTriggerLog).filter_by(app_id=app_model.id).all() + logs = db_session_with_containers.scalars( + select(WorkflowTriggerLog).where(WorkflowTriggerLog.app_id == app_model.id) + ).all() assert logs, "Schedule trigger should create WorkflowTriggerLog" assert logs[0].trigger_type == AppTriggerType.TRIGGER_SCHEDULE assert logs[0].root_node_id == schedule_node_id @@ -786,11 +791,12 @@ def test_plugin_trigger_full_chain_with_db_verification( # Verify database records exist db_session_with_containers.expire_all() - plugin_triggers = ( - db_session_with_containers.query(WorkflowPluginTrigger) - .filter_by(app_id=app_model.id, node_id=plugin_node_id) - .all() - ) + plugin_triggers = db_session_with_containers.scalars( + select(WorkflowPluginTrigger).where( + WorkflowPluginTrigger.app_id == app_model.id, + WorkflowPluginTrigger.node_id == plugin_node_id, + ) + ).all() assert plugin_triggers, "WorkflowPluginTrigger record should exist" assert plugin_triggers[0].provider_id == provider_id assert plugin_triggers[0].event_name == "test_event" diff --git a/api/tests/unit_tests/configs/test_dify_config.py b/api/tests/unit_tests/configs/test_dify_config.py index d6933e2180..bad246a4bb 100644 --- a/api/tests/unit_tests/configs/test_dify_config.py +++ b/api/tests/unit_tests/configs/test_dify_config.py @@ -145,7 +145,7 @@ def test_inner_api_config_exist(monkeypatch: pytest.MonkeyPatch): def test_db_extras_options_merging(monkeypatch: pytest.MonkeyPatch): - """Test that DB_EXTRAS options are properly merged with default timezone setting""" + """Test that DB_EXTRAS options are merged with the default timezone startup option.""" # Set environment variables monkeypatch.setenv("DB_TYPE", "postgresql") monkeypatch.setenv("DB_USERNAME", "postgres") @@ -158,15 +158,28 @@ def test_db_extras_options_merging(monkeypatch: pytest.MonkeyPatch): # Create config config = DifyConfig() - # Get engine options - engine_options = config.SQLALCHEMY_ENGINE_OPTIONS - - # Verify options contains both search_path and timezone - options = engine_options["connect_args"]["options"] + options = config.SQLALCHEMY_ENGINE_OPTIONS["connect_args"]["options"] assert "search_path=myschema" in options assert "timezone=UTC" in options +def test_db_session_timezone_override_can_disable_app_level_timezone_injection(monkeypatch: pytest.MonkeyPatch): + monkeypatch.setenv("DB_TYPE", "postgresql") + monkeypatch.setenv("DB_USERNAME", "postgres") + monkeypatch.setenv("DB_PASSWORD", "postgres") + monkeypatch.setenv("DB_HOST", "localhost") + monkeypatch.setenv("DB_PORT", "5432") + monkeypatch.setenv("DB_DATABASE", "dify") + monkeypatch.setenv("DB_EXTRAS", "options=-c search_path=myschema") + monkeypatch.setenv("DB_SESSION_TIMEZONE_OVERRIDE", "") + + config = DifyConfig() + + assert config.SQLALCHEMY_ENGINE_OPTIONS["connect_args"] == { + "options": "-c search_path=myschema", + } + + def test_pubsub_redis_url_default(monkeypatch: pytest.MonkeyPatch): os.environ.clear() @@ -223,6 +236,41 @@ def test_pubsub_redis_url_required_when_default_unavailable(monkeypatch: pytest. _ = DifyConfig().normalized_pubsub_redis_url +def test_dify_config_exposes_redis_key_prefix_default(monkeypatch: pytest.MonkeyPatch): + os.environ.clear() + + monkeypatch.setenv("CONSOLE_API_URL", "https://example.com") + monkeypatch.setenv("CONSOLE_WEB_URL", "https://example.com") + monkeypatch.setenv("DB_TYPE", "postgresql") + monkeypatch.setenv("DB_USERNAME", "postgres") + monkeypatch.setenv("DB_PASSWORD", "postgres") + monkeypatch.setenv("DB_HOST", "localhost") + monkeypatch.setenv("DB_PORT", "5432") + monkeypatch.setenv("DB_DATABASE", "dify") + + config = DifyConfig(_env_file=None) + + assert config.REDIS_KEY_PREFIX == "" + + +def test_dify_config_reads_redis_key_prefix_from_env(monkeypatch: pytest.MonkeyPatch): + os.environ.clear() + + monkeypatch.setenv("CONSOLE_API_URL", "https://example.com") + monkeypatch.setenv("CONSOLE_WEB_URL", "https://example.com") + monkeypatch.setenv("DB_TYPE", "postgresql") + monkeypatch.setenv("DB_USERNAME", "postgres") + monkeypatch.setenv("DB_PASSWORD", "postgres") + monkeypatch.setenv("DB_HOST", "localhost") + monkeypatch.setenv("DB_PORT", "5432") + monkeypatch.setenv("DB_DATABASE", "dify") + monkeypatch.setenv("REDIS_KEY_PREFIX", "enterprise-a") + + config = DifyConfig(_env_file=None) + + assert config.REDIS_KEY_PREFIX == "enterprise-a" + + @pytest.mark.parametrize( ("broker_url", "expected_host", "expected_port", "expected_username", "expected_password", "expected_db"), [ diff --git a/api/tests/unit_tests/controllers/console/app/test_app_import_api.py b/api/tests/unit_tests/controllers/console/app/test_app_import_api.py new file mode 100644 index 0000000000..9c4678aed3 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_app_import_api.py @@ -0,0 +1,139 @@ +"""Unit tests for console app import endpoints.""" + +from __future__ import annotations + +from types import SimpleNamespace +from unittest.mock import MagicMock + +import pytest + +from controllers.console.app import app_import as app_import_module +from services.app_dsl_service import ImportStatus + + +def _unwrap(func): + bound_self = getattr(func, "__self__", None) + while hasattr(func, "__wrapped__"): + func = func.__wrapped__ + if bound_self is not None: + return func.__get__(bound_self, bound_self.__class__) + return func + + +class _Result: + def __init__(self, status: ImportStatus, app_id: str | None = "app-1"): + self.status = status + self.app_id = app_id + + def model_dump(self, mode: str = "json"): + return {"status": self.status, "app_id": self.app_id} + + +def _install_features(monkeypatch: pytest.MonkeyPatch, enabled: bool) -> None: + features = SimpleNamespace(webapp_auth=SimpleNamespace(enabled=enabled)) + monkeypatch.setattr(app_import_module.FeatureService, "get_system_features", lambda: features) + + +def _mock_session(monkeypatch: pytest.MonkeyPatch) -> MagicMock: + fake_session = MagicMock() + fake_session.__enter__.return_value = fake_session + fake_session.__exit__.return_value = None + monkeypatch.setattr(app_import_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr(app_import_module, "Session", lambda *_args, **_kwargs: fake_session) + return fake_session + + +class TestAppImportApi: + @pytest.fixture + def api(self): + return app_import_module.AppImportApi() + + def test_import_post_returns_failed_status_and_rolls_back(self, api, app, monkeypatch: pytest.MonkeyPatch) -> None: + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.FAILED, app_id=None), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + session.rollback.assert_called_once_with() + session.commit.assert_not_called() + assert status == 400 + assert response["status"] == ImportStatus.FAILED + + def test_import_post_returns_pending_status_and_commits(self, api, app, monkeypatch: pytest.MonkeyPatch) -> None: + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.PENDING), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + session.commit.assert_called_once_with() + session.rollback.assert_not_called() + assert status == 202 + assert response["status"] == ImportStatus.PENDING + + def test_import_post_updates_webapp_auth_when_enabled(self, api, app, monkeypatch: pytest.MonkeyPatch) -> None: + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=True) + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.COMPLETED, app_id="app-123"), + ) + update_access = MagicMock() + monkeypatch.setattr(app_import_module.EnterpriseService.WebAppAuth, "update_app_access_mode", update_access) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + session.commit.assert_called_once_with() + session.rollback.assert_not_called() + update_access.assert_called_once_with("app-123", "private") + assert status == 200 + assert response["status"] == ImportStatus.COMPLETED + + +class TestAppImportConfirmApi: + @pytest.fixture + def api(self): + return app_import_module.AppImportConfirmApi() + + def test_import_confirm_returns_failed_status_and_rolls_back( + self, api, app, monkeypatch: pytest.MonkeyPatch + ) -> None: + method = _unwrap(api.post) + + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "confirm_import", + lambda *_args, **_kwargs: _Result(ImportStatus.FAILED), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports/import-1/confirm", method="POST"): + response, status = method(import_id="import-1") + + session.rollback.assert_called_once_with() + session.commit.assert_not_called() + assert status == 400 + assert response["status"] == ImportStatus.FAILED diff --git a/api/tests/unit_tests/controllers/console/app/test_app_response_models.py b/api/tests/unit_tests/controllers/console/app/test_app_response_models.py index 2ac3dc037d..35d07a987d 100644 --- a/api/tests/unit_tests/controllers/console/app/test_app_response_models.py +++ b/api/tests/unit_tests/controllers/console/app/test_app_response_models.py @@ -138,12 +138,15 @@ def app_models(app_module): def patch_signed_url(monkeypatch, app_module): """Ensure icon URL generation uses a deterministic helper for tests.""" - def _fake_signed_url(key: str | None) -> str | None: - if not key: + def _fake_build_icon_url(_icon_type, key: str | None) -> str | None: + if key is None: + return None + icon_type = str(_icon_type).lower() + if icon_type != "image": return None return f"signed:{key}" - monkeypatch.setattr(app_module.file_helpers, "get_signed_file_url", _fake_signed_url) + monkeypatch.setattr(app_module, "build_icon_url", _fake_build_icon_url) def _ts(hour: int = 12) -> datetime: diff --git a/api/tests/unit_tests/controllers/console/app/test_audio.py b/api/tests/unit_tests/controllers/console/app/test_audio.py index c52bc02420..2d218dac7e 100644 --- a/api/tests/unit_tests/controllers/console/app/test_audio.py +++ b/api/tests/unit_tests/controllers/console/app/test_audio.py @@ -4,7 +4,6 @@ import io from types import SimpleNamespace import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.datastructures import FileStorage from werkzeug.exceptions import InternalServerError @@ -21,6 +20,7 @@ from controllers.console.app.error import ( UnsupportedAudioTypeError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.audio_service import AudioService from services.errors.app_model_config import AppModelConfigBrokenError from services.errors.audio import ( diff --git a/api/tests/unit_tests/controllers/console/app/test_conversation_api.py b/api/tests/unit_tests/controllers/console/app/test_conversation_api.py index 11b3b3470d..24b7e39f73 100644 --- a/api/tests/unit_tests/controllers/console/app/test_conversation_api.py +++ b/api/tests/unit_tests/controllers/console/app/test_conversation_api.py @@ -33,12 +33,17 @@ def test_completion_conversation_list_returns_paginated_result(app, monkeypatch: monkeypatch.setattr(conversation_module, "parse_time_range", lambda *_args, **_kwargs: (None, None)) paginate_result = MagicMock() + paginate_result.page = 1 + paginate_result.per_page = 20 + paginate_result.total = 0 + paginate_result.has_next = False + paginate_result.items = [] monkeypatch.setattr(conversation_module.db, "paginate", lambda *_args, **_kwargs: paginate_result) with app.test_request_context("/console/api/apps/app-1/completion-conversations", method="GET"): response = method(app_model=SimpleNamespace(id="app-1")) - assert response is paginate_result + assert response == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} def test_completion_conversation_list_invalid_time_range(app, monkeypatch: pytest.MonkeyPatch) -> None: @@ -71,12 +76,17 @@ def test_chat_conversation_list_advanced_chat_calls_paginate(app, monkeypatch: p monkeypatch.setattr(conversation_module, "parse_time_range", lambda *_args, **_kwargs: (None, None)) paginate_result = MagicMock() + paginate_result.page = 1 + paginate_result.per_page = 20 + paginate_result.total = 0 + paginate_result.has_next = False + paginate_result.items = [] monkeypatch.setattr(conversation_module.db, "paginate", lambda *_args, **_kwargs: paginate_result) with app.test_request_context("/console/api/apps/app-1/chat-conversations", method="GET"): response = method(app_model=SimpleNamespace(id="app-1", mode=AppMode.ADVANCED_CHAT)) - assert response is paginate_result + assert response == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} def test_get_conversation_updates_read_at(monkeypatch: pytest.MonkeyPatch) -> None: diff --git a/api/tests/unit_tests/controllers/console/app/test_conversation_read_timestamp.py b/api/tests/unit_tests/controllers/console/app/test_conversation_read_timestamp.py deleted file mode 100644 index f588ab261d..0000000000 --- a/api/tests/unit_tests/controllers/console/app/test_conversation_read_timestamp.py +++ /dev/null @@ -1,42 +0,0 @@ -from datetime import datetime -from types import SimpleNamespace -from unittest.mock import MagicMock, patch - -from controllers.console.app.conversation import _get_conversation - - -def test_get_conversation_mark_read_keeps_updated_at_unchanged(): - app_model = SimpleNamespace(id="app-id") - account = SimpleNamespace(id="account-id") - conversation = MagicMock() - conversation.id = "conversation-id" - - with ( - patch( - "controllers.console.app.conversation.current_account_with_tenant", - return_value=(account, None), - autospec=True, - ), - patch( - "controllers.console.app.conversation.naive_utc_now", - return_value=datetime(2026, 2, 9, 0, 0, 0), - autospec=True, - ), - patch("controllers.console.app.conversation.db.session", autospec=True) as mock_session, - ): - mock_session.scalar.return_value = conversation - - _get_conversation(app_model, "conversation-id") - - statement = mock_session.execute.call_args[0][0] - compiled = statement.compile() - sql_text = str(compiled).lower() - compact_sql_text = sql_text.replace(" ", "") - params = compiled.params - - assert "updated_at=current_timestamp" not in compact_sql_text - assert "updated_at=conversations.updated_at" in compact_sql_text - assert "read_at=:read_at" in compact_sql_text - assert "read_account_id=:read_account_id" in compact_sql_text - assert params["read_at"] == datetime(2026, 2, 9, 0, 0, 0) - assert params["read_account_id"] == "account-id" diff --git a/api/tests/unit_tests/controllers/console/app/test_conversation_variables_api.py b/api/tests/unit_tests/controllers/console/app/test_conversation_variables_api.py new file mode 100644 index 0000000000..1a412aff29 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_conversation_variables_api.py @@ -0,0 +1,108 @@ +from __future__ import annotations + +from contextlib import nullcontext +from datetime import UTC, datetime +from types import SimpleNamespace + +import pytest +from pydantic import ValidationError + +from controllers.console.app import conversation_variables as conversation_variables_module +from graphon.variables.types import SegmentType + + +def _unwrap(func): + bound_self = getattr(func, "__self__", None) + while hasattr(func, "__wrapped__"): + func = func.__wrapped__ + if bound_self is not None: + return func.__get__(bound_self, bound_self.__class__) + return func + + +def test_get_conversation_variables_returns_paginated_response(app, monkeypatch: pytest.MonkeyPatch) -> None: + api = conversation_variables_module.ConversationVariablesApi() + method = _unwrap(api.get) + + created_at = datetime(2026, 1, 1, tzinfo=UTC) + updated_at = datetime(2026, 1, 2, tzinfo=UTC) + row = SimpleNamespace( + created_at=created_at, + updated_at=updated_at, + to_variable=lambda: SimpleNamespace( + model_dump=lambda: { + "id": "var-1", + "name": "my_var", + "value_type": "string", + "value": "value", + "description": "desc", + } + ), + ) + session = SimpleNamespace(scalars=lambda _stmt: SimpleNamespace(all=lambda: [row])) + monkeypatch.setattr(conversation_variables_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr( + conversation_variables_module, + "sessionmaker", + lambda *_args, **_kwargs: SimpleNamespace(begin=lambda: nullcontext(session)), + ) + + with app.test_request_context( + "/console/api/apps/app-1/conversation-variables", + method="GET", + query_string={"conversation_id": "conv-1"}, + ): + response = method(app_model=SimpleNamespace(id="app-1")) + + assert response["page"] == 1 + assert response["limit"] == 100 + assert response["total"] == 1 + assert response["has_more"] is False + assert response["data"][0]["id"] == "var-1" + assert response["data"][0]["created_at"] == int(created_at.timestamp()) + assert response["data"][0]["updated_at"] == int(updated_at.timestamp()) + + +def test_get_conversation_variables_normalizes_value_type_and_value(app, monkeypatch: pytest.MonkeyPatch) -> None: + api = conversation_variables_module.ConversationVariablesApi() + method = _unwrap(api.get) + + row = SimpleNamespace( + created_at=None, + updated_at=None, + to_variable=lambda: SimpleNamespace( + model_dump=lambda: { + "id": "var-2", + "name": "my_var_2", + "value_type": SegmentType.INTEGER, + "value": 42, + "description": None, + } + ), + ) + session = SimpleNamespace(scalars=lambda _stmt: SimpleNamespace(all=lambda: [row])) + monkeypatch.setattr(conversation_variables_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr( + conversation_variables_module, + "sessionmaker", + lambda *_args, **_kwargs: SimpleNamespace(begin=lambda: nullcontext(session)), + ) + + with app.test_request_context( + "/console/api/apps/app-1/conversation-variables", + method="GET", + query_string={"conversation_id": "conv-1"}, + ): + response = method(app_model=SimpleNamespace(id="app-1")) + + assert response["data"][0]["value_type"] == "number" + assert response["data"][0]["value"] == "42" + + +def test_get_conversation_variables_requires_conversation_id(app) -> None: + api = conversation_variables_module.ConversationVariablesApi() + method = _unwrap(api.get) + + with app.test_request_context("/console/api/apps/app-1/conversation-variables", method="GET"): + with pytest.raises(ValidationError): + method(app_model=SimpleNamespace(id="app-1")) diff --git a/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py b/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py new file mode 100644 index 0000000000..1af15d8dc6 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py @@ -0,0 +1,138 @@ +import datetime +from types import SimpleNamespace +from unittest.mock import PropertyMock, patch + +from flask import Flask + +from controllers.console import console_ns +from controllers.console.app.mcp_server import AppMCPServerController, AppMCPServerResponse + + +def unwrap(func): + while hasattr(func, "__wrapped__"): + func = func.__wrapped__ + return func + + +class _ValidatedResponse: + def __init__(self, payload): + self._payload = payload + + def model_dump(self, mode="json"): + return self._payload + + +class TestAppMCPServerResponse: + def test_parameters_json_string_parsed(self): + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": '{"key": "value"}', + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.parameters == {"key": "value"} + + def test_parameters_invalid_json_returns_original(self): + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": "not-valid-json", + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.parameters == "not-valid-json" + + def test_parameters_dict_passthrough(self): + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": {"already": "parsed"}, + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.parameters == {"already": "parsed"} + + def test_parameters_json_array_parsed(self): + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": '["a", "b"]', + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.parameters == ["a", "b"] + + def test_timestamps_normalized(self): + dt = datetime.datetime(2024, 1, 1, 0, 0, 0, tzinfo=datetime.UTC) + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": {}, + "created_at": dt, + "updated_at": dt, + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.created_at == int(dt.timestamp()) + assert resp.updated_at == int(dt.timestamp()) + + def test_timestamps_none(self): + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": {}, + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.created_at is None + assert resp.updated_at is None + + +class TestAppMCPServerController: + def test_get_returns_empty_dict_when_server_missing(self): + api = AppMCPServerController() + method = unwrap(api.get) + + with patch("controllers.console.app.mcp_server.db.session.scalar", return_value=None): + response = method(api, app_model=SimpleNamespace(id="app-1")) + + assert response == {} + + def test_post_returns_201(self): + api = AppMCPServerController() + method = unwrap(api.post) + payload = {"parameters": {"timeout": 30}} + app = Flask(__name__) + app.config["TESTING"] = True + + with ( + app.test_request_context("/", json=payload), + patch.object(type(console_ns), "payload", new_callable=PropertyMock, return_value=payload), + patch("controllers.console.app.mcp_server.current_account_with_tenant", return_value=(None, "tenant-1")), + patch("controllers.console.app.mcp_server.db.session.add"), + patch("controllers.console.app.mcp_server.db.session.commit"), + patch("controllers.console.app.mcp_server.AppMCPServer.generate_server_code", return_value="server-code"), + patch( + "controllers.console.app.mcp_server.AppMCPServerResponse.model_validate", + return_value=_ValidatedResponse({"id": "server-1"}), + ), + ): + response, status_code = method( + api, app_model=SimpleNamespace(id="app-1", name="Demo App", description="App description") + ) + + assert response == {"id": "server-1"} + assert status_code == 201 diff --git a/api/tests/unit_tests/controllers/console/app/test_message_api.py b/api/tests/unit_tests/controllers/console/app/test_message_api.py index a76e958829..c984dbef5d 100644 --- a/api/tests/unit_tests/controllers/console/app/test_message_api.py +++ b/api/tests/unit_tests/controllers/console/app/test_message_api.py @@ -1,5 +1,7 @@ from __future__ import annotations +from datetime import UTC, datetime + import pytest from controllers.console.app import message as message_module @@ -120,3 +122,24 @@ def test_suggested_questions_response(app, monkeypatch: pytest.MonkeyPatch) -> N response = message_module.SuggestedQuestionsResponse(data=["What is AI?", "How does ML work?"]) assert len(response.data) == 2 assert response.data[0] == "What is AI?" + + +def test_message_detail_response_normalizes_aliases_and_timestamp(app, monkeypatch: pytest.MonkeyPatch) -> None: + """Test MessageDetailResponse normalizes alias fields and datetime timestamps.""" + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = message_module.MessageDetailResponse.model_validate( + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "conversation_id": "550e8400-e29b-41d4-a716-446655440001", + "inputs": {"foo": "bar"}, + "query": "hello", + "re_sign_file_url_answer": "world", + "from_source": "user", + "status": "normal", + "created_at": created_at, + "message_metadata_dict": {"token_usage": 3}, + } + ) + assert response.answer == "world" + assert response.metadata == {"token_usage": 3} + assert response.created_at == int(created_at.timestamp()) diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow.py b/api/tests/unit_tests/controllers/console/app/test_workflow.py index 3607636880..6ff3b19362 100644 --- a/api/tests/unit_tests/controllers/console/app/test_workflow.py +++ b/api/tests/unit_tests/controllers/console/app/test_workflow.py @@ -1,15 +1,16 @@ from __future__ import annotations +import json from datetime import datetime from types import SimpleNamespace from unittest.mock import Mock import pytest -from graphon.file import File, FileTransferMethod, FileType from werkzeug.exceptions import HTTPException, NotFound from controllers.console.app import workflow as workflow_module from controllers.console.app.error import DraftWorkflowNotExist, DraftWorkflowNotSync +from graphon.file import File, FileTransferMethod, FileType def _unwrap(func): @@ -258,6 +259,63 @@ def test_restore_published_workflow_to_draft_returns_400_for_invalid_structure( assert exc.value.description == "invalid workflow graph" +def test_get_published_workflows_marshals_items_before_session_closes(app, monkeypatch: pytest.MonkeyPatch) -> None: + api = workflow_module.PublishedAllWorkflowApi() + handler = _unwrap(api.get) + + session_state = {"open": False} + + class _SessionContext: + def __enter__(self): + session_state["open"] = True + return object() + + def __exit__(self, exc_type, exc, tb): + session_state["open"] = False + return False + + class _SessionMaker: + def begin(self): + return _SessionContext() + + class _Workflow: + @property + def id(self): + assert session_state["open"] is True + return "w1" + + monkeypatch.setattr(workflow_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr(workflow_module, "sessionmaker", lambda *_args, **_kwargs: _SessionMaker()) + monkeypatch.setattr(workflow_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + monkeypatch.setattr( + workflow_module, + "WorkflowService", + lambda: SimpleNamespace( + get_all_published_workflow=lambda **_kwargs: ([_Workflow()], False), + ), + ) + + def _fake_marshal(items, fields): + assert session_state["open"] is True + return [{"id": item.id} for item in items] + + monkeypatch.setattr(workflow_module, "marshal", _fake_marshal) + + with app.test_request_context( + "/apps/app/workflows", + method="GET", + query_string={"page": 1, "limit": 10, "user_id": "", "named_only": "false"}, + ): + response = handler(api, app_model=SimpleNamespace(id="app", workflow_id="wf-1")) + + assert response == { + "items": [{"id": "w1"}], + "page": 1, + "limit": 10, + "has_more": False, + } + + def test_draft_workflow_get_not_found(monkeypatch: pytest.MonkeyPatch) -> None: monkeypatch.setattr( workflow_module, "WorkflowService", lambda: SimpleNamespace(get_draft_workflow=lambda **_k: None) @@ -290,3 +348,87 @@ def test_advanced_chat_run_conversation_not_exists(app, monkeypatch: pytest.Monk ): with pytest.raises(NotFound): handler(api, app_model=SimpleNamespace(id="app")) + + +def test_workflow_online_users_filters_inaccessible_workflow(app, monkeypatch: pytest.MonkeyPatch) -> None: + app_id_1 = "11111111-1111-1111-1111-111111111111" + app_id_2 = "22222222-2222-2222-2222-222222222222" + signed_avatar_url = "https://files.example.com/signed/avatar-1" + sign_avatar = Mock(return_value=signed_avatar_url) + monkeypatch.setattr(workflow_module, "current_account_with_tenant", lambda: (SimpleNamespace(), "tenant-1")) + monkeypatch.setattr( + workflow_module, + "WorkflowService", + lambda: SimpleNamespace(get_accessible_app_ids=lambda app_ids, tenant_id: {app_id_1}), + ) + monkeypatch.setattr(workflow_module.file_helpers, "get_signed_file_url", sign_avatar) + + workflow_module.redis_client.hgetall.side_effect = lambda key: ( + { + b"sid-1": json.dumps( + { + "user_id": "u-1", + "username": "Alice", + "avatar": "avatar-file-id", + "sid": "sid-1", + } + ) + } + if key == f"{workflow_module.WORKFLOW_ONLINE_USERS_PREFIX}{app_id_1}" + else {} + ) + + api = workflow_module.WorkflowOnlineUsersApi() + handler = _unwrap(api.get) + + with app.test_request_context( + f"/apps/workflows/online-users?app_ids={app_id_1},{app_id_2}", + method="GET", + ): + response = handler(api) + + assert response == { + "data": [ + { + "app_id": app_id_1, + "users": [ + { + "user_id": "u-1", + "username": "Alice", + "avatar": signed_avatar_url, + "sid": "sid-1", + } + ], + } + ] + } + workflow_module.redis_client.hgetall.assert_called_once_with( + f"{workflow_module.WORKFLOW_ONLINE_USERS_PREFIX}{app_id_1}" + ) + sign_avatar.assert_called_once_with("avatar-file-id") + + +def test_workflow_online_users_rejects_excessive_workflow_ids(app, monkeypatch: pytest.MonkeyPatch) -> None: + monkeypatch.setattr(workflow_module, "current_account_with_tenant", lambda: (SimpleNamespace(), "tenant-1")) + accessible_app_ids = Mock(return_value=set()) + monkeypatch.setattr( + workflow_module, + "WorkflowService", + lambda: SimpleNamespace(get_accessible_app_ids=accessible_app_ids), + ) + + excessive_ids = ",".join(f"wf-{index}" for index in range(workflow_module.MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS + 1)) + + api = workflow_module.WorkflowOnlineUsersApi() + handler = _unwrap(api.get) + + with app.test_request_context( + f"/apps/workflows/online-users?app_ids={excessive_ids}", + method="GET", + ): + with pytest.raises(HTTPException) as exc: + handler(api) + + assert exc.value.code == 400 + assert "Maximum" in exc.value.description + accessible_app_ids.assert_not_called() diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_app_log_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_app_log_api.py new file mode 100644 index 0000000000..a9853f25b0 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_app_log_api.py @@ -0,0 +1,84 @@ +from __future__ import annotations + +from datetime import UTC, datetime + +from controllers.console.app import workflow_app_log as workflow_app_log_module +from graphon.enums import WorkflowExecutionStatus + + +def test_workflow_app_log_query_parses_bool_and_datetime(): + query = workflow_app_log_module.WorkflowAppLogQuery.model_validate( + { + "detail": "true", + "created_at__before": "2026-01-02T03:04:05Z", + "page": "2", + "limit": "10", + } + ) + + assert query.detail is True + assert query.created_at__before == datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + assert query.page == 2 + assert query.limit == 10 + + +def test_workflow_app_log_pagination_response_normalizes_nested_fields(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = workflow_app_log_module.WorkflowAppLogPaginationResponse.model_validate( + { + "page": 1, + "limit": 20, + "total": 1, + "has_more": False, + "data": [ + { + "id": "log-1", + "workflow_run": { + "id": "run-1", + "status": WorkflowExecutionStatus.SUCCEEDED, + "created_at": created_at, + "finished_at": created_at, + }, + "details": {"trigger_metadata": {}}, + "created_by_account": {"id": "acc-1", "name": "acc", "email": "acc@example.com"}, + "created_at": created_at, + } + ], + } + ).model_dump(mode="json") + + assert response["data"][0]["workflow_run"]["status"] == "succeeded" + assert response["data"][0]["workflow_run"]["created_at"] == int(created_at.timestamp()) + assert response["data"][0]["created_at"] == int(created_at.timestamp()) + + +def test_workflow_archived_log_pagination_response_normalizes_nested_fields(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = workflow_app_log_module.WorkflowArchivedLogPaginationResponse.model_validate( + { + "page": 1, + "limit": 20, + "total": 1, + "has_more": False, + "data": [ + { + "id": "archived-1", + "workflow_run": { + "id": "run-1", + "status": WorkflowExecutionStatus.FAILED, + }, + "trigger_metadata": {"type": "trigger-plugin"}, + "created_by_end_user": { + "id": "eu-1", + "type": "anonymous", + "is_anonymous": True, + "session_id": "session-1", + }, + "created_at": created_at, + } + ], + } + ).model_dump(mode="json") + + assert response["data"][0]["workflow_run"]["status"] == "failed" + assert response["data"][0]["created_at"] == int(created_at.timestamp()) diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_comment_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_comment_api.py new file mode 100644 index 0000000000..85afcf0e60 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_comment_api.py @@ -0,0 +1,201 @@ +from __future__ import annotations + +from contextlib import nullcontext +from dataclasses import dataclass +from datetime import datetime +from types import SimpleNamespace +from unittest.mock import MagicMock, PropertyMock, patch + +import pytest +from flask import Flask +from werkzeug.exceptions import Forbidden + +from controllers.console import console_ns +from controllers.console import wraps as console_wraps +from controllers.console.app import workflow_comment as workflow_comment_module +from controllers.console.app import wraps as app_wraps +from libs import login as login_lib +from models.account import Account, AccountStatus, TenantAccountRole + + +def _make_account(role: TenantAccountRole) -> Account: + account = Account(name="tester", email="tester@example.com") + account.status = AccountStatus.ACTIVE + account.role = role + account.id = "account-123" # type: ignore[assignment] + account._current_tenant = SimpleNamespace(id="tenant-123") # type: ignore[attr-defined] + account._get_current_object = lambda: account # type: ignore[attr-defined] + return account + + +def _make_app() -> SimpleNamespace: + return SimpleNamespace(id="app-123", tenant_id="tenant-123", status="normal", mode="workflow") + + +def _patch_console_guards(monkeypatch: pytest.MonkeyPatch, account: Account, app_model: SimpleNamespace) -> None: + monkeypatch.setattr(login_lib.dify_config, "LOGIN_DISABLED", True) + monkeypatch.setattr(login_lib, "current_user", account) + monkeypatch.setattr(login_lib, "current_account_with_tenant", lambda: (account, account.current_tenant_id)) + monkeypatch.setattr(login_lib, "check_csrf_token", lambda *_, **__: None) + monkeypatch.setattr(console_wraps, "current_account_with_tenant", lambda: (account, account.current_tenant_id)) + monkeypatch.setattr(console_wraps.dify_config, "EDITION", "CLOUD") + monkeypatch.setattr(app_wraps, "current_account_with_tenant", lambda: (account, account.current_tenant_id)) + monkeypatch.setattr(app_wraps, "_load_app_model", lambda _app_id: app_model) + monkeypatch.setattr(workflow_comment_module, "current_user", account) + + +def _patch_write_services(monkeypatch: pytest.MonkeyPatch) -> None: + for method_name in ( + "create_comment", + "update_comment", + "delete_comment", + "resolve_comment", + "validate_comment_access", + "create_reply", + "update_reply", + "delete_reply", + ): + monkeypatch.setattr(workflow_comment_module.WorkflowCommentService, method_name, MagicMock()) + + +def _patch_payload(payload: dict[str, object] | None): + if payload is None: + return nullcontext() + return patch.object( + type(console_ns), + "payload", + new_callable=PropertyMock, + return_value=payload, + ) + + +@dataclass(frozen=True) +class WriteCase: + resource_cls: type + method_name: str + path: str + kwargs: dict[str, str] + payload: dict[str, object] | None = None + + +@pytest.mark.parametrize( + "case", + [ + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentListApi, + method_name="post", + path="/console/api/apps/app-123/workflow/comments", + kwargs={"app_id": "app-123"}, + payload={"content": "hello", "position_x": 1.0, "position_y": 2.0, "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentDetailApi, + method_name="put", + path="/console/api/apps/app-123/workflow/comments/comment-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + payload={"content": "hello", "position_x": 1.0, "position_y": 2.0, "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentDetailApi, + method_name="delete", + path="/console/api/apps/app-123/workflow/comments/comment-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentResolveApi, + method_name="post", + path="/console/api/apps/app-123/workflow/comments/comment-1/resolve", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentReplyApi, + method_name="post", + path="/console/api/apps/app-123/workflow/comments/comment-1/replies", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + payload={"content": "reply", "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentReplyDetailApi, + method_name="put", + path="/console/api/apps/app-123/workflow/comments/comment-1/replies/reply-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1", "reply_id": "reply-1"}, + payload={"content": "reply", "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentReplyDetailApi, + method_name="delete", + path="/console/api/apps/app-123/workflow/comments/comment-1/replies/reply-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1", "reply_id": "reply-1"}, + ), + ], +) +def test_write_endpoints_require_edit_permission(app: Flask, monkeypatch: pytest.MonkeyPatch, case: WriteCase) -> None: + app.config.setdefault("RESTX_MASK_HEADER", "X-Fields") + account = _make_account(TenantAccountRole.NORMAL) + app_model = _make_app() + _patch_console_guards(monkeypatch, account, app_model) + _patch_write_services(monkeypatch) + + with app.test_request_context(case.path, method=case.method_name.upper(), json=case.payload): + with _patch_payload(case.payload): + handler = getattr(case.resource_cls(), case.method_name) + with pytest.raises(Forbidden): + handler(**case.kwargs) + + +def test_create_comment_allows_editor(app: Flask, monkeypatch: pytest.MonkeyPatch) -> None: + app.config.setdefault("RESTX_MASK_HEADER", "X-Fields") + account = _make_account(TenantAccountRole.EDITOR) + app_model = _make_app() + _patch_console_guards(monkeypatch, account, app_model) + + create_comment_mock = MagicMock(return_value={"id": "comment-1"}) + monkeypatch.setattr(workflow_comment_module.WorkflowCommentService, "create_comment", create_comment_mock) + payload = {"content": "hello", "position_x": 1.0, "position_y": 2.0, "mentioned_user_ids": []} + + with app.test_request_context("/console/api/apps/app-123/workflow/comments", method="POST", json=payload): + with _patch_payload(payload): + result = workflow_comment_module.WorkflowCommentListApi().post(app_id="app-123") + + if isinstance(result, tuple): + response = result[0] + else: + response = result + assert response["id"] == "comment-1" + create_comment_mock.assert_called_once_with( + tenant_id="tenant-123", + app_id="app-123", + created_by="account-123", + content="hello", + position_x=1.0, + position_y=2.0, + mentioned_user_ids=[], + ) + + +def test_update_comment_omits_mentions_when_payload_does_not_include_them( + app: Flask, monkeypatch: pytest.MonkeyPatch +) -> None: + app.config.setdefault("RESTX_MASK_HEADER", "X-Fields") + account = _make_account(TenantAccountRole.EDITOR) + app_model = _make_app() + _patch_console_guards(monkeypatch, account, app_model) + + update_comment_mock = MagicMock(return_value={"id": "comment-1", "updated_at": datetime(2024, 1, 1, 12, 0, 0)}) + monkeypatch.setattr(workflow_comment_module.WorkflowCommentService, "update_comment", update_comment_mock) + payload = {"content": "hello", "position_x": 10.0, "position_y": 20.0} + + with app.test_request_context("/console/api/apps/app-123/workflow/comments/comment-1", method="PUT", json=payload): + with _patch_payload(payload): + workflow_comment_module.WorkflowCommentDetailApi().put(app_id="app-123", comment_id="comment-1") + + update_comment_mock.assert_called_once_with( + tenant_id="tenant-123", + app_id="app-123", + comment_id="comment-1", + user_id="account-123", + content="hello", + position_x=10.0, + position_y=20.0, + mentioned_user_ids=None, + ) diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py index e11102acb1..c4a8148446 100644 --- a/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py @@ -6,14 +6,14 @@ from unittest.mock import Mock import pytest from flask import Flask -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus -from graphon.nodes.human_input.entities import FormInput, UserAction -from graphon.nodes.human_input.enums import FormInputType from controllers.console import wraps as console_wraps from controllers.console.app import workflow_run as workflow_run_module from controllers.web.error import NotFoundError +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus +from graphon.nodes.human_input.entities import FormInput, UserAction +from graphon.nodes.human_input.enums import FormInputType from libs import login as login_lib from models.account import Account, AccountStatus, TenantAccountRole from models.workflow import WorkflowRun diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_trigger_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_trigger_api.py new file mode 100644 index 0000000000..5363aa154f --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_trigger_api.py @@ -0,0 +1,54 @@ +from __future__ import annotations + +from datetime import UTC, datetime +from types import SimpleNamespace + +from controllers.console.app import workflow_trigger as workflow_trigger_module + + +def test_parser_models_validate(): + parser = workflow_trigger_module.Parser(node_id="node-1") + enable_parser = workflow_trigger_module.ParserEnable( + trigger_id="550e8400-e29b-41d4-a716-446655440000", enable_trigger=True + ) + + assert parser.node_id == "node-1" + assert enable_parser.enable_trigger is True + + +def test_workflow_trigger_response_serializes_datetime(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + trigger = SimpleNamespace( + id="trigger-1", + trigger_type="trigger-plugin", + title="Trigger", + node_id="node-1", + provider_name="provider", + icon="https://example.com/icon", + status="enabled", + created_at=created_at, + updated_at=created_at, + ) + + payload = workflow_trigger_module.WorkflowTriggerResponse.model_validate(trigger, from_attributes=True).model_dump( + mode="json" + ) + assert payload["id"] == "trigger-1" + assert payload["created_at"] == "2026-01-02T03:04:05Z" + assert payload["updated_at"] == "2026-01-02T03:04:05Z" + + +def test_webhook_trigger_response_serializes_datetime(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + webhook = { + "id": "webhook-1", + "webhook_id": "whk-1", + "webhook_url": "https://example.com/hook", + "webhook_debug_url": "https://example.com/hook/debug", + "node_id": "node-1", + "created_at": created_at, + } + + payload = workflow_trigger_module.WebhookTriggerResponse.model_validate(webhook).model_dump(mode="json") + assert payload["webhook_id"] == "whk-1" + assert payload["created_at"] == "2026-01-02T03:04:05Z" diff --git a/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py b/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py index 740da1f1df..b19a1740eb 100644 --- a/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py +++ b/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py @@ -5,7 +5,6 @@ from unittest.mock import MagicMock, patch import pytest from flask_restx import marshal -from graphon.variables.types import SegmentType from controllers.console.app.workflow_draft_variable import ( _WORKFLOW_DRAFT_VARIABLE_FIELDS, @@ -16,6 +15,7 @@ from controllers.console.app.workflow_draft_variable import ( ) from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID from factories.variable_factory import build_segment +from graphon.variables.types import SegmentType from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 from models.workflow import WorkflowDraftVariable, WorkflowDraftVariableFile diff --git a/api/tests/unit_tests/controllers/console/auth/test_login_logout.py b/api/tests/unit_tests/controllers/console/auth/test_login_logout.py index 560971206f..0cf97da878 100644 --- a/api/tests/unit_tests/controllers/console/auth/test_login_logout.py +++ b/api/tests/unit_tests/controllers/console/auth/test_login_logout.py @@ -14,18 +14,20 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask from flask_restx import Api +from werkzeug.exceptions import Unauthorized from controllers.console.auth.error import ( AuthenticationFailedError, EmailPasswordLoginLimitError, InvalidEmailError, ) -from controllers.console.auth.login import LoginApi, LogoutApi +from controllers.console.auth.login import EmailCodeLoginApi, LoginApi, LogoutApi from controllers.console.error import ( AccountBannedError, AccountInFreezeError, WorkspacesLimitExceeded, ) +from services.entities.auth_entities import LoginFailureReason from services.errors.account import AccountLoginError, AccountPasswordError @@ -34,6 +36,11 @@ def encode_password(password: str) -> str: return base64.b64encode(password.encode("utf-8")).decode() +def encode_code(code: str) -> str: + """Helper to encode verification code as Base64 for testing.""" + return base64.b64encode(code.encode("utf-8")).decode() + + class TestLoginApi: """Test cases for the LoginApi endpoint.""" @@ -197,12 +204,17 @@ class TestLoginApi: mock_get_invitation.return_value = None # Act & Assert - with app.test_request_context( - "/login", method="POST", json={"email": "test@example.com", "password": encode_password("password")} - ): - login_api = LoginApi() - with pytest.raises(EmailPasswordLoginLimitError): - login_api.post() + with patch("controllers.console.auth.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/login", method="POST", json={"email": "test@example.com", "password": encode_password("password")} + ): + login_api = LoginApi() + with pytest.raises(EmailPasswordLoginLimitError): + login_api.post() + + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "test@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.LOGIN_RATE_LIMITED @patch("controllers.console.wraps.db") @patch("controllers.console.auth.login.dify_config.BILLING_ENABLED", True) @@ -220,12 +232,17 @@ class TestLoginApi: mock_is_frozen.return_value = True # Act & Assert - with app.test_request_context( - "/login", method="POST", json={"email": "frozen@example.com", "password": encode_password("password")} - ): - login_api = LoginApi() - with pytest.raises(AccountInFreezeError): - login_api.post() + with patch("controllers.console.auth.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/login", method="POST", json={"email": "frozen@example.com", "password": encode_password("password")} + ): + login_api = LoginApi() + with pytest.raises(AccountInFreezeError): + login_api.post() + + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "frozen@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.ACCOUNT_IN_FREEZE @patch("controllers.console.wraps.db") @patch("controllers.console.auth.login.dify_config.BILLING_ENABLED", False) @@ -257,14 +274,20 @@ class TestLoginApi: mock_authenticate.side_effect = AccountPasswordError("Invalid password") # Act & Assert - with app.test_request_context( - "/login", method="POST", json={"email": "test@example.com", "password": encode_password("WrongPass123!")} - ): - login_api = LoginApi() - with pytest.raises(AuthenticationFailedError): - login_api.post() + with patch("controllers.console.auth.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/login", + method="POST", + json={"email": "test@example.com", "password": encode_password("WrongPass123!")}, + ): + login_api = LoginApi() + with pytest.raises(AuthenticationFailedError): + login_api.post() mock_add_rate_limit.assert_called_once_with("test@example.com") + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "test@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.INVALID_CREDENTIALS @patch("controllers.console.wraps.db") @patch("controllers.console.auth.login.dify_config.BILLING_ENABLED", False) @@ -288,12 +311,19 @@ class TestLoginApi: mock_authenticate.side_effect = AccountLoginError("Account is banned") # Act & Assert - with app.test_request_context( - "/login", method="POST", json={"email": "banned@example.com", "password": encode_password("ValidPass123!")} - ): - login_api = LoginApi() - with pytest.raises(AccountBannedError): - login_api.post() + with patch("controllers.console.auth.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/login", + method="POST", + json={"email": "banned@example.com", "password": encode_password("ValidPass123!")}, + ): + login_api = LoginApi() + with pytest.raises(AccountBannedError): + login_api.post() + + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "banned@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.ACCOUNT_BANNED @patch("controllers.console.wraps.db") @patch("controllers.console.auth.login.dify_config.BILLING_ENABLED", False) @@ -417,6 +447,36 @@ class TestLoginApi: mock_add_rate_limit.assert_not_called() mock_reset_rate_limit.assert_called_once_with("upper@example.com") + @patch("controllers.console.wraps.db") + @patch("controllers.console.auth.login.AccountService.get_email_code_login_data") + @patch("controllers.console.auth.login.AccountService.revoke_email_code_login_token") + @patch("controllers.console.auth.login._get_account_with_case_fallback") + def test_email_code_login_logs_banned_account( + self, + mock_get_account, + mock_revoke_token, + mock_get_token_data, + mock_db, + app, + ): + mock_db.session.query.return_value.first.return_value = MagicMock() + mock_get_token_data.return_value = {"email": "User@Example.com", "code": "123456"} + mock_get_account.side_effect = Unauthorized("Account is banned.") + + with patch("controllers.console.auth.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/email-code-login/validity", + method="POST", + json={"email": "User@Example.com", "code": encode_code("123456"), "token": "token-123"}, + ): + with pytest.raises(AccountBannedError): + EmailCodeLoginApi().post() + + mock_revoke_token.assert_called_once_with("token-123") + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "user@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.ACCOUNT_BANNED + class TestLogoutApi: """Test cases for the LogoutApi endpoint.""" diff --git a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py index 9c9f8da87c..5136922e88 100644 --- a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py +++ b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from werkzeug.exceptions import Forbidden, NotFound from controllers.console import console_ns @@ -18,6 +17,7 @@ from controllers.console.datasets.rag_pipeline.datasource_auth import ( DatasourceUpdateProviderNameApi, ) from core.plugin.impl.oauth import OAuthHandler +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from services.datasource_provider_service import DatasourceProviderService from services.plugin.oauth_service import OAuthProxyService diff --git a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py index 6ef8ccfdbd..63950736c5 100644 --- a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py +++ b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py @@ -2,7 +2,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Response -from graphon.variables.types import SegmentType from controllers.console import console_ns from controllers.console.app.error import DraftWorkflowNotExist @@ -16,6 +15,7 @@ from controllers.console.datasets.rag_pipeline.rag_pipeline_draft_variable impor ) from controllers.web.error import InvalidArgumentError, NotFoundError from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from graphon.variables.types import SegmentType from models.account import Account diff --git a/api/tests/unit_tests/controllers/console/datasets/test_datasets.py b/api/tests/unit_tests/controllers/console/datasets/test_datasets.py index 8555900f4e..94d6c17915 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_datasets.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_datasets.py @@ -1555,7 +1555,17 @@ class TestDatasetApiKeyApi: method = unwrap(api.get) mock_key_1 = MagicMock(spec=ApiToken) + mock_key_1.id = "key-1" + mock_key_1.type = "dataset" + mock_key_1.token = "ds-abc" + mock_key_1.last_used_at = None + mock_key_1.created_at = None mock_key_2 = MagicMock(spec=ApiToken) + mock_key_2.id = "key-2" + mock_key_2.type = "dataset" + mock_key_2.token = "ds-def" + mock_key_2.last_used_at = None + mock_key_2.created_at = None with ( app.test_request_context("/"), @@ -1570,13 +1580,26 @@ class TestDatasetApiKeyApi: ): response = method(api) - assert "items" in response - assert response["items"] == [mock_key_1, mock_key_2] + assert "data" in response + assert len(response["data"]) == 2 + assert response["data"][0]["id"] == "key-1" + assert response["data"][0]["token"] == "ds-abc" + assert response["data"][1]["id"] == "key-2" + assert response["data"][1]["token"] == "ds-def" def test_post_create_api_key_success(self, app): api = DatasetApiKeyApi() method = unwrap(api.post) + mock_token = MagicMock() + mock_token.id = "new-key-id" + mock_token.last_used_at = None + mock_token.created_at = datetime.datetime(2024, 1, 1, 0, 0, 0, tzinfo=datetime.UTC) + + mock_api_token_cls = MagicMock() + mock_api_token_cls.return_value = mock_token + mock_api_token_cls.generate_api_key.return_value = "dataset-abc123" + with ( app.test_request_context("/"), patch( @@ -1588,8 +1611,8 @@ class TestDatasetApiKeyApi: return_value=3, ), patch( - "controllers.console.datasets.datasets.ApiToken.generate_api_key", - return_value="dataset-abc123", + "controllers.console.datasets.datasets.ApiToken", + mock_api_token_cls, ), patch( "controllers.console.datasets.datasets.db.session.add", @@ -1603,9 +1626,11 @@ class TestDatasetApiKeyApi: response, status = method(api) assert status == 200 - assert isinstance(response, ApiToken) - assert response.token == "dataset-abc123" - assert response.type == "dataset" + assert isinstance(response, dict) + assert response["id"] == "new-key-id" + assert response["token"] == "dataset-abc123" + assert response["type"] == "dataset" + assert response["created_at"] is not None def test_post_exceed_max_keys(self, app): api = DatasetApiKeyApi() diff --git a/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py b/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py index ce2278de4f..d9b02ac453 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py @@ -1,3 +1,4 @@ +from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest @@ -215,17 +216,23 @@ class TestDatasetDocumentListApi: method = unwrap(api.post) payload = {"indexing_technique": "economy"} + created_dataset = SimpleNamespace(id="ds-1", name="Dataset", indexing_technique="economy") + created_document = SimpleNamespace(id="doc-1", name="Document", doc_metadata_details=None) with ( app.test_request_context("/", json=payload), patch.object(type(console_ns), "payload", payload), + patch( + "controllers.console.datasets.datasets_document.DatasetService.get_dataset", + return_value=created_dataset, + ), patch( "controllers.console.datasets.datasets_document.DocumentService.document_create_args_validate", return_value=None, ), patch( "controllers.console.datasets.datasets_document.DocumentService.save_document_with_dataset_id", - return_value=([MagicMock()], "batch-1"), + return_value=([created_document], "batch-1"), ), ): response = method(api, "ds-1") diff --git a/api/tests/unit_tests/controllers/console/datasets/test_external.py b/api/tests/unit_tests/controllers/console/datasets/test_external.py index 161d0c41e8..514bbbe040 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_external.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_external.py @@ -1,3 +1,4 @@ +from importlib import import_module from unittest.mock import MagicMock, PropertyMock, patch import pytest @@ -11,6 +12,7 @@ from controllers.console.datasets.external import ( BedrockRetrievalApi, ExternalApiTemplateApi, ExternalApiTemplateListApi, + ExternalApiUseCheckApi, ExternalDatasetCreateApi, ExternalKnowledgeHitTestingApi, ) @@ -19,6 +21,8 @@ from services.external_knowledge_service import ExternalDatasetService from services.hit_testing_service import HitTestingService from services.knowledge_service import ExternalDatasetTestService +external_controller = import_module("controllers.console.datasets.external") + def unwrap(func): while hasattr(func, "__wrapped__"): @@ -44,10 +48,11 @@ def current_user(): @pytest.fixture(autouse=True) -def mock_auth(mocker, current_user): - mocker.patch( - "controllers.console.datasets.external.current_account_with_tenant", - return_value=(current_user, "tenant-1"), +def mock_auth(monkeypatch, current_user): + monkeypatch.setattr( + external_controller, + "current_account_with_tenant", + lambda: (current_user, "tenant-1"), ) @@ -136,6 +141,26 @@ class TestExternalApiTemplateApi: method(api, "api-id") +class TestExternalApiUseCheckApi: + def test_get_scopes_usage_check_to_current_tenant(self, app): + api = ExternalApiUseCheckApi() + method = unwrap(api.get) + + with ( + app.test_request_context("/"), + patch.object( + ExternalDatasetService, + "external_knowledge_api_use_check", + return_value=(True, 2), + ) as mock_use_check, + ): + response, status = method(api, "api-id") + + assert status == 200 + assert response == {"is_using": True, "count": 2} + mock_use_check.assert_called_once_with("api-id", "tenant-1") + + class TestExternalDatasetCreateApi: def test_create_success(self, app): api = ExternalDatasetCreateApi() diff --git a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py index 726c0a5cf3..09ed2aaf69 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py @@ -99,6 +99,57 @@ class TestHitTestingApi: assert "records" in result assert result["records"] == [] + def test_hit_testing_success_with_optional_record_fields(self, app, dataset, dataset_id): + api = HitTestingApi() + method = unwrap(api.post) + + payload = { + "query": "what is vector search", + } + records = [ + { + "segment": None, + "child_chunks": [], + "score": None, + "tsne_position": None, + "files": [], + "summary": None, + } + ] + + with ( + app.test_request_context("/"), + patch.object( + type(console_ns), + "payload", + new_callable=PropertyMock, + return_value=payload, + ), + patch.object( + HitTestingPayload, + "model_validate", + return_value=MagicMock(model_dump=lambda **_: payload), + ), + patch.object( + HitTestingApi, + "get_and_validate_dataset", + return_value=dataset, + ), + patch.object( + HitTestingApi, + "hit_testing_args_check", + ), + patch.object( + HitTestingApi, + "perform_hit_testing", + return_value={"query": payload["query"], "records": records}, + ), + ): + result = method(api, dataset_id) + + assert result["query"] == payload["query"] + assert result["records"] == records + def test_hit_testing_dataset_not_found(self, app, dataset_id): api = HitTestingApi() method = unwrap(api.post) diff --git a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py index 710c9be684..e4acd91b76 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import Forbidden, InternalServerError, NotFound import services @@ -21,6 +20,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from models.account import Account from services.dataset_service import DatasetService from services.hit_testing_service import HitTestingService diff --git a/api/tests/unit_tests/controllers/console/explore/test_audio.py b/api/tests/unit_tests/controllers/console/explore/test_audio.py index 66c9ba48c5..b4b57022e2 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_audio.py +++ b/api/tests/unit_tests/controllers/console/explore/test_audio.py @@ -2,7 +2,6 @@ from io import BytesIO from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError import controllers.console.explore.audio as audio_module @@ -20,6 +19,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from services.errors.audio import ( AudioTooLargeServiceError, NoAudioUploadedServiceError, diff --git a/api/tests/unit_tests/controllers/console/explore/test_message.py b/api/tests/unit_tests/controllers/console/explore/test_message.py index 2e4ca4f2a4..145cc9cdd7 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_message.py +++ b/api/tests/unit_tests/controllers/console/explore/test_message.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError, NotFound import controllers.console.explore.message as module @@ -22,6 +21,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from services.errors.conversation import ConversationNotExistsError from services.errors.message import ( FirstMessageNotExistsError, diff --git a/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py b/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py index 02c7507ea7..76c863577a 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py +++ b/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py @@ -1,6 +1,7 @@ from unittest.mock import MagicMock, patch import controllers.console.explore.recommended_app as module +from models.model import AppMode, IconType def unwrap(func): @@ -90,3 +91,48 @@ class TestRecommendedAppApi: service_mock.assert_called_once_with("11111111-1111-1111-1111-111111111111") assert result == result_data + + +class TestRecommendedAppResponseModels: + def test_recommended_app_info_response_computes_icon_url(self): + with patch.object(module, "build_icon_url", return_value="https://signed/icon.png"): + payload = module.RecommendedAppInfoResponse.model_validate( + { + "id": "app-1", + "name": "App", + "mode": AppMode.CHAT, + "icon": "icon.png", + "icon_type": IconType.IMAGE, + "icon_background": "#fff", + } + ).model_dump(mode="json") + + assert payload["icon_url"] == "https://signed/icon.png" + + def test_recommended_app_list_response_serialization(self): + response = module.RecommendedAppListResponse.model_validate( + { + "recommended_apps": [ + { + "app": { + "id": "app-1", + "name": "App", + "mode": "chat", + "icon": "icon.png", + "icon_type": "emoji", + "icon_background": "#fff", + }, + "app_id": "app-1", + "description": "desc", + "category": "cat", + "position": 1, + "is_listed": True, + "can_trial": False, + } + ], + "categories": ["cat"], + } + ).model_dump(mode="json") + + assert response["recommended_apps"][0]["app_id"] == "app-1" + assert response["categories"] == ["cat"] diff --git a/api/tests/unit_tests/controllers/console/explore/test_trial.py b/api/tests/unit_tests/controllers/console/explore/test_trial.py index 04beb31389..3625056af9 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_trial.py +++ b/api/tests/unit_tests/controllers/console/explore/test_trial.py @@ -3,7 +3,6 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import Forbidden, InternalServerError, NotFound import controllers.console.explore.trial as module @@ -26,6 +25,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from models import Account from models.account import TenantStatus from models.model import AppMode @@ -94,7 +94,7 @@ class TestTrialAppWorkflowRunApi: with app.test_request_context("/"): with pytest.raises(NotWorkflowAppError): - method(MagicMock(mode=AppMode.CHAT)) + method(api, MagicMock(mode=AppMode.CHAT)) def test_success(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -106,7 +106,7 @@ class TestTrialAppWorkflowRunApi: patch.object(module.AppGenerateService, "generate", return_value=MagicMock()), patch.object(module.RecommendedAppService, "add_trial_app_record"), ): - result = method(trial_app_workflow) + result = method(api, trial_app_workflow) assert result is not None @@ -124,7 +124,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ProviderNotInitializeError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_quota_exceeded(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -140,7 +140,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ProviderQuotaExceededError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_model_not_support(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -156,7 +156,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ProviderModelCurrentlyNotSupportError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_invoke_error(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -172,7 +172,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(CompletionRequestError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_rate_limit_error(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -188,7 +188,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(InvokeRateLimitHttpError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_value_error(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -204,7 +204,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ValueError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_generic_exception(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -220,7 +220,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(InternalServerError): - method(trial_app_workflow) + method(api, trial_app_workflow) class TestTrialChatApi: @@ -566,7 +566,7 @@ class TestTrialMessageSuggestedQuestionApi: with app.test_request_context("/"): with pytest.raises(NotChatAppError): - method(api, MagicMock(mode="completion"), str(uuid4())) + method(MagicMock(mode="completion"), str(uuid4())) def test_success(self, app, trial_app_chat, account): api = module.TrialMessageSuggestedQuestionApi() @@ -581,7 +581,7 @@ class TestTrialMessageSuggestedQuestionApi: return_value=["q1", "q2"], ), ): - result = method(api, trial_app_chat, str(uuid4())) + result = method(trial_app_chat, str(uuid4())) assert result == {"data": ["q1", "q2"]} @@ -599,7 +599,7 @@ class TestTrialMessageSuggestedQuestionApi: ), ): with pytest.raises(NotFound): - method(api, trial_app_chat, str(uuid4())) + method(trial_app_chat, str(uuid4())) class TestTrialAppParameterApi: @@ -931,7 +931,7 @@ class TestTrialAppWorkflowTaskStopApi: with app.test_request_context("/"): with pytest.raises(NotWorkflowAppError): - method(trial_app_chat, str(uuid4())) + method(api, trial_app_chat, str(uuid4())) def test_success(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowTaskStopApi() @@ -944,7 +944,7 @@ class TestTrialAppWorkflowTaskStopApi: patch.object(module.AppQueueManager, "set_stop_flag_no_user_check") as mock_set_flag, patch.object(module.GraphEngineManager, "send_stop_command") as mock_send_cmd, ): - result = method(trial_app_workflow, task_id) + result = method(api, trial_app_workflow, task_id) assert result == {"result": "success"} mock_set_flag.assert_called_once_with(task_id) diff --git a/api/tests/unit_tests/controllers/console/tag/test_tags.py b/api/tests/unit_tests/controllers/console/tag/test_tags.py index e89b89c8b1..2be5a21f28 100644 --- a/api/tests/unit_tests/controllers/console/tag/test_tags.py +++ b/api/tests/unit_tests/controllers/console/tag/test_tags.py @@ -1,9 +1,11 @@ +from types import SimpleNamespace from unittest.mock import MagicMock, PropertyMock, patch import pytest from flask import Flask from werkzeug.exceptions import Forbidden +import controllers.console.tag.tags as module from controllers.console import console_ns from controllers.console.tag.tags import ( TagBindingCreateApi, @@ -83,13 +85,20 @@ class TestTagListApi: ), patch( "controllers.console.tag.tags.TagService.get_tags", - return_value=[{"id": "1", "name": "tag"}], + return_value=[ + SimpleNamespace( + id="1", + name="tag", + type=TagType.KNOWLEDGE, + binding_count=1, + ) + ], ), ): result, status = method(api) assert status == 200 - assert isinstance(result, list) + assert result == [{"id": "1", "name": "tag", "type": "knowledge", "binding_count": "1"}] def test_post_success(self, app, admin_user, tag, payload_patch): api = TagListApi() @@ -113,6 +122,7 @@ class TestTagListApi: assert status == 200 assert result["name"] == "test-tag" + assert result["binding_count"] == "0" def test_post_forbidden(self, app, readonly_user, payload_patch): api = TagListApi() @@ -158,7 +168,7 @@ class TestTagUpdateDeleteApi: result, status = method(api, "tag-1") assert status == 200 - assert result["binding_count"] == 3 + assert result["binding_count"] == "3" def test_patch_forbidden(self, app, readonly_user, payload_patch): api = TagUpdateDeleteApi() @@ -277,3 +287,13 @@ class TestTagBindingDeleteApi: ): with pytest.raises(Forbidden): method(api) + + +class TestTagResponseModel: + def test_tag_response_normalizes_enum_type(self): + payload = module.TagResponse.model_validate( + {"id": "tag-1", "name": "tag", "type": TagType.KNOWLEDGE, "binding_count": 1} + ).model_dump(mode="json") + + assert payload["type"] == "knowledge" + assert payload["binding_count"] == "1" diff --git a/api/tests/unit_tests/controllers/console/test_workspace_account.py b/api/tests/unit_tests/controllers/console/test_workspace_account.py index 7f9fe9cbf9..c513be950b 100644 --- a/api/tests/unit_tests/controllers/console/test_workspace_account.py +++ b/api/tests/unit_tests/controllers/console/test_workspace_account.py @@ -11,7 +11,7 @@ from controllers.console.workspace.account import ( ChangeEmailSendEmailApi, CheckEmailUnique, ) -from models import Account +from models import Account, AccountStatus from services.account_service import AccountService @@ -33,7 +33,7 @@ def _build_account(email: str, account_id: str = "acc", tenant: object | None = account = Account(name=account_id, email=email) account.email = email account.id = account_id - account.status = "active" + account.status = AccountStatus.ACTIVE account._current_tenant = tenant_obj return account @@ -233,15 +233,20 @@ class TestCheckEmailUnique: def test_get_account_by_email_with_case_fallback_uses_lowercase_lookup(): - session = MagicMock() + mock_session = MagicMock() first = MagicMock() first.scalar_one_or_none.return_value = None second = MagicMock() expected_account = MagicMock() second.scalar_one_or_none.return_value = expected_account - session.execute.side_effect = [first, second] + mock_session.execute.side_effect = [first, second] - result = AccountService.get_account_by_email_with_case_fallback("Mixed@Test.com", session=session) + mock_factory = MagicMock() + mock_factory.create_session.return_value.__enter__ = MagicMock(return_value=mock_session) + mock_factory.create_session.return_value.__exit__ = MagicMock(return_value=False) + + with patch("services.account_service.session_factory", mock_factory): + result = AccountService.get_account_by_email_with_case_fallback("Mixed@Test.com") assert result is expected_account - assert session.execute.call_count == 2 + assert mock_session.execute.call_count == 2 diff --git a/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py b/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py index 9c42ee9529..b2f949c6e2 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py @@ -11,9 +11,10 @@ from unittest.mock import MagicMock import pytest from flask import Flask from flask.views import MethodView +from werkzeug.exceptions import Forbidden + from graphon.model_runtime.entities.model_entities import ModelType from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from werkzeug.exceptions import Forbidden if not hasattr(builtins, "MethodView"): builtins.MethodView = MethodView # type: ignore[attr-defined] diff --git a/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py b/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py index fb9eec98cb..168479af1e 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic_core import ValidationError from werkzeug.exceptions import Forbidden @@ -14,6 +13,7 @@ from controllers.console.workspace.model_providers import ( ModelProviderValidateApi, PreferredProviderTypeUpdateApi, ) +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError VALID_UUID = "123e4567-e89b-12d3-a456-426614174000" INVALID_UUID = "123" diff --git a/api/tests/unit_tests/controllers/console/workspace/test_models.py b/api/tests/unit_tests/controllers/console/workspace/test_models.py index c829327bc7..f0d32f81fb 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_models.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_models.py @@ -2,8 +2,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from controllers.console.workspace.models import ( DefaultModelApi, @@ -16,6 +14,8 @@ from controllers.console.workspace.models import ( ModelProviderModelParameterRuleApi, ModelProviderModelValidateApi, ) +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError def unwrap(func): diff --git a/api/tests/unit_tests/controllers/console/workspace/test_workspace.py b/api/tests/unit_tests/controllers/console/workspace/test_workspace.py index b2d13dbbdf..e82a29f045 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_workspace.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_workspace.py @@ -18,6 +18,7 @@ from controllers.console.workspace.workspace import ( CustomConfigWorkspaceApi, SwitchWorkspaceApi, TenantApi, + TenantInfoResponse, TenantListApi, WebappLogoWorkspaceApi, WorkspaceInfoApi, @@ -435,6 +436,23 @@ class TestTenantApi: assert status == 200 +class TestTenantInfoResponse: + def test_tenant_info_response_normalizes_enum_and_datetime(self): + created_at = naive_utc_now() + payload = TenantInfoResponse.model_validate( + { + "id": "t1", + "status": TenantStatus.NORMAL, + "plan": CloudPlan.TEAM, + "created_at": created_at, + } + ).model_dump(mode="json") + + assert payload["status"] == "normal" + assert payload["plan"] == "team" + assert payload["created_at"] == int(created_at.timestamp()) + + class TestSwitchWorkspaceApi: def test_switch_success(self, app): api = SwitchWorkspaceApi() diff --git a/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py b/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py index 974d8f7bc6..71381e6a2b 100644 --- a/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py +++ b/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py @@ -18,6 +18,7 @@ from controllers.inner_api.app.dsl import ( InnerAppDSLImportPayload, _get_active_account, ) +from models.account import AccountStatus from services.app_dsl_service import ImportStatus @@ -63,7 +64,7 @@ class TestGetActiveAccount: @patch("controllers.inner_api.app.dsl.db") def test_returns_active_account(self, mock_db): mock_account = MagicMock() - mock_account.status = "active" + mock_account.status = AccountStatus.ACTIVE mock_db.session.scalar.return_value = mock_account result = _get_active_account("user@example.com") @@ -74,7 +75,7 @@ class TestGetActiveAccount: @patch("controllers.inner_api.app.dsl.db") def test_returns_none_for_inactive_account(self, mock_db): mock_account = MagicMock() - mock_account.status = "banned" + mock_account.status = AccountStatus.BANNED mock_db.session.scalar.return_value = mock_account result = _get_active_account("banned@example.com") @@ -102,16 +103,16 @@ class TestEnterpriseAppDSLImport: @pytest.fixture def _mock_import_deps(self): - """Patch db, sessionmaker, and AppDslService for import handler tests.""" - mock_session_ctx = MagicMock() - mock_session_ctx.__enter__ = MagicMock(return_value=MagicMock()) - mock_session_ctx.__exit__ = MagicMock(return_value=False) - mock_sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session_ctx))) + """Patch db, Session, and AppDslService for import handler tests.""" + mock_session = MagicMock() + mock_session.__enter__ = MagicMock(return_value=mock_session) + mock_session.__exit__ = MagicMock(return_value=False) with ( patch("controllers.inner_api.app.dsl.db"), - patch("controllers.inner_api.app.dsl.sessionmaker", mock_sessionmaker), + patch("controllers.inner_api.app.dsl.Session", return_value=mock_session), patch("controllers.inner_api.app.dsl.AppDslService") as mock_dsl_cls, ): + self._mock_session = mock_session self._mock_dsl = MagicMock() mock_dsl_cls.return_value = self._mock_dsl yield @@ -147,6 +148,8 @@ class TestEnterpriseAppDSLImport: assert status_code == 200 assert body["status"] == "completed" mock_account.set_tenant_id.assert_called_once_with("ws-123") + self._mock_session.commit.assert_called_once_with() + self._mock_session.rollback.assert_not_called() @pytest.mark.usefixtures("_mock_import_deps") @patch("controllers.inner_api.app.dsl._get_active_account") @@ -162,6 +165,8 @@ class TestEnterpriseAppDSLImport: assert status_code == 202 assert body["status"] == "pending" + self._mock_session.commit.assert_called_once_with() + self._mock_session.rollback.assert_not_called() @pytest.mark.usefixtures("_mock_import_deps") @patch("controllers.inner_api.app.dsl._get_active_account") @@ -177,6 +182,8 @@ class TestEnterpriseAppDSLImport: assert status_code == 400 assert body["status"] == "failed" + self._mock_session.rollback.assert_called_once_with() + self._mock_session.commit.assert_not_called() @patch("controllers.inner_api.app.dsl._get_active_account") def test_import_account_not_found_returns_404(self, mock_get_account, api_instance, app: Flask): diff --git a/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py b/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py index 957d7fbd9b..0895fac3a4 100644 --- a/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py +++ b/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py @@ -2,6 +2,7 @@ Unit tests for inner_api plugin decorators """ +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -232,11 +233,11 @@ class TestGetUserTenant: class PluginTestPayload: """Simple test payload class""" - def __init__(self, data: dict): + def __init__(self, data: dict[str, Any]): self.value = data.get("value") @classmethod - def model_validate(cls, data: dict): + def model_validate(cls, data: dict[str, Any]): return cls(data) @@ -277,7 +278,7 @@ class TestPluginData: # Arrange class InvalidPayload: @classmethod - def model_validate(cls, data: dict): + def model_validate(cls, data: dict[str, Any]): raise Exception("Validation failed") @plugin_data(payload_type=InvalidPayload) diff --git a/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py b/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py index 56a8f94963..7d2193adc6 100644 --- a/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py +++ b/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py @@ -20,6 +20,7 @@ from controllers.inner_api.workspace.workspace import ( WorkspaceCreatePayload, WorkspaceOwnerlessPayload, ) +from models.account import TenantStatus class TestWorkspaceCreatePayload: @@ -98,7 +99,7 @@ class TestEnterpriseWorkspace: mock_tenant.id = "tenant-id" mock_tenant.name = "My Workspace" mock_tenant.plan = "sandbox" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_tenant.created_at = now mock_tenant.updated_at = now mock_tenant_svc.create_tenant.return_value = mock_tenant @@ -162,7 +163,7 @@ class TestEnterpriseWorkspaceNoOwnerEmail: mock_tenant.name = "My Workspace" mock_tenant.encrypt_public_key = "pub-key" mock_tenant.plan = "sandbox" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_tenant.custom_config = None mock_tenant.created_at = now mock_tenant.updated_at = now diff --git a/api/tests/unit_tests/controllers/service_api/app/test_app.py b/api/tests/unit_tests/controllers/service_api/app/test_app.py index 1507bf7a5f..f48ace427d 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_app.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_app.py @@ -10,6 +10,7 @@ from flask import Flask from controllers.service_api.app.app import AppInfoApi, AppMetaApi, AppParameterApi from controllers.service_api.app.error import AppUnavailableError +from models.account import TenantStatus from models.model import App, AppMode from tests.unit_tests.conftest import setup_mock_tenant_account_query @@ -62,7 +63,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL # Mock DB queries for app and tenant mock_db.session.get.side_effect = [ @@ -110,7 +111,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -151,7 +152,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -190,7 +191,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -253,7 +254,7 @@ class TestAppMetaApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -321,7 +322,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -378,7 +379,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app, @@ -424,7 +425,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app, @@ -476,7 +477,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app, diff --git a/api/tests/unit_tests/controllers/service_api/app/test_audio.py b/api/tests/unit_tests/controllers/service_api/app/test_audio.py index 5a8cb4619f..c16ebad739 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_audio.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_audio.py @@ -13,7 +13,6 @@ from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.datastructures import FileStorage from werkzeug.exceptions import InternalServerError @@ -30,6 +29,7 @@ from controllers.service_api.app.error import ( UnsupportedAudioTypeError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.audio_service import AudioService from services.errors.app_model_config import AppModelConfigBrokenError from services.errors.audio import ( @@ -95,30 +95,6 @@ class TestTextToAudioPayload: assert payload.streaming is True -# --------------------------------------------------------------------------- -# AudioService Interface Tests -# --------------------------------------------------------------------------- - - -class TestAudioServiceInterface: - """Test AudioService method interfaces exist.""" - - def test_transcript_asr_method_exists(self): - """Test that AudioService.transcript_asr exists.""" - assert hasattr(AudioService, "transcript_asr") - assert callable(AudioService.transcript_asr) - - def test_transcript_tts_method_exists(self): - """Test that AudioService.transcript_tts exists.""" - assert hasattr(AudioService, "transcript_tts") - assert callable(AudioService.transcript_tts) - - -# --------------------------------------------------------------------------- -# Audio Service Tests -# --------------------------------------------------------------------------- - - class TestAudioServiceInterface: """Test suite for AudioService interface methods.""" diff --git a/api/tests/unit_tests/controllers/service_api/app/test_completion.py b/api/tests/unit_tests/controllers/service_api/app/test_completion.py index 57681d8f5b..3364c07e62 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_completion.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_completion.py @@ -16,7 +16,6 @@ from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import ValidationError from werkzeug.exceptions import BadRequest, NotFound @@ -35,6 +34,7 @@ from controllers.service_api.app.error import ( NotChatAppError, ) from core.errors.error import QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from models.model import App, AppMode, EndUser from services.app_generate_service import AppGenerateService from services.app_task_service import AppTaskService diff --git a/api/tests/unit_tests/controllers/service_api/app/test_conversation.py b/api/tests/unit_tests/controllers/service_api/app/test_conversation.py index dbd06677d8..14c35a9ed5 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_conversation.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_conversation.py @@ -15,6 +15,7 @@ Focus on: import sys import uuid +from datetime import UTC, datetime from types import SimpleNamespace from unittest.mock import Mock, patch @@ -29,11 +30,14 @@ from controllers.service_api.app.conversation import ( ConversationRenameApi, ConversationRenamePayload, ConversationVariableDetailApi, + ConversationVariableInfiniteScrollPaginationResponse, + ConversationVariableResponse, ConversationVariablesApi, ConversationVariablesQuery, ConversationVariableUpdatePayload, ) from controllers.service_api.app.error import NotChatAppError +from graphon.variables.types import SegmentType from models.model import App, AppMode, EndUser from services.conversation_service import ConversationService from services.errors.conversation import ( @@ -261,6 +265,46 @@ class TestConversationVariableUpdatePayload: assert payload.value == nested +class TestConversationVariableResponseModels: + def test_variable_response_normalizes_value_type_and_timestamps(self): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = ConversationVariableResponse.model_validate( + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": SegmentType.INTEGER, + "value": 1, + "description": "desc", + "created_at": created_at, + "updated_at": created_at, + } + ) + assert response.value_type == "number" + assert response.value == "1" + assert response.created_at == int(created_at.timestamp()) + assert response.updated_at == int(created_at.timestamp()) + + def test_variable_pagination_response(self): + response = ConversationVariableInfiniteScrollPaginationResponse.model_validate( + { + "limit": 1, + "has_more": False, + "data": [ + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": "string", + "value": "bar", + } + ], + } + ) + assert response.limit == 1 + assert response.has_more is False + assert len(response.data) == 1 + assert response.data[0].name == "foo" + + class TestConversationAppModeValidation: """Test app mode validation for conversation endpoints.""" @@ -549,6 +593,44 @@ class TestConversationVariablesApiController: with pytest.raises(NotFound): handler(api, app_model=app_model, end_user=end_user, c_id="00000000-0000-0000-0000-000000000001") + def test_success_serializes_response(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + monkeypatch.setattr( + ConversationService, + "get_conversational_variable", + lambda *_args, **_kwargs: SimpleNamespace( + limit=1, + has_more=False, + data=[ + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": SegmentType.INTEGER, + "value": 1, + "created_at": created_at, + "updated_at": created_at, + } + ], + ), + ) + + api = ConversationVariablesApi() + handler = _unwrap(api.get) + app_model = SimpleNamespace(mode=AppMode.CHAT.value) + end_user = SimpleNamespace() + + with app.test_request_context( + "/conversations/1/variables?limit=20", + method="GET", + ): + result = handler(api, app_model=app_model, end_user=end_user, c_id="00000000-0000-0000-0000-000000000001") + + assert result["limit"] == 1 + assert result["has_more"] is False + assert result["data"][0]["value_type"] == "number" + assert result["data"][0]["value"] == "1" + assert result["data"][0]["created_at"] == int(created_at.timestamp()) + class TestConversationVariableDetailApiController: def test_update_type_mismatch(self, app, monkeypatch: pytest.MonkeyPatch) -> None: @@ -602,3 +684,41 @@ class TestConversationVariableDetailApiController: c_id="00000000-0000-0000-0000-000000000001", variable_id="00000000-0000-0000-0000-000000000002", ) + + def test_update_success_serializes_response(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + monkeypatch.setattr( + ConversationService, + "update_conversation_variable", + lambda *_args, **_kwargs: { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": SegmentType.INTEGER, + "value": 1, + "created_at": created_at, + "updated_at": created_at, + }, + ) + + api = ConversationVariableDetailApi() + handler = _unwrap(api.put) + app_model = SimpleNamespace(mode=AppMode.CHAT.value) + end_user = SimpleNamespace() + + with app.test_request_context( + "/conversations/1/variables/2", + method="PUT", + json={"value": 1}, + ): + result = handler( + api, + app_model=app_model, + end_user=end_user, + c_id="00000000-0000-0000-0000-000000000001", + variable_id="00000000-0000-0000-0000-000000000002", + ) + + assert result["id"] == "550e8400-e29b-41d4-a716-446655440000" + assert result["value_type"] == "number" + assert result["value"] == "1" + assert result["created_at"] == int(created_at.timestamp()) diff --git a/api/tests/unit_tests/controllers/service_api/app/test_workflow.py b/api/tests/unit_tests/controllers/service_api/app/test_workflow.py index cfa21bf2dd..da09ec13ce 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_workflow.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_workflow.py @@ -15,11 +15,11 @@ Focus on: import sys import uuid +from datetime import UTC, datetime from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.enums import WorkflowExecutionStatus from werkzeug.exceptions import BadRequest, NotFound from controllers.service_api.app.error import NotWorkflowAppError @@ -36,6 +36,7 @@ from controllers.service_api.app.workflow import ( WorkflowTaskStopApi, ) from controllers.web.error import InvokeRateLimitError as InvokeRateLimitHttpError +from graphon.enums import WorkflowExecutionStatus from models.model import App, AppMode from services.app_generate_service import AppGenerateService from services.errors.app import IsDraftWorkflowError, WorkflowNotFoundError @@ -43,6 +44,22 @@ from services.errors.llm import InvokeRateLimitError from services.workflow_app_service import WorkflowAppService +def _make_mock_workflow_run(run_id: str = "run-1"): + run = Mock() + run.id = run_id + run.workflow_id = "wf-1" + run.status = WorkflowExecutionStatus.SUCCEEDED + run.inputs = {"input": "value"} + run.outputs_dict = {"output": "value"} + run.error = None + run.total_steps = 1 + run.total_tokens = 10 + run.created_at = datetime(2026, 1, 1, tzinfo=UTC) + run.finished_at = datetime(2026, 1, 1, tzinfo=UTC) + run.elapsed_time = 0.1 + return run + + class TestWorkflowRunPayload: """Test suite for WorkflowRunPayload Pydantic model.""" @@ -359,7 +376,7 @@ class TestWorkflowRunDetailApi: handler(api, app_model=app_model, workflow_run_id="run") def test_success(self, monkeypatch: pytest.MonkeyPatch) -> None: - run = SimpleNamespace(id="run") + run = _make_mock_workflow_run(run_id="run") repo = SimpleNamespace(get_workflow_run_by_id=lambda **_kwargs: run) workflow_module = sys.modules["controllers.service_api.app.workflow"] monkeypatch.setattr(workflow_module, "db", SimpleNamespace(engine=object())) @@ -373,7 +390,10 @@ class TestWorkflowRunDetailApi: handler = _unwrap(api.get) app_model = SimpleNamespace(mode=AppMode.WORKFLOW.value, tenant_id="t1", id="a1") - assert handler(api, app_model=app_model, workflow_run_id="run") == run + result = handler(api, app_model=app_model, workflow_run_id="run") + assert result["id"] == "run" + assert result["workflow_id"] == "wf-1" + assert result["status"] == "succeeded" class TestWorkflowRunApi: @@ -490,7 +510,7 @@ class TestWorkflowAppLogApi: monkeypatch.setattr( WorkflowAppService, "get_paginate_workflow_app_logs", - lambda *_args, **_kwargs: {"items": [], "total": 0}, + lambda *_args, **_kwargs: {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []}, ) api = WorkflowAppLogApi() @@ -500,7 +520,7 @@ class TestWorkflowAppLogApi: with app.test_request_context("/workflows/logs", method="GET"): response = handler(api, app_model=app_model) - assert response == {"items": [], "total": 0} + assert response == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} # ============================================================================= @@ -527,9 +547,8 @@ def mock_workflow_app(): class TestWorkflowRunDetailApiGet: """Test suite for WorkflowRunDetailApi.get() endpoint. - ``get`` is wrapped by ``@validate_app_token`` (preserves ``__wrapped__``) - and ``@service_api_ns.marshal_with``. We call the unwrapped method - directly; ``marshal_with`` is a no-op when calling directly. + ``get`` is wrapped by ``@validate_app_token`` (preserves ``__wrapped__``), + and we call the unwrapped method directly in tests. """ @patch("controllers.service_api.app.workflow.DifyAPIRepositoryFactory") @@ -542,9 +561,7 @@ class TestWorkflowRunDetailApiGet: mock_workflow_app, ): """Test successful workflow run detail retrieval.""" - mock_run = Mock() - mock_run.id = "run-1" - mock_run.status = "succeeded" + mock_run = _make_mock_workflow_run(run_id="run-1") mock_repo = Mock() mock_repo.get_workflow_run_by_id.return_value = mock_run mock_repo_factory.create_api_workflow_run_repository.return_value = mock_repo @@ -558,7 +575,8 @@ class TestWorkflowRunDetailApiGet: api = WorkflowRunDetailApi() result = _unwrap(api.get)(api, app_model=mock_workflow_app, workflow_run_id=mock_run.id) - assert result == mock_run + assert result["id"] == mock_run.id + assert result["status"] == "succeeded" @patch("controllers.service_api.app.workflow.db") def test_get_workflow_run_wrong_app_mode(self, mock_db, app): @@ -622,8 +640,7 @@ class TestWorkflowTaskStopApiPost: class TestWorkflowAppLogApiGet: """Test suite for WorkflowAppLogApi.get() endpoint. - ``get`` is wrapped by ``@validate_app_token`` and - ``@service_api_ns.marshal_with``. + ``get`` is wrapped by ``@validate_app_token``. """ @patch("controllers.service_api.app.workflow.WorkflowAppService") @@ -637,6 +654,10 @@ class TestWorkflowAppLogApiGet: ): """Test successful workflow log retrieval.""" mock_pagination = Mock() + mock_pagination.page = 1 + mock_pagination.limit = 20 + mock_pagination.total = 0 + mock_pagination.has_more = False mock_pagination.data = [] mock_svc_instance = Mock() mock_svc_instance.get_paginate_workflow_app_logs.return_value = mock_pagination @@ -661,4 +682,4 @@ class TestWorkflowAppLogApiGet: api = WorkflowAppLogApi() result = _unwrap(api.get)(api, app_model=mock_workflow_app) - assert result == mock_pagination + assert result == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} diff --git a/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py b/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py index 4b8e3a738c..eda270258d 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py @@ -1,8 +1,7 @@ from types import SimpleNamespace -from graphon.enums import WorkflowExecutionStatus - from controllers.service_api.app.workflow import WorkflowRunOutputsField, WorkflowRunStatusField +from graphon.enums import WorkflowExecutionStatus def test_workflow_run_status_field_with_enum() -> None: diff --git a/api/tests/unit_tests/controllers/service_api/test_site.py b/api/tests/unit_tests/controllers/service_api/test_site.py deleted file mode 100644 index c0b40d070a..0000000000 --- a/api/tests/unit_tests/controllers/service_api/test_site.py +++ /dev/null @@ -1,270 +0,0 @@ -""" -Unit tests for Service API Site controller -""" - -import uuid -from unittest.mock import Mock, patch - -import pytest -from werkzeug.exceptions import Forbidden - -from controllers.service_api.app.site import AppSiteApi -from models.account import TenantStatus -from models.model import App, Site -from tests.unit_tests.conftest import setup_mock_tenant_account_query - - -class TestAppSiteApi: - """Test suite for AppSiteApi""" - - @pytest.fixture - def mock_app_model(self): - """Create a mock App model with tenant.""" - app = Mock(spec=App) - app.id = str(uuid.uuid4()) - app.tenant_id = str(uuid.uuid4()) - app.status = "normal" - app.enable_api = True - - mock_tenant = Mock() - mock_tenant.id = app.tenant_id - mock_tenant.status = TenantStatus.NORMAL - app.tenant = mock_tenant - - return app - - @pytest.fixture - def mock_site(self): - """Create a mock Site model.""" - site = Mock(spec=Site) - site.id = str(uuid.uuid4()) - site.app_id = str(uuid.uuid4()) - site.title = "Test Site" - site.icon = "icon-url" - site.icon_background = "#ffffff" - site.description = "Site description" - site.copyright = "Copyright 2024" - site.privacy_policy = "Privacy policy text" - site.custom_disclaimer = "Custom disclaimer" - site.default_language = "en-US" - site.prompt_public = True - site.show_workflow_steps = True - site.use_icon_as_answer_icon = False - site.chat_color_theme = "light" - site.chat_color_theme_inverted = False - site.icon_type = "image" - site.created_at = "2024-01-01T00:00:00" - site.updated_at = "2024-01-01T00:00:00" - return site - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_success( - self, - mock_wraps_db, - mock_validate_token, - mock_current_app, - mock_db, - mock_user_logged_in, - app, - mock_app_model, - mock_site, - ): - """Test successful retrieval of site configuration.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - mock_app_model.tenant = mock_tenant - - # Mock wraps.db for authentication - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - # Mock site.db for site query - mock_db.session.scalar.return_value = mock_site - - # Act - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - response = api.get() - - # Assert - assert response["title"] == "Test Site" - assert response["icon"] == "icon-url" - assert response["description"] == "Site description" - mock_db.session.scalar.assert_called_once() - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_not_found( - self, - mock_wraps_db, - mock_validate_token, - mock_current_app, - mock_db, - mock_user_logged_in, - app, - mock_app_model, - ): - """Test that Forbidden is raised when site is not found.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - mock_app_model.tenant = mock_tenant - - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - # Mock site query to return None - mock_db.session.scalar.return_value = None - - # Act & Assert - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - with pytest.raises(Forbidden): - api.get() - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_tenant_archived( - self, - mock_wraps_db, - mock_validate_token, - mock_current_app, - mock_db, - mock_user_logged_in, - app, - mock_app_model, - mock_site, - ): - """Test that Forbidden is raised when tenant is archived.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - # Mock site query - mock_db.session.scalar.return_value = mock_site - - # Set tenant status to archived AFTER authentication - mock_app_model.tenant.status = TenantStatus.ARCHIVE - - # Act & Assert - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - with pytest.raises(Forbidden): - api.get() - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_queries_by_app_id( - self, mock_wraps_db, mock_validate_token, mock_current_app, mock_db, mock_user_logged_in, app, mock_app_model - ): - """Test that site is queried using the app model's id.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - mock_app_model.tenant = mock_tenant - - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - mock_site = Mock(spec=Site) - mock_site.id = str(uuid.uuid4()) - mock_site.app_id = mock_app_model.id - mock_site.title = "Test Site" - mock_site.icon = "icon-url" - mock_site.icon_background = "#ffffff" - mock_site.description = "Site description" - mock_site.copyright = "Copyright 2024" - mock_site.privacy_policy = "Privacy policy text" - mock_site.custom_disclaimer = "Custom disclaimer" - mock_site.default_language = "en-US" - mock_site.prompt_public = True - mock_site.show_workflow_steps = True - mock_site.use_icon_as_answer_icon = False - mock_site.chat_color_theme = "light" - mock_site.chat_color_theme_inverted = False - mock_site.icon_type = "image" - mock_site.created_at = "2024-01-01T00:00:00" - mock_site.updated_at = "2024-01-01T00:00:00" - mock_db.session.scalar.return_value = mock_site - - # Act - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - api.get() - - # Assert - # The query was executed successfully (site returned), which validates the correct query was made - mock_db.session.scalar.assert_called_once() diff --git a/api/tests/unit_tests/controllers/web/test_audio.py b/api/tests/unit_tests/controllers/web/test_audio.py index cbfc8fa613..a6ca441801 100644 --- a/api/tests/unit_tests/controllers/web/test_audio.py +++ b/api/tests/unit_tests/controllers/web/test_audio.py @@ -8,7 +8,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask -from graphon.model_runtime.errors.invoke import InvokeError from controllers.web.audio import AudioApi, TextApi from controllers.web.error import ( @@ -22,6 +21,7 @@ from controllers.web.error import ( UnsupportedAudioTypeError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.errors.audio import ( AudioTooLargeServiceError, NoAudioUploadedServiceError, diff --git a/api/tests/unit_tests/controllers/web/test_completion.py b/api/tests/unit_tests/controllers/web/test_completion.py index 49039d03fe..4f8d848637 100644 --- a/api/tests/unit_tests/controllers/web/test_completion.py +++ b/api/tests/unit_tests/controllers/web/test_completion.py @@ -7,7 +7,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask -from graphon.model_runtime.errors.invoke import InvokeError from controllers.web.completion import ChatApi, ChatStopApi, CompletionApi, CompletionStopApi from controllers.web.error import ( @@ -19,6 +18,7 @@ from controllers.web.error import ( ProviderQuotaExceededError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError def _completion_app() -> SimpleNamespace: diff --git a/api/tests/unit_tests/controllers/web/test_message_endpoints.py b/api/tests/unit_tests/controllers/web/test_message_endpoints.py index 89ab93d8d4..da88b109a8 100644 --- a/api/tests/unit_tests/controllers/web/test_message_endpoints.py +++ b/api/tests/unit_tests/controllers/web/test_message_endpoints.py @@ -129,12 +129,6 @@ class TestMessageSuggestedQuestionApi: with pytest.raises(NotChatAppError): MessageSuggestedQuestionApi().get(_completion_app(), _end_user(), msg_id) - def test_wrong_mode_raises(self, app: Flask) -> None: - msg_id = uuid4() - with app.test_request_context(f"/messages/{msg_id}/suggested-questions"): - with pytest.raises(NotChatAppError): - MessageSuggestedQuestionApi().get(_completion_app(), _end_user(), msg_id) - @patch("controllers.web.message.MessageService.get_suggested_questions_after_answer") def test_happy_path(self, mock_suggest: MagicMock, app: Flask) -> None: msg_id = uuid4() diff --git a/api/tests/unit_tests/controllers/web/test_pydantic_models.py b/api/tests/unit_tests/controllers/web/test_pydantic_models.py index dcf8133712..bceb65b89f 100644 --- a/api/tests/unit_tests/controllers/web/test_pydantic_models.py +++ b/api/tests/unit_tests/controllers/web/test_pydantic_models.py @@ -198,7 +198,7 @@ class TestMessageListQuery: assert q.limit == 20 def test_invalid_conversation_id(self) -> None: - with pytest.raises(ValidationError, match="not a valid uuid"): + with pytest.raises(ValidationError, match="must be a valid UUID"): MessageListQuery(conversation_id="bad") def test_limit_bounds(self) -> None: @@ -216,7 +216,7 @@ class TestMessageListQuery: def test_invalid_first_id(self) -> None: cid = str(uuid4()) - with pytest.raises(ValidationError, match="not a valid uuid"): + with pytest.raises(ValidationError, match="must be a valid UUID"): MessageListQuery(conversation_id=cid, first_id="invalid") diff --git a/api/tests/unit_tests/controllers/web/test_web_login.py b/api/tests/unit_tests/controllers/web/test_web_login.py index 0661c02578..a01587d64a 100644 --- a/api/tests/unit_tests/controllers/web/test_web_login.py +++ b/api/tests/unit_tests/controllers/web/test_web_login.py @@ -4,9 +4,12 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask +from jwt import InvalidTokenError +from werkzeug.exceptions import Unauthorized import services.errors.account from controllers.web.login import EmailCodeLoginApi, EmailCodeLoginSendEmailApi, LoginApi, LoginStatusApi, LogoutApi +from services.entities.auth_entities import LoginFailureReason def encode_code(code: str) -> str: @@ -115,13 +118,18 @@ class TestLoginApi: def test_login_banned_account(self, mock_auth: MagicMock, app: Flask) -> None: from controllers.console.error import AccountBannedError - with app.test_request_context( - "/web/login", - method="POST", - json={"email": "user@example.com", "password": base64.b64encode(b"Valid1234").decode()}, - ): - with pytest.raises(AccountBannedError): - LoginApi().post() + with patch("controllers.web.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/web/login", + method="POST", + json={"email": "user@example.com", "password": base64.b64encode(b"Valid1234").decode()}, + ): + with pytest.raises(AccountBannedError): + LoginApi().post() + + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "user@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.ACCOUNT_BANNED @patch( "controllers.web.login.WebAppAuthService.authenticate", @@ -130,13 +138,87 @@ class TestLoginApi: def test_login_wrong_password(self, mock_auth: MagicMock, app: Flask) -> None: from controllers.console.auth.error import AuthenticationFailedError - with app.test_request_context( - "/web/login", - method="POST", - json={"email": "user@example.com", "password": base64.b64encode(b"Valid1234").decode()}, - ): - with pytest.raises(AuthenticationFailedError): - LoginApi().post() + with patch("controllers.web.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/web/login", + method="POST", + json={"email": "user@example.com", "password": base64.b64encode(b"Valid1234").decode()}, + ): + with pytest.raises(AuthenticationFailedError): + LoginApi().post() + + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "user@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.INVALID_CREDENTIALS + + @patch( + "controllers.web.login.WebAppAuthService.authenticate", + side_effect=services.errors.account.AccountNotFoundError(), + ) + def test_login_account_not_found(self, mock_auth: MagicMock, app: Flask) -> None: + from controllers.console.auth.error import AuthenticationFailedError + + with patch("controllers.web.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/web/login", + method="POST", + json={"email": "missing@example.com", "password": base64.b64encode(b"Valid1234").decode()}, + ): + with pytest.raises(AuthenticationFailedError): + LoginApi().post() + + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "missing@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.ACCOUNT_NOT_FOUND + + @patch("controllers.web.login.WebAppAuthService.get_email_code_login_data", return_value=None) + def test_email_code_login_logs_invalid_token(self, mock_get_token_data: MagicMock, app: Flask) -> None: + with patch("controllers.web.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/web/email-code-login/validity", + method="POST", + json={"email": "user@example.com", "code": encode_code("123456"), "token": "token-123"}, + ): + with pytest.raises(InvalidTokenError): + EmailCodeLoginApi().post() + + mock_get_token_data.assert_called_once_with("token-123") + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "user@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.INVALID_EMAIL_CODE_TOKEN + + @patch("controllers.web.login.WebAppAuthService.revoke_email_code_login_token") + @patch( + "controllers.web.login.WebAppAuthService.get_user_through_email", + side_effect=Unauthorized("Account is banned."), + ) + @patch( + "controllers.web.login.WebAppAuthService.get_email_code_login_data", + return_value={"email": "User@Example.com", "code": "123456"}, + ) + def test_email_code_login_logs_banned_account( + self, + mock_get_token_data: MagicMock, + mock_get_user: MagicMock, + mock_revoke_token: MagicMock, + app: Flask, + ) -> None: + from controllers.console.error import AccountBannedError + + with patch("controllers.web.login.logger.warning") as mock_log_warning: + with app.test_request_context( + "/web/email-code-login/validity", + method="POST", + json={"email": "User@Example.com", "code": encode_code("123456"), "token": "token-123"}, + ): + with pytest.raises(AccountBannedError): + EmailCodeLoginApi().post() + + mock_get_token_data.assert_called_once_with("token-123") + mock_revoke_token.assert_called_once_with("token-123") + assert mock_log_warning.call_count == 1 + assert mock_log_warning.call_args.args[1] == "user@example.com" + assert mock_log_warning.call_args.args[2] == LoginFailureReason.ACCOUNT_BANNED class TestLoginStatusApi: diff --git a/api/tests/unit_tests/core/agent/test_cot_agent_runner.py b/api/tests/unit_tests/core/agent/test_cot_agent_runner.py index bc7aea0ef9..cde8820e00 100644 --- a/api/tests/unit_tests/core/agent/test_cot_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_cot_agent_runner.py @@ -2,11 +2,11 @@ import json from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.llm_entities import LLMUsage from core.agent.cot_agent_runner import CotAgentRunner from core.agent.entities import AgentScratchpadUnit from core.agent.errors import AgentMaxIterationError +from graphon.model_runtime.entities.llm_entities import LLMUsage class DummyRunner(CotAgentRunner): diff --git a/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py b/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py index 97206019b9..ea8cc8aa86 100644 --- a/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py @@ -1,9 +1,9 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from core.agent.cot_chat_agent_runner import CotChatAgentRunner +from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from tests.unit_tests.core.agent.conftest import ( DummyAgentConfig, DummyAppConfig, diff --git a/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py b/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py index defc8b4b64..2f5873d865 100644 --- a/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py @@ -1,6 +1,8 @@ import json import pytest + +from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, ImagePromptMessageContent, @@ -8,8 +10,6 @@ from graphon.model_runtime.entities.message_entities import ( UserPromptMessage, ) -from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner - # ----------------------------- # Fixtures # ----------------------------- diff --git a/api/tests/unit_tests/core/agent/test_fc_agent_runner.py b/api/tests/unit_tests/core/agent/test_fc_agent_runner.py index a44a0650eb..17ab5babcb 100644 --- a/api/tests/unit_tests/core/agent/test_fc_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_fc_agent_runner.py @@ -3,6 +3,11 @@ from typing import Any from unittest.mock import MagicMock import pytest + +from core.agent.errors import AgentMaxIterationError +from core.agent.fc_agent_runner import FunctionCallAgentRunner +from core.app.apps.base_app_queue_manager import PublishFrom +from core.app.entities.queue_entities import QueueMessageFileEvent from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.model_runtime.entities.message_entities import ( DocumentPromptMessageContent, @@ -11,11 +16,6 @@ from graphon.model_runtime.entities.message_entities import ( UserPromptMessage, ) -from core.agent.errors import AgentMaxIterationError -from core.agent.fc_agent_runner import FunctionCallAgentRunner -from core.app.apps.base_app_queue_manager import PublishFrom -from core.app.entities.queue_entities import QueueMessageFileEvent - # ============================== # Dummy Helper Classes # ============================== diff --git a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py index 5ee66da94a..186b4a501d 100644 --- a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py +++ b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py @@ -2,8 +2,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelPropertyKey from core.app.app_config.easy_ui_based_app.model_config.converter import ModelConfigConverter from core.entities.model_entities import ModelStatus @@ -12,6 +10,8 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelPropertyKey class TestModelConfigConverter: diff --git a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py index e2f3c16335..d9fe7004ff 100644 --- a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py +++ b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py @@ -1,9 +1,9 @@ import pytest -from graphon.variables.input_entities import VariableEntityType from core.app.app_config.easy_ui_based_app.variables.manager import ( BasicVariablesConfigManager, ) +from graphon.variables.input_entities import VariableEntityType class TestBasicVariablesConfigManagerConvert: diff --git a/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py b/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py index 8bde9c1f97..11b53dd0f9 100644 --- a/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py +++ b/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py @@ -1,8 +1,7 @@ +from core.app.app_config.features.file_upload.manager import FileUploadConfigManager from graphon.file import FileTransferMethod, FileUploadConfig, ImageConfig from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent -from core.app.app_config.features.file_upload.manager import FileUploadConfigManager - def test_convert_with_vision(): config = { diff --git a/api/tests/unit_tests/core/app/app_config/test_entities.py b/api/tests/unit_tests/core/app/app_config/test_entities.py index 000f83cd5a..f2bc3076da 100644 --- a/api/tests/unit_tests/core/app/app_config/test_entities.py +++ b/api/tests/unit_tests/core/app/app_config/test_entities.py @@ -1,10 +1,10 @@ import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.app.app_config.entities import ( DatasetRetrieveConfigEntity, PromptTemplateEntity, ) +from graphon.variables.input_entities import VariableEntity, VariableEntityType class TestAppConfigEntities: diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py index 061719d15a..45d4b0e321 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py @@ -3,12 +3,12 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 -from graphon.variables import SegmentType from sqlalchemy.orm import Session from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom from factories import variable_factory +from graphon.variables import SegmentType from models import ConversationVariable, Workflow MINIMAL_GRAPH = { @@ -134,6 +134,7 @@ class TestAdvancedChatAppRunnerConversationVariables: # Patch the necessary components with ( + patch("core.app.apps.advanced_chat.app_runner.sessionmaker") as mock_sessionmaker, patch("core.app.apps.advanced_chat.app_runner.Session") as mock_session_class, patch("core.app.apps.advanced_chat.app_runner.select") as mock_select, patch("core.app.apps.advanced_chat.app_runner.db") as mock_db, @@ -150,7 +151,9 @@ class TestAdvancedChatAppRunnerConversationVariables: patch("core.app.apps.advanced_chat.app_runner.RedisChannel") as mock_redis_channel_class, ): # Setup mocks - mock_session_class.return_value.__enter__.return_value = mock_session + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + mock_sessionmaker.return_value.begin.return_value.__exit__ = MagicMock(return_value=False) + mock_session_class.return_value.__enter__.return_value = MagicMock() mock_db.session.query.return_value.where.return_value.first.return_value = MagicMock() # App exists mock_db.engine = MagicMock() @@ -177,7 +180,6 @@ class TestAdvancedChatAppRunnerConversationVariables: # Note: Since we're mocking ConversationVariable.from_variable, # we can't directly check the id, but we can verify add_all was called assert mock_session.add_all.called, "Session add_all should have been called" - assert mock_session.commit.called, "Session commit should have been called" def test_no_variables_creates_all(self): """Test that all conversation variables are created when none exist in DB.""" @@ -278,6 +280,7 @@ class TestAdvancedChatAppRunnerConversationVariables: # Patch the necessary components with ( + patch("core.app.apps.advanced_chat.app_runner.sessionmaker") as mock_sessionmaker, patch("core.app.apps.advanced_chat.app_runner.Session") as mock_session_class, patch("core.app.apps.advanced_chat.app_runner.select") as mock_select, patch("core.app.apps.advanced_chat.app_runner.db") as mock_db, @@ -295,7 +298,9 @@ class TestAdvancedChatAppRunnerConversationVariables: patch("core.app.apps.advanced_chat.app_runner.RedisChannel") as mock_redis_channel_class, ): # Setup mocks - mock_session_class.return_value.__enter__.return_value = mock_session + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + mock_sessionmaker.return_value.begin.return_value.__exit__ = MagicMock(return_value=False) + mock_session_class.return_value.__enter__.return_value = MagicMock() mock_db.session.query.return_value.where.return_value.first.return_value = MagicMock() # App exists mock_db.engine = MagicMock() @@ -326,7 +331,6 @@ class TestAdvancedChatAppRunnerConversationVariables: # Verify that all variables were created assert len(added_items) == 2, "Should have added both variables" assert mock_session.add_all.called, "Session add_all should have been called" - assert mock_session.commit.called, "Session commit should have been called" def test_all_variables_exist_no_changes(self): """Test that no changes are made when all variables already exist in DB.""" @@ -429,6 +433,7 @@ class TestAdvancedChatAppRunnerConversationVariables: # Patch the necessary components with ( + patch("core.app.apps.advanced_chat.app_runner.sessionmaker") as mock_sessionmaker, patch("core.app.apps.advanced_chat.app_runner.Session") as mock_session_class, patch("core.app.apps.advanced_chat.app_runner.select") as mock_select, patch("core.app.apps.advanced_chat.app_runner.db") as mock_db, @@ -445,7 +450,9 @@ class TestAdvancedChatAppRunnerConversationVariables: patch("core.app.apps.advanced_chat.app_runner.RedisChannel") as mock_redis_channel_class, ): # Setup mocks - mock_session_class.return_value.__enter__.return_value = mock_session + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + mock_sessionmaker.return_value.begin.return_value.__exit__ = MagicMock(return_value=False) + mock_session_class.return_value.__enter__.return_value = MagicMock() mock_db.session.query.return_value.where.return_value.first.return_value = MagicMock() # App exists mock_db.engine = MagicMock() @@ -465,4 +472,3 @@ class TestAdvancedChatAppRunnerConversationVariables: # Verify that no variables were added assert not mock_session.add_all.called, "Session add_all should not have been called" - assert mock_session.commit.called, "Session commit should still be called" diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_input_moderation.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_input_moderation.py index 079df0b4e6..5d8faee897 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_input_moderation.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_input_moderation.py @@ -93,6 +93,16 @@ def _patch_common_run_deps(runner: AdvancedChatAppRunner): scalar=lambda *a, **k: MagicMock(), ), ), + sessionmaker=MagicMock( + return_value=MagicMock( + begin=MagicMock( + return_value=MagicMock( + __enter__=lambda s: MagicMock(scalars=MagicMock(return_value=MagicMock(all=lambda: []))), + __exit__=lambda *a, **k: False, + ), + ), + ), + ), select=MagicMock(), db=MagicMock(engine=MagicMock()), RedisChannel=MagicMock(), diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py index e9fdeefee4..f2df35d7d0 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py @@ -1,7 +1,5 @@ from collections.abc import Generator -from graphon.enums import WorkflowNodeExecutionStatus - from core.app.apps.advanced_chat.generate_response_converter import AdvancedChatAppGenerateResponseConverter from core.app.entities.task_entities import ( ChatbotAppBlockingResponse, @@ -12,6 +10,7 @@ from core.app.entities.task_entities import ( NodeStartStreamResponse, PingStreamResponse, ) +from graphon.enums import WorkflowNodeExecutionStatus class TestAdvancedChatGenerateResponseConverter: diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py index a6d8598955..99a386cd45 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py @@ -6,8 +6,6 @@ from types import SimpleNamespace from unittest import mock import pytest -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus from core.app.apps.advanced_chat import generate_task_pipeline as pipeline_module from core.app.entities.app_invoke_entities import InvokeFrom @@ -19,6 +17,8 @@ from core.app.entities.queue_entities import ( QueueWorkflowSucceededEvent, ) from core.app.entities.task_entities import StreamEvent +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus from models.enums import MessageStatus from models.execution_extra_content import HumanInputContent from models.model import AppMode, EndUser diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py index 82b2e51019..29fd63c063 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py @@ -4,8 +4,6 @@ from contextlib import contextmanager from types import SimpleNamespace import pytest -from graphon.enums import BuiltinNodeTypes -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.app_config.entities import AppAdditionalFeatures, WorkflowUIBasedAppConfig from core.app.apps.advanced_chat.generate_task_pipeline import ( @@ -49,6 +47,8 @@ from core.app.entities.task_entities import ( ) from core.base.tts.app_generator_tts_publisher import AudioTrunk from core.workflow.system_variables import build_system_variables +from graphon.enums import BuiltinNodeTypes +from graphon.runtime import GraphRuntimeState, VariablePool from libs.datetime_utils import naive_utc_now from models.enums import MessageStatus from models.model import AppMode, EndUser diff --git a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py index 7dc4358150..80f7f94b1a 100644 --- a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py +++ b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py @@ -1,12 +1,12 @@ import contextlib import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from core.app.apps.agent_chat.app_generator import AgentChatAppGenerator from core.app.apps.exc import GenerateTaskStoppedError from core.app.entities.app_invoke_entities import InvokeFrom +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError class DummyAccount: diff --git a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py index 08250bc3b6..4567b35480 100644 --- a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py +++ b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py @@ -1,10 +1,10 @@ import pytest -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey from core.agent.entities import AgentEntity from core.app.apps.agent_chat.app_runner import AgentChatAppRunner from core.moderation.base import ModerationError +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey @pytest.fixture diff --git a/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py b/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py index 68bcffb0e8..8f3c41701b 100644 --- a/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py +++ b/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py @@ -2,7 +2,6 @@ from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from core.app.apps.chat.app_generator import ChatAppGenerator from core.app.apps.chat.app_runner import ChatAppRunner @@ -10,6 +9,7 @@ from core.app.apps.exc import GenerateTaskStoppedError from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueAnnotationReplyEvent from core.moderation.base import ModerationError +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py b/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py index f255d2c7df..b3ea1a464f 100644 --- a/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py +++ b/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py @@ -4,13 +4,13 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest -from graphon.file import FileTransferMethod, FileType -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from core.app.apps.base_app_queue_manager import PublishFrom from core.app.apps.base_app_runner import AppRunner from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueMessageFileEvent +from graphon.file import FileTransferMethod, FileType +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from models.enums import CreatorUserRole diff --git a/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py b/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py index 4a94a2b4f1..201923e0e4 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py +++ b/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py @@ -1,11 +1,11 @@ from types import SimpleNamespace import pytest -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.apps.common.graph_runtime_state_support import GraphRuntimeStateSupport from core.workflow.system_variables import build_system_variables from core.workflow.variable_pool_initializer import add_variables_to_pool +from graphon.runtime import GraphRuntimeState, VariablePool def _make_state(workflow_run_id: str | None) -> GraphRuntimeState: diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py index 328cd12f12..3ab63aed25 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py @@ -1,10 +1,9 @@ from collections.abc import Mapping, Sequence +from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod, FileType from graphon.variables.segments import ArrayFileSegment, FileSegment -from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter - class TestWorkflowResponseConverterFetchFilesFromVariableValue: """Test class for WorkflowResponseConverter._fetch_files_from_variable_value method""" diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py index bc11bf4174..1bef6f69cd 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py @@ -1,13 +1,12 @@ from datetime import UTC, datetime from types import SimpleNamespace -from graphon.entities import WorkflowStartReason -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueHumanInputFormFilledEvent, QueueHumanInputFormTimeoutEvent from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.runtime import GraphRuntimeState, VariablePool def _build_converter(): diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py index c9e146ff12..936ac37e55 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py @@ -1,11 +1,10 @@ from types import SimpleNamespace -from graphon.entities import WorkflowStartReason -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter from core.app.entities.app_invoke_entities import InvokeFrom from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.runtime import GraphRuntimeState, VariablePool def _build_converter() -> WorkflowResponseConverter: diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py index 0fde7565d2..b3c0eb74fa 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py @@ -10,8 +10,6 @@ from typing import Any from unittest.mock import Mock import pytest -from graphon.entities import WorkflowStartReason -from graphon.enums import BuiltinNodeTypes from core.app.app_config.entities import WorkflowUIBasedAppConfig from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter @@ -27,6 +25,8 @@ from core.app.entities.queue_entities import ( QueueNodeSucceededEvent, ) from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.enums import BuiltinNodeTypes from libs.datetime_utils import naive_utc_now from models import Account from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py b/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py index 619d66085a..aa2085177e 100644 --- a/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py +++ b/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py @@ -2,11 +2,11 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent import core.app.apps.completion.app_runner as module from core.app.apps.completion.app_runner import CompletionAppRunner from core.moderation.base import ModerationError +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent @pytest.fixture diff --git a/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py b/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py index 96af9fbdee..f2e35f9900 100644 --- a/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py +++ b/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py @@ -3,13 +3,13 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError import core.app.apps.completion.app_generator as module from core.app.apps.completion.app_generator import CompletionAppGenerator from core.app.apps.exc import GenerateTaskStoppedError from core.app.entities.app_invoke_entities import InvokeFrom +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from services.errors.app import MoreLikeThisDisabledError from services.errors.message import MessageNotExistsError diff --git a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py index 6cdcab29ab..cfe797aa76 100644 --- a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py @@ -1,7 +1,5 @@ from collections.abc import Generator -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus - from core.app.apps.pipeline.generate_response_converter import WorkflowAppGenerateResponseConverter from core.app.entities.task_entities import ( AppStreamResponse, @@ -12,6 +10,7 @@ from core.app.entities.task_entities import ( WorkflowAppBlockingResponse, WorkflowAppStreamResponse, ) +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus def test_convert_blocking_full_and_simple_response(): diff --git a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py index 4fe82efcb3..9db83f5531 100644 --- a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py +++ b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py @@ -1,5 +1,4 @@ import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult import core.app.apps.pipeline.pipeline_queue_manager as module from core.app.apps.base_app_queue_manager import PublishFrom @@ -14,6 +13,7 @@ from core.app.entities.queue_entities import ( QueueWorkflowPartialSuccessEvent, QueueWorkflowSucceededEvent, ) +from graphon.model_runtime.entities.llm_entities import LLMResult def test_publish_sets_stop_listen_and_raises_on_stopped(mocker): diff --git a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py index c8ae288e6f..618c8fd76f 100644 --- a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py +++ b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py @@ -22,11 +22,11 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.graph_events import GraphRunFailedEvent import core.app.apps.pipeline.pipeline_runner as module from core.app.apps.pipeline.pipeline_runner import PipelineRunner from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from graphon.graph_events import GraphRunFailedEvent def _build_app_generate_entity() -> SimpleNamespace: diff --git a/api/tests/unit_tests/core/app/apps/test_base_app_generator.py b/api/tests/unit_tests/core/app/apps/test_base_app_generator.py index 6167be3bbd..b0f8b423e1 100644 --- a/api/tests/unit_tests/core/app/apps/test_base_app_generator.py +++ b/api/tests/unit_tests/core/app/apps/test_base_app_generator.py @@ -1,7 +1,7 @@ import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.app.apps.base_app_generator import BaseAppGenerator +from graphon.variables.input_entities import VariableEntity, VariableEntityType def test_validate_inputs_with_zero(): @@ -476,9 +476,8 @@ class TestBaseAppGeneratorExtras: assert converted[1] == "event: ping\n\n" def test_get_draft_var_saver_factory_debugger(self): - from graphon.enums import BuiltinNodeTypes - from core.app.entities.app_invoke_entities import InvokeFrom + from graphon.enums import BuiltinNodeTypes from models import Account base_app_generator = BaseAppGenerator() diff --git a/api/tests/unit_tests/core/app/apps/test_base_app_runner.py b/api/tests/unit_tests/core/app/apps/test_base_app_runner.py index 1dee7fdab6..17de39ca99 100644 --- a/api/tests/unit_tests/core/app/apps/test_base_app_runner.py +++ b/api/tests/unit_tests/core/app/apps/test_base_app_runner.py @@ -4,15 +4,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - ImagePromptMessageContent, - PromptMessageRole, - TextPromptMessageContent, -) -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.errors.invoke import InvokeBadRequestError from core.app.app_config.entities import ( AdvancedChatMessageEntity, @@ -23,6 +14,15 @@ from core.app.app_config.entities import ( from core.app.apps.base_app_runner import AppRunner from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueAgentMessageEvent, QueueLLMChunkEvent, QueueMessageEndEvent +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + ImagePromptMessageContent, + PromptMessageRole, + TextPromptMessageContent, +) +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.errors.invoke import InvokeBadRequestError from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/apps/test_pause_resume.py b/api/tests/unit_tests/core/app/apps/test_pause_resume.py index a126bc85f7..a04a7b7576 100644 --- a/api/tests/unit_tests/core/app/apps/test_pause_resume.py +++ b/api/tests/unit_tests/core/app/apps/test_pause_resume.py @@ -4,6 +4,11 @@ from types import ModuleType, SimpleNamespace from typing import Any import graphon.nodes.human_input.entities # noqa: F401 +from core.app.apps.advanced_chat import app_generator as adv_app_gen_module +from core.app.apps.workflow import app_generator as wf_app_gen_module +from core.app.entities.app_invoke_entities import InvokeFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables from graphon.entities import WorkflowStartReason from graphon.entities.base_node_data import BaseNodeData, RetryConfig from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter @@ -25,12 +30,6 @@ from graphon.nodes.base.node import Node from graphon.nodes.end.entities import EndNodeData from graphon.nodes.start.entities import StartNodeData from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.apps.advanced_chat import app_generator as adv_app_gen_module -from core.app.apps.workflow import app_generator as wf_app_gen_module -from core.app.entities.app_invoke_entities import InvokeFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params if "core.ops.ops_trace_manager" not in sys.modules: diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py index de5bca161c..58c7bfa4bc 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py @@ -4,6 +4,23 @@ from datetime import UTC, datetime from types import SimpleNamespace import pytest + +from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.app.entities.queue_entities import ( + QueueAgentLogEvent, + QueueIterationCompletedEvent, + QueueLoopCompletedEvent, + QueueNodeExceptionEvent, + QueueNodeFailedEvent, + QueueNodeRetryEvent, + QueueNodeSucceededEvent, + QueueTextChunkEvent, + QueueWorkflowPausedEvent, + QueueWorkflowStartedEvent, + QueueWorkflowSucceededEvent, +) +from core.workflow.system_variables import default_system_variables from graphon.entities.pause_reason import HumanInputRequired from graphon.enums import BuiltinNodeTypes from graphon.graph_events import ( @@ -24,23 +41,6 @@ from graphon.node_events import NodeRunResult from graphon.runtime import GraphRuntimeState, VariablePool from graphon.variables.variables import StringVariable -from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.app.entities.queue_entities import ( - QueueAgentLogEvent, - QueueIterationCompletedEvent, - QueueLoopCompletedEvent, - QueueNodeExceptionEvent, - QueueNodeFailedEvent, - QueueNodeRetryEvent, - QueueNodeSucceededEvent, - QueueTextChunkEvent, - QueueWorkflowPausedEvent, - QueueWorkflowStartedEvent, - QueueWorkflowSucceededEvent, -) -from core.workflow.system_variables import default_system_variables - class TestWorkflowBasedAppRunner: def test_resolve_user_from(self): diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py index aa789d9ff3..10fb2271f4 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py @@ -1,11 +1,11 @@ from unittest.mock import MagicMock import pytest -from graphon.entities.pause_reason import HumanInputRequired -from graphon.graph_events import GraphRunPausedEvent from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner from core.app.entities.queue_entities import QueueWorkflowPausedEvent +from graphon.entities.pause_reason import HumanInputRequired +from graphon.graph_events import GraphRunPausedEvent class _DummyQueueManager: diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py index 9e30faecf2..620a153204 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py @@ -4,14 +4,14 @@ from typing import Any from unittest.mock import MagicMock, patch import pytest -from graphon.entities.graph_config import NodeConfigDictAdapter -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.apps.workflow.app_runner import WorkflowAppRunner from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.workflow.system_variables import default_system_variables +from graphon.entities.graph_config import NodeConfigDictAdapter +from graphon.runtime import GraphRuntimeState, VariablePool from models.workflow import Workflow diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py b/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py index 8a717e1dcc..a3ab379b66 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py @@ -3,11 +3,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.entities import WorkflowStartReason -from graphon.entities.pause_reason import HumanInputRequired -from graphon.graph_events import GraphRunPausedEvent -from graphon.nodes.human_input.entities import FormInput, UserAction -from graphon.nodes.human_input.enums import FormInputType from core.app.apps.common import workflow_response_converter from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter @@ -16,6 +11,11 @@ from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueWorkflowPausedEvent from core.app.entities.task_entities import HumanInputRequiredResponse, WorkflowPauseStreamResponse from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.entities.pause_reason import HumanInputRequired +from graphon.graph_events import GraphRunPausedEvent +from graphon.nodes.human_input.entities import FormInput, UserAction +from graphon.nodes.human_input.enums import FormInputType from models.account import Account from models.human_input import RecipientType diff --git a/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py b/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py index b768e813bd..7dd7ffd727 100644 --- a/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py @@ -1,7 +1,5 @@ from collections.abc import Generator -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus - from core.app.apps.workflow.generate_response_converter import WorkflowAppGenerateResponseConverter from core.app.entities.task_entities import ( ErrorStreamResponse, @@ -11,6 +9,7 @@ from core.app.entities.task_entities import ( WorkflowAppBlockingResponse, WorkflowAppStreamResponse, ) +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus class TestWorkflowGenerateResponseConverter: diff --git a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py index 29df903aa8..1f6e7e12ef 100644 --- a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py @@ -2,15 +2,14 @@ import time from contextlib import contextmanager from unittest.mock import MagicMock -from graphon.entities import WorkflowStartReason -from graphon.runtime import GraphRuntimeState - from core.app.app_config.entities import WorkflowUIBasedAppConfig from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.apps.workflow.generate_task_pipeline import WorkflowAppGenerateTaskPipeline from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.app.entities.queue_entities import QueueWorkflowStartedEvent from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.runtime import GraphRuntimeState from models.account import Account from models.model import AppMode from tests.workflow_test_utils import build_test_variable_pool diff --git a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py index dabd2594b4..99433478d3 100644 --- a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py +++ b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py @@ -2,10 +2,9 @@ from __future__ import annotations from contextlib import contextmanager from types import SimpleNamespace +from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.app_config.entities import AppAdditionalFeatures, WorkflowUIBasedAppConfig from core.app.apps.workflow.generate_task_pipeline import WorkflowAppGenerateTaskPipeline @@ -46,6 +45,8 @@ from core.app.entities.task_entities import ( ) from core.base.tts.app_generator_tts_publisher import AudioTrunk from core.workflow.system_variables import build_system_variables, system_variables_to_mapping +from graphon.enums import BuiltinNodeTypes, WorkflowExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole from models.model import AppMode, EndUser @@ -610,33 +611,33 @@ class TestWorkflowGenerateTaskPipeline: def test_database_session_rolls_back_on_error(self, monkeypatch): pipeline = _make_pipeline() - calls = {"commit": 0, "rollback": 0} - - class _Session: - def __init__(self, *args, **kwargs): - _ = args, kwargs + calls = {"enter": 0, "exit_exc": None} + class _BeginContext: def __enter__(self): - return self + calls["enter"] += 1 + return MagicMock() def __exit__(self, exc_type, exc, tb): + calls["exit_exc"] = exc_type return False - def commit(self): - calls["commit"] += 1 + class _Sessionmaker: + def __init__(self, *args, **kwargs): + pass - def rollback(self): - calls["rollback"] += 1 + def begin(self): + return _BeginContext() - monkeypatch.setattr("core.app.apps.workflow.generate_task_pipeline.Session", _Session) + monkeypatch.setattr("core.app.apps.workflow.generate_task_pipeline.sessionmaker", _Sessionmaker) monkeypatch.setattr("core.app.apps.workflow.generate_task_pipeline.db", SimpleNamespace(engine=object())) with pytest.raises(RuntimeError, match="db error"): with pipeline._database_session(): raise RuntimeError("db error") - assert calls["commit"] == 0 - assert calls["rollback"] == 1 + assert calls["enter"] == 1 + assert calls["exit_exc"] is RuntimeError def test_node_retry_and_started_handlers_cover_none_and_value(self): pipeline = _make_pipeline() diff --git a/api/tests/unit_tests/core/app/entities/test_task_entities.py b/api/tests/unit_tests/core/app/entities/test_task_entities.py index 014a0cba72..7c79780641 100644 --- a/api/tests/unit_tests/core/app/entities/test_task_entities.py +++ b/api/tests/unit_tests/core/app/entities/test_task_entities.py @@ -1,11 +1,10 @@ -from graphon.enums import WorkflowNodeExecutionStatus - from core.app.entities.task_entities import ( NodeFinishStreamResponse, NodeRetryStreamResponse, NodeStartStreamResponse, StreamEvent, ) +from graphon.enums import WorkflowNodeExecutionStatus class TestTaskEntities: diff --git a/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py b/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py index a78c1b428f..ba55e8f695 100644 --- a/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py @@ -1,6 +1,9 @@ from collections.abc import Sequence from unittest.mock import Mock +from core.app.layers.conversation_variable_persist_layer import ConversationVariablePersistenceLayer +from core.workflow.system_variables import SystemVariableKey +from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.graph_engine.command_channels import CommandChannel from graphon.graph_events import NodeRunSucceededEvent, NodeRunVariableUpdatedEvent @@ -8,10 +11,6 @@ from graphon.node_events import NodeRunResult from graphon.runtime import ReadOnlyGraphRuntimeState from graphon.variables import StringVariable from graphon.variables.segments import Segment, StringSegment - -from core.app.layers.conversation_variable_persist_layer import ConversationVariablePersistenceLayer -from core.workflow.system_variables import SystemVariableKey -from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py b/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py index 035e64325b..539944d683 100644 --- a/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py @@ -4,6 +4,16 @@ from time import time from unittest.mock import Mock import pytest + +from core.app.app_config.entities import WorkflowUIBasedAppConfig +from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom, WorkflowAppGenerateEntity +from core.app.layers.pause_state_persist_layer import ( + PauseStatePersistenceLayer, + WorkflowResumptionContext, + _AdvancedChatAppGenerateEntityWrapper, + _WorkflowGenerateEntityWrapper, +) +from core.workflow.system_variables import SystemVariableKey from graphon.entities.pause_reason import SchedulingPause from graphon.graph_engine.entities.commands import GraphEngineCommand from graphon.graph_engine.layers.base import GraphEngineLayerNotInitializedError @@ -15,16 +25,6 @@ from graphon.graph_events import ( ) from graphon.runtime import ReadOnlyVariablePool from graphon.variables.segments import Segment - -from core.app.app_config.entities import WorkflowUIBasedAppConfig -from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom, WorkflowAppGenerateEntity -from core.app.layers.pause_state_persist_layer import ( - PauseStatePersistenceLayer, - WorkflowResumptionContext, - _AdvancedChatAppGenerateEntityWrapper, - _WorkflowGenerateEntityWrapper, -) -from core.workflow.system_variables import SystemVariableKey from models.model import AppMode from repositories.factory import DifyAPIRepositoryFactory diff --git a/api/tests/unit_tests/core/app/layers/test_suspend_layer.py b/api/tests/unit_tests/core/app/layers/test_suspend_layer.py index 95931f4f8b..12d49be0f1 100644 --- a/api/tests/unit_tests/core/app/layers/test_suspend_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_suspend_layer.py @@ -1,6 +1,5 @@ -from graphon.graph_events import GraphRunPausedEvent - from core.app.layers.suspend_layer import SuspendLayer +from graphon.graph_events import GraphRunPausedEvent class TestSuspendLayer: diff --git a/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py b/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py index 7cf6eb4f31..1ac9a4d8c0 100644 --- a/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py @@ -1,8 +1,7 @@ from unittest.mock import Mock, patch -from graphon.graph_engine.entities.commands import CommandType, GraphEngineCommand - from core.app.layers.timeslice_layer import TimeSliceLayer +from graphon.graph_engine.entities.commands import CommandType, GraphEngineCommand from services.workflow.entities import WorkflowScheduleCFSPlanEntity from services.workflow.scheduler import SchedulerCommand diff --git a/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py b/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py index aa9285789b..d3bd15b6f3 100644 --- a/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py @@ -2,11 +2,10 @@ from datetime import UTC, datetime, timedelta from types import SimpleNamespace from unittest.mock import Mock, patch -from graphon.graph_events import GraphRunFailedEvent, GraphRunSucceededEvent -from graphon.runtime import VariablePool - from core.app.layers.trigger_post_layer import TriggerPostLayer from core.workflow.system_variables import build_system_variables +from graphon.graph_events import GraphRunFailedEvent, GraphRunSucceededEvent +from graphon.runtime import VariablePool from models.enums import WorkflowTriggerStatus diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py b/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py index 58aa7d7478..c246f7b783 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py @@ -2,11 +2,11 @@ from types import SimpleNamespace from unittest.mock import Mock import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from core.app.entities.queue_entities import QueueErrorEvent from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline from core.errors.error import QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from models.enums import MessageStatus diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py index 4aaa10a81a..1c1bf391d3 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py @@ -2,8 +2,6 @@ from types import SimpleNamespace from unittest.mock import ANY, Mock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult as RuntimeLLMResult -from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.entities.app_invoke_entities import ChatAppGenerateEntity @@ -28,6 +26,8 @@ from core.app.entities.task_entities import ( from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBasedGenerateTaskPipeline from core.base.tts import AppGeneratorTTSPublisher from core.ops.ops_trace_manager import TraceQueueManager +from graphon.model_runtime.entities.llm_entities import LLMResult as RuntimeLLMResult +from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py index f22602a400..a20d89d807 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py @@ -5,9 +5,6 @@ from types import SimpleNamespace from unittest.mock import Mock import pytest -from graphon.file import FileTransferMethod -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, TextPromptMessageContent from core.app.app_config.entities import ( AppAdditionalFeatures, @@ -41,6 +38,9 @@ from core.app.entities.task_entities import ( ) from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBasedGenerateTaskPipeline from core.base.tts import AudioTrunk +from graphon.file import FileTransferMethod +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, TextPromptMessageContent from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py index 31b7313066..595d716666 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py @@ -17,11 +17,11 @@ import uuid from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.file import FileTransferMethod, FileType from sqlalchemy.orm import Session from core.app.entities.task_entities import MessageEndStreamResponse from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBasedGenerateTaskPipeline +from graphon.file import FileTransferMethod, FileType from models.model import MessageFile, UploadFile diff --git a/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py b/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py index 29df7eea86..21c761c579 100644 --- a/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py +++ b/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py @@ -1,10 +1,9 @@ from types import SimpleNamespace from unittest.mock import patch -from graphon.model_runtime.entities.model_entities import ModelPropertyKey - from core.app.app_config.easy_ui_based_app.model_config.manager import ModelConfigManager from core.app.app_config.entities import ModelConfigEntity +from graphon.model_runtime.entities.model_entities import ModelPropertyKey from models.provider_ids import ModelProviderID diff --git a/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py b/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py index dc2d82ccd6..5c50cb78da 100644 --- a/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py +++ b/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py @@ -2,14 +2,14 @@ from datetime import UTC, datetime from unittest.mock import Mock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus, WorkflowType -from graphon.node_events import NodeRunResult from core.app.workflow.layers.persistence import ( PersistenceWorkflowInfo, WorkflowPersistenceLayer, _NodeRuntimeSnapshot, ) +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus, WorkflowType +from graphon.node_events import NodeRunResult def _build_layer() -> WorkflowPersistenceLayer: diff --git a/api/tests/unit_tests/core/app/workflow/test_file_runtime.py b/api/tests/unit_tests/core/app/workflow/test_file_runtime.py index 7be9d6ac1e..cddd03f4b0 100644 --- a/api/tests/unit_tests/core/app/workflow/test_file_runtime.py +++ b/api/tests/unit_tests/core/app/workflow/test_file_runtime.py @@ -8,13 +8,13 @@ from unittest.mock import MagicMock, patch from urllib.parse import parse_qs, urlparse import pytest -from graphon.file import File, FileTransferMethod, FileType from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.app.file_access import DatabaseFileAccessController, FileAccessScope from core.app.workflow import file_runtime from core.app.workflow.file_runtime import DifyWorkflowFileRuntime, bind_dify_workflow_file_runtime from core.workflow.file_reference import build_file_reference +from graphon.file import File, FileTransferMethod, FileType from models import ToolFile, UploadFile diff --git a/api/tests/unit_tests/core/app/workflow/test_node_factory.py b/api/tests/unit_tests/core/app/workflow/test_node_factory.py index 8497261d45..c4bfb23272 100644 --- a/api/tests/unit_tests/core/app/workflow/test_node_factory.py +++ b/api/tests/unit_tests/core/app/workflow/test_node_factory.py @@ -1,10 +1,10 @@ from types import SimpleNamespace import pytest -from graphon.enums import BuiltinNodeTypes from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context from core.workflow.node_factory import DifyNodeFactory +from graphon.enums import BuiltinNodeTypes class DummyNode: diff --git a/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py b/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py index a47d3db6f5..82552470a9 100644 --- a/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py +++ b/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py @@ -2,9 +2,8 @@ from __future__ import annotations from types import SimpleNamespace -from graphon.enums import BuiltinNodeTypes - from core.app.workflow.layers.observability import ObservabilityLayer +from graphon.enums import BuiltinNodeTypes class TestObservabilityLayerExtras: diff --git a/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py b/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py index d8a68f6d00..cacb4dd4fa 100644 --- a/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py +++ b/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py @@ -4,6 +4,10 @@ from datetime import UTC, datetime from types import SimpleNamespace import pytest + +from core.app.entities.app_invoke_entities import WorkflowAppGenerateEntity +from core.app.workflow.layers.persistence import PersistenceWorkflowInfo, WorkflowPersistenceLayer +from core.workflow.system_variables import SystemVariableKey, build_system_variables from graphon.entities import WorkflowNodeExecution from graphon.entities.pause_reason import SchedulingPause from graphon.enums import ( @@ -29,10 +33,6 @@ from graphon.graph_events import ( from graphon.node_events import NodeRunResult from graphon.runtime import GraphRuntimeState, ReadOnlyGraphRuntimeStateWrapper, VariablePool -from core.app.entities.app_invoke_entities import WorkflowAppGenerateEntity -from core.app.workflow.layers.persistence import PersistenceWorkflowInfo, WorkflowPersistenceLayer -from core.workflow.system_variables import SystemVariableKey, build_system_variables - class _RepoRecorder: def __init__(self) -> None: diff --git a/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py b/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py index 5ff9774b52..7b433ab57b 100644 --- a/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py +++ b/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py @@ -301,6 +301,7 @@ class TestAppGeneratorTTSPublisher: publisher = AppGeneratorTTSPublisher("tenant", "voice1") publisher.executor = MagicMock() + from core.app.entities.queue_entities import QueueAgentMessageEvent from graphon.model_runtime.entities.llm_entities import LLMResultChunk, LLMResultChunkDelta from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, @@ -308,8 +309,6 @@ class TestAppGeneratorTTSPublisher: TextPromptMessageContent, ) - from core.app.entities.queue_entities import QueueAgentMessageEvent - chunk = LLMResultChunk( model="model", delta=LLMResultChunkDelta( @@ -337,11 +336,10 @@ class TestAppGeneratorTTSPublisher: publisher = AppGeneratorTTSPublisher("tenant", "voice1") publisher.executor = MagicMock() + from core.app.entities.queue_entities import QueueAgentMessageEvent from graphon.model_runtime.entities.llm_entities import LLMResultChunk, LLMResultChunkDelta from graphon.model_runtime.entities.message_entities import AssistantPromptMessage - from core.app.entities.queue_entities import QueueAgentMessageEvent - chunk = LLMResultChunk( model="model", delta=LLMResultChunkDelta( diff --git a/api/tests/unit_tests/core/datasource/test_datasource_manager.py b/api/tests/unit_tests/core/datasource/test_datasource_manager.py index d338cadb77..81315d2508 100644 --- a/api/tests/unit_tests/core/datasource/test_datasource_manager.py +++ b/api/tests/unit_tests/core/datasource/test_datasource_manager.py @@ -2,15 +2,15 @@ import types from collections.abc import Generator import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file import File, FileTransferMethod, FileType -from graphon.node_events import StreamChunkEvent, StreamCompletedEvent from contexts.wrapper import RecyclableContextVar from core.datasource.datasource_manager import DatasourceManager from core.datasource.entities.datasource_entities import DatasourceMessage, DatasourceProviderType from core.datasource.errors import DatasourceProviderNotFoundError from core.workflow.file_reference import parse_file_reference +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file import File, FileTransferMethod, FileType +from graphon.node_events import StreamChunkEvent, StreamCompletedEvent def _gen_messages_text_only(text: str) -> Generator[DatasourceMessage, None, None]: diff --git a/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py b/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py index fbaf6d497d..0fca43cd0b 100644 --- a/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py +++ b/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py @@ -1,10 +1,10 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.file import File, FileTransferMethod, FileType from core.datasource.entities.datasource_entities import DatasourceMessage from core.datasource.utils.message_transformer import DatasourceFileMessageTransformer +from graphon.file import File, FileTransferMethod, FileType from models.tools import ToolFile diff --git a/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py b/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py index ff9fd0d8f3..ef8f360dbf 100644 --- a/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py +++ b/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py @@ -1,12 +1,11 @@ -from graphon.nodes.human_input.entities import FormInput, UserAction -from graphon.nodes.human_input.enums import FormInputType - from core.entities.execution_extra_content import ( ExecutionExtraContentDomainModel, HumanInputContent, HumanInputFormDefinition, HumanInputFormSubmissionData, ) +from graphon.nodes.human_input.entities import FormInput, UserAction +from graphon.nodes.human_input.enums import FormInputType from models.execution_extra_content import ExecutionContentType diff --git a/api/tests/unit_tests/core/entities/test_entities_model_entities.py b/api/tests/unit_tests/core/entities/test_entities_model_entities.py index 2acd278a31..a0b2820157 100644 --- a/api/tests/unit_tests/core/entities/test_entities_model_entities.py +++ b/api/tests/unit_tests/core/entities/test_entities_model_entities.py @@ -8,9 +8,6 @@ drive provider mapping behavior. """ import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity from core.entities.model_entities import ( DefaultModelEntity, @@ -19,6 +16,9 @@ from core.entities.model_entities import ( ProviderModelWithStatusEntity, SimpleModelProviderEntity, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity def _build_model_with_status(status: ModelStatus) -> ProviderModelWithStatusEntity: diff --git a/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py b/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py index 8cf0409c4c..fe2c226843 100644 --- a/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py +++ b/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py @@ -6,17 +6,6 @@ from typing import Any from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FieldModelSchema, - FormType, - ModelCredentialSchema, - ProviderCredentialSchema, - ProviderEntity, -) from constants import HIDDEN_VALUE from core.entities.model_entities import ModelStatus @@ -35,6 +24,17 @@ from core.entities.provider_entities import ( SystemConfiguration, SystemConfigurationStatus, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FieldModelSchema, + FormType, + ModelCredentialSchema, + ProviderCredentialSchema, + ProviderEntity, +) from models.enums import CredentialSourceType from models.provider import ProviderType from models.provider_ids import ModelProviderID diff --git a/api/tests/unit_tests/core/entities/test_entities_provider_entities.py b/api/tests/unit_tests/core/entities/test_entities_provider_entities.py index 8685d16283..a159d3ad4d 100644 --- a/api/tests/unit_tests/core/entities/test_entities_provider_entities.py +++ b/api/tests/unit_tests/core/entities/test_entities_provider_entities.py @@ -1,5 +1,4 @@ import pytest -from graphon.model_runtime.entities.model_entities import ModelType from core.entities.parameter_entities import AppSelectorScope from core.entities.provider_entities import ( @@ -9,6 +8,7 @@ from core.entities.provider_entities import ( ProviderQuotaType, ) from core.tools.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType def test_provider_quota_type_value_of_returns_enum_member() -> None: diff --git a/api/tests/unit_tests/core/external_data_tool/test_base.py b/api/tests/unit_tests/core/external_data_tool/test_base.py index 216cda83c5..63e887f904 100644 --- a/api/tests/unit_tests/core/external_data_tool/test_base.py +++ b/api/tests/unit_tests/core/external_data_tool/test_base.py @@ -1,3 +1,5 @@ +from typing import Any + import pytest from core.extension.extensible import ExtensionModule @@ -12,10 +14,10 @@ class TestExternalDataTool: # Create a concrete subclass to test init class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): return super().validate_config(tenant_id, config) - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return super().query(inputs, query) tool = ConcreteTool(tenant_id="tenant_1", app_id="app_1", variable="var_1", config={"key": "value"}) @@ -28,10 +30,10 @@ class TestExternalDataTool: # Create a concrete subclass to test init class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): pass - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return "" tool = ConcreteTool(tenant_id="tenant_1", app_id="app_1", variable="var_1") @@ -43,10 +45,10 @@ class TestExternalDataTool: def test_validate_config_raises_not_implemented(self): class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): return super().validate_config(tenant_id, config) - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return "" with pytest.raises(NotImplementedError): @@ -55,10 +57,10 @@ class TestExternalDataTool: def test_query_raises_not_implemented(self): class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): pass - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return super().query(inputs, query) tool = ConcreteTool(tenant_id="tenant_1", app_id="app_1", variable="var_1") diff --git a/api/tests/unit_tests/core/helper/test_moderation.py b/api/tests/unit_tests/core/helper/test_moderation.py index 4a84099b74..a0dfa86d20 100644 --- a/api/tests/unit_tests/core/helper/test_moderation.py +++ b/api/tests/unit_tests/core/helper/test_moderation.py @@ -2,11 +2,11 @@ from types import SimpleNamespace from typing import cast import pytest -from graphon.model_runtime.errors.invoke import InvokeBadRequestError from pytest_mock import MockerFixture from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.helper.moderation import check_moderation +from graphon.model_runtime.errors.invoke import InvokeBadRequestError from models.provider import ProviderType diff --git a/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py b/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py index b45f6fd9a7..6ed9ddb476 100644 --- a/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py +++ b/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py @@ -2,20 +2,6 @@ import json from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.llm_entities import ( - LLMResult, - LLMResultChunk, - LLMResultChunkDelta, - LLMResultWithStructuredOutput, - LLMUsage, -) -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - SystemPromptMessage, - TextPromptMessageContent, - UserPromptMessage, -) -from graphon.model_runtime.entities.model_entities import AIModelEntity, ParameterRule, ParameterType from core.llm_generator.output_parser.errors import OutputParserError from core.llm_generator.output_parser.structured_output import ( @@ -30,6 +16,20 @@ from core.llm_generator.output_parser.structured_output import ( remove_additional_properties, ) from core.model_manager import ModelInstance +from graphon.model_runtime.entities.llm_entities import ( + LLMResult, + LLMResultChunk, + LLMResultChunkDelta, + LLMResultWithStructuredOutput, + LLMUsage, +) +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + SystemPromptMessage, + TextPromptMessageContent, + UserPromptMessage, +) +from graphon.model_runtime.entities.model_entities import AIModelEntity, ParameterRule, ParameterType class TestStructuredOutput: diff --git a/api/tests/unit_tests/core/llm_generator/test_llm_generator.py b/api/tests/unit_tests/core/llm_generator/test_llm_generator.py index 7cdfb31189..2716f4712c 100644 --- a/api/tests/unit_tests/core/llm_generator/test_llm_generator.py +++ b/api/tests/unit_tests/core/llm_generator/test_llm_generator.py @@ -2,12 +2,12 @@ import json from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from core.app.app_config.entities import ModelConfig from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload from core.llm_generator.llm_generator import LLMGenerator +from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError class TestLLMGenerator: diff --git a/api/tests/unit_tests/core/mcp/auth/test_auth_flow.py b/api/tests/unit_tests/core/mcp/auth/test_auth_flow.py index fe533e62af..1f5fdd2657 100644 --- a/api/tests/unit_tests/core/mcp/auth/test_auth_flow.py +++ b/api/tests/unit_tests/core/mcp/auth/test_auth_flow.py @@ -862,6 +862,15 @@ class TestAuthOrchestration: result = discover_protected_resource_metadata(None, "https://api.example.com") assert result is None + # JSONDecodeError (non-JSON 200 response) + mock_get.side_effect = None + bad_json_response = Mock() + bad_json_response.status_code = 200 + bad_json_response.json.side_effect = json.JSONDecodeError("Expecting value", "", 0) + mock_get.return_value = bad_json_response + result = discover_protected_resource_metadata(None, "https://api.example.com") + assert result is None + @patch("core.helper.ssrf_proxy.get") def test_discover_oauth_authorization_server_metadata(self, mock_get): # Success @@ -892,6 +901,14 @@ class TestAuthOrchestration: result = discover_oauth_authorization_server_metadata(None, "https://api.example.com") assert result is None + # JSONDecodeError (non-JSON 200 response) + bad_json_response = Mock() + bad_json_response.status_code = 200 + bad_json_response.json.side_effect = json.JSONDecodeError("Expecting value", "", 0) + mock_get.return_value = bad_json_response + result = discover_oauth_authorization_server_metadata(None, "https://api.example.com") + assert result is None + def test_get_effective_scope(self): prm = ProtectedResourceMetadata( resource="https://api.example.com", @@ -997,6 +1014,24 @@ class TestAuthOrchestration: supported, url = check_support_resource_discovery("https://api") assert supported is False + # Case 6: JSONDecodeError (non-JSON 200 response) + mock_get.side_effect = None + bad_json_res = Mock() + bad_json_res.status_code = 200 + bad_json_res.json.side_effect = json.JSONDecodeError("Expecting value", "", 0) + mock_get.return_value = bad_json_res + supported, url = check_support_resource_discovery("https://api") + assert supported is False + assert url == "" + + # Case 7: Empty authorization_servers array (IndexError) + empty_res = Mock() + empty_res.status_code = 200 + empty_res.json.return_value = {"authorization_servers": []} + mock_get.return_value = empty_res + supported, url = check_support_resource_discovery("https://api") + assert supported is False + def test_discover_oauth_metadata(self): with patch("core.mcp.auth.auth_flow.discover_protected_resource_metadata") as mock_prm: with patch("core.mcp.auth.auth_flow.discover_oauth_authorization_server_metadata") as mock_asm: diff --git a/api/tests/unit_tests/core/mcp/server/test_streamable_http.py b/api/tests/unit_tests/core/mcp/server/test_streamable_http.py index 9a815fb94d..57456085c3 100644 --- a/api/tests/unit_tests/core/mcp/server/test_streamable_http.py +++ b/api/tests/unit_tests/core/mcp/server/test_streamable_http.py @@ -3,7 +3,6 @@ from unittest.mock import Mock, patch import jsonschema import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.app.features.rate_limiting.rate_limit import RateLimitGenerator from core.mcp import types @@ -19,6 +18,7 @@ from core.mcp.server.streamable_http import ( prepare_tool_arguments, process_mapping_response, ) +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.model import App, AppMCPServer, AppMode, EndUser diff --git a/api/tests/unit_tests/core/mcp/test_entities.py b/api/tests/unit_tests/core/mcp/test_entities.py index 3fede55916..e99c38285c 100644 --- a/api/tests/unit_tests/core/mcp/test_entities.py +++ b/api/tests/unit_tests/core/mcp/test_entities.py @@ -4,9 +4,7 @@ from unittest.mock import Mock from core.mcp.entities import ( SUPPORTED_PROTOCOL_VERSIONS, - LifespanContextT, RequestContext, - SessionT, ) from core.mcp.session.base_session import BaseSession from core.mcp.types import LATEST_PROTOCOL_VERSION, RequestParams @@ -198,42 +196,3 @@ class TestRequestContext: assert "RequestContext" in repr_str assert "test-123" in repr_str assert "MockSession" in repr_str - - -class TestTypeVariables: - """Test type variables defined in the module.""" - - def test_session_type_var(self): - """Test SessionT type variable.""" - - # Create a custom session class - class CustomSession(BaseSession): - pass - - # Use in generic context - def process_session(session: SessionT) -> SessionT: - return session - - mock_session = Mock(spec=CustomSession) - result = process_session(mock_session) - assert result == mock_session - - def test_lifespan_context_type_var(self): - """Test LifespanContextT type variable.""" - - # Use in generic context - def process_lifespan(context: LifespanContextT) -> LifespanContextT: - return context - - # Test with different types - str_context = "string-context" - assert process_lifespan(str_context) == str_context - - dict_context = {"key": "value"} - assert process_lifespan(dict_context) == dict_context - - class CustomContext: - pass - - custom_context = CustomContext() - assert process_lifespan(custom_context) == custom_context diff --git a/api/tests/unit_tests/core/memory/test_token_buffer_memory.py b/api/tests/unit_tests/core/memory/test_token_buffer_memory.py index 9a5fb319d7..f459250b8e 100644 --- a/api/tests/unit_tests/core/memory/test_token_buffer_memory.py +++ b/api/tests/unit_tests/core/memory/test_token_buffer_memory.py @@ -4,6 +4,8 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest + +from core.memory.token_buffer_memory import TokenBufferMemory from graphon.model_runtime.entities import ( AssistantPromptMessage, ImagePromptMessageContent, @@ -11,8 +13,6 @@ from graphon.model_runtime.entities import ( TextPromptMessageContent, UserPromptMessage, ) - -from core.memory.token_buffer_memory import TokenBufferMemory from models.model import AppMode # --------------------------------------------------------------------------- diff --git a/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py b/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py index 6a672fdfd5..249ecb5006 100644 --- a/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py +++ b/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py @@ -1,6 +1,7 @@ from unittest.mock import Mock import pytest + from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType from graphon.model_runtime.entities.provider_entities import ( diff --git a/api/tests/unit_tests/core/moderation/test_content_moderation.py b/api/tests/unit_tests/core/moderation/test_content_moderation.py index 3a97ad5c5d..4c668ee96b 100644 --- a/api/tests/unit_tests/core/moderation/test_content_moderation.py +++ b/api/tests/unit_tests/core/moderation/test_content_moderation.py @@ -10,6 +10,7 @@ This module tests all aspects of the content moderation system including: - Configuration validation """ +from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest @@ -28,7 +29,7 @@ class TestKeywordsModeration: """Test suite for custom keyword-based content moderation.""" @pytest.fixture - def keywords_config(self) -> dict: + def keywords_config(self) -> dict[str, Any]: """ Fixture providing a standard keywords moderation configuration. @@ -48,7 +49,7 @@ class TestKeywordsModeration: } @pytest.fixture - def keywords_moderation(self, keywords_config: dict) -> KeywordsModeration: + def keywords_moderation(self, keywords_config: dict[str, Any]) -> KeywordsModeration: """ Fixture providing a KeywordsModeration instance. @@ -64,7 +65,7 @@ class TestKeywordsModeration: config=keywords_config, ) - def test_validate_config_success(self, keywords_config: dict): + def test_validate_config_success(self, keywords_config: dict[str, Any]): """Test successful validation of keywords moderation configuration.""" # Should not raise any exception KeywordsModeration.validate_config("test-tenant", keywords_config) @@ -274,7 +275,7 @@ class TestOpenAIModeration: """Test suite for OpenAI-based content moderation.""" @pytest.fixture - def openai_config(self) -> dict: + def openai_config(self) -> dict[str, Any]: """ Fixture providing OpenAI moderation configuration. @@ -293,7 +294,7 @@ class TestOpenAIModeration: } @pytest.fixture - def openai_moderation(self, openai_config: dict) -> OpenAIModeration: + def openai_moderation(self, openai_config: dict[str, Any]) -> OpenAIModeration: """ Fixture providing an OpenAIModeration instance. @@ -309,7 +310,7 @@ class TestOpenAIModeration: config=openai_config, ) - def test_validate_config_success(self, openai_config: dict): + def test_validate_config_success(self, openai_config: dict[str, Any]): """Test successful validation of OpenAI moderation configuration.""" # Should not raise any exception OpenAIModeration.validate_config("test-tenant", openai_config) diff --git a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py index 62d631a754..c2324fdec4 100644 --- a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py +++ b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py @@ -5,8 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from opentelemetry.trace import Link, SpanContext, SpanKind, Status, StatusCode, TraceFlags import core.ops.aliyun_trace.aliyun_trace as aliyun_trace_module @@ -36,6 +34,8 @@ from core.ops.entities.trace_entity import ( ToolTraceInfo, WorkflowTraceInfo, ) +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey class RecordingTraceClient: diff --git a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py index 2d2be12f05..e4d8f2d5ea 100644 --- a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py +++ b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py @@ -1,8 +1,6 @@ import json from unittest.mock import MagicMock -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionStatus from opentelemetry.trace import Link, StatusCode from core.ops.aliyun_trace.entities.semconv import ( @@ -26,6 +24,8 @@ from core.ops.aliyun_trace.utils import ( serialize_json_data, ) from core.rag.models.document import Document +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionStatus from models import EndUser diff --git a/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py b/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py index 374371fb42..a0bcc92795 100644 --- a/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py +++ b/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py @@ -5,7 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes from core.ops.entities.config_entity import LangfuseConfig from core.ops.entities.trace_entity import ( @@ -26,6 +25,7 @@ from core.ops.langfuse_trace.entities.langfuse_trace_entity import ( UnitEnum, ) from core.ops.langfuse_trace.langfuse_trace import LangFuseDataTrace +from graphon.enums import BuiltinNodeTypes from models import EndUser from models.enums import MessageStatus diff --git a/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py b/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py index bfe916f018..34c64c54a1 100644 --- a/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py +++ b/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py @@ -3,7 +3,6 @@ from datetime import datetime, timedelta from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from core.ops.entities.config_entity import LangSmithConfig from core.ops.entities.trace_entity import ( @@ -22,6 +21,7 @@ from core.ops.langsmith_trace.entities.langsmith_trace_entity import ( LangSmithRunUpdateModel, ) from core.ops.langsmith_trace.langsmith_trace import LangSmithDataTrace +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser diff --git a/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py b/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py index f4c485a9fc..afc5726ede 100644 --- a/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py +++ b/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py @@ -9,7 +9,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes from core.ops.entities.config_entity import DatabricksConfig, MLflowConfig from core.ops.entities.trace_entity import ( @@ -22,6 +21,7 @@ from core.ops.entities.trace_entity import ( WorkflowTraceInfo, ) from core.ops.mlflow_trace.mlflow_trace import MLflowDataTrace, datetime_to_nanoseconds +from graphon.enums import BuiltinNodeTypes # ── Helpers ────────────────────────────────────────────────────────────────── diff --git a/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py b/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py index 1cb32f2ee0..c02ac413f2 100644 --- a/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py +++ b/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py @@ -5,7 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from core.ops.entities.config_entity import OpikConfig from core.ops.entities.trace_entity import ( @@ -19,6 +18,7 @@ from core.ops.entities.trace_entity import ( WorkflowTraceInfo, ) from core.ops.opik_trace.opik_trace import OpikDataTrace, prepare_opik_uuid, wrap_dict, wrap_metadata +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser from models.enums import MessageStatus diff --git a/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py b/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py index 696f859b6f..6113e5c6c8 100644 --- a/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py +++ b/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py @@ -1,8 +1,6 @@ from datetime import datetime from unittest.mock import MagicMock, patch -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from opentelemetry.trace import StatusCode from core.ops.entities.trace_entity import ( @@ -27,6 +25,8 @@ from core.ops.tencent_trace.entities.semconv import ( ) from core.ops.tencent_trace.span_builder import TencentSpanBuilder from core.rag.models.document import Document +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus class TestTencentSpanBuilder: diff --git a/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py b/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py index f67abba807..7afd0b824a 100644 --- a/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py +++ b/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py @@ -2,8 +2,6 @@ import logging from unittest.mock import MagicMock, patch import pytest -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes from core.ops.entities.config_entity import TencentConfig from core.ops.entities.trace_entity import ( @@ -16,6 +14,8 @@ from core.ops.entities.trace_entity import ( WorkflowTraceInfo, ) from core.ops.tencent_trace.tencent_trace import TencentDataTrace +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes from models import Account, App, TenantAccountJoin logger = logging.getLogger(__name__) diff --git a/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py b/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py index 6b5cb5b09a..4b925390d9 100644 --- a/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py +++ b/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py @@ -1,7 +1,7 @@ -from graphon.enums import BUILT_IN_NODE_TYPES, BuiltinNodeTypes from openinference.semconv.trace import OpenInferenceSpanKindValues from core.ops.arize_phoenix_trace.arize_phoenix_trace import _NODE_TYPE_TO_SPAN_KIND, _get_node_span_kind +from graphon.enums import BUILT_IN_NODE_TYPES, BuiltinNodeTypes class TestGetNodeSpanKind: diff --git a/api/tests/unit_tests/core/ops/test_langfuse_trace.py b/api/tests/unit_tests/core/ops/test_langfuse_trace.py new file mode 100644 index 0000000000..017ac8c891 --- /dev/null +++ b/api/tests/unit_tests/core/ops/test_langfuse_trace.py @@ -0,0 +1,136 @@ +"""Tests for Langfuse TTFT reporting support.""" + +from datetime import datetime, timedelta +from types import SimpleNamespace +from unittest.mock import MagicMock, patch + +from core.ops.entities.config_entity import LangfuseConfig +from core.ops.entities.trace_entity import MessageTraceInfo, WorkflowTraceInfo +from core.ops.langfuse_trace.langfuse_trace import LangFuseDataTrace +from graphon.enums import BuiltinNodeTypes + + +def _create_trace_instance() -> LangFuseDataTrace: + with patch("core.ops.langfuse_trace.langfuse_trace.Langfuse", autospec=True): + return LangFuseDataTrace( + LangfuseConfig( + public_key="public-key", + secret_key="secret-key", + host="https://cloud.langfuse.com", + ) + ) + + +class TestLangFuseDataTraceCompletionStartTime: + def test_message_trace_reports_completion_start_time(self): + trace = _create_trace_instance() + start_time = datetime(2026, 3, 11, 13, 0, 0) + trace_info = MessageTraceInfo( + trace_id="trace-123", + message_id="message-123", + message_data=SimpleNamespace( + id="message-123", + from_account_id="account-1", + from_end_user_id=None, + conversation_id="conversation-1", + model_id="gpt-4o-mini", + answer="hi there", + status="normal", + error="", + total_price=0.12, + provider_response_latency=3.5, + ), + conversation_model="chat", + message_tokens=10, + answer_tokens=20, + total_tokens=30, + error="", + inputs="hello", + outputs="hi there", + file_list=[], + start_time=start_time, + end_time=start_time + timedelta(seconds=3.5), + metadata={}, + message_file_data=None, + conversation_mode="chat", + gen_ai_server_time_to_first_token=1.2, + llm_streaming_time_to_generate=2.3, + is_streaming_request=True, + ) + + with patch.object(trace, "add_trace"), patch.object(trace, "add_generation") as add_generation: + trace.message_trace(trace_info) + + generation = add_generation.call_args.args[0] + assert generation.completion_start_time == start_time + timedelta(seconds=1.2) + + def test_workflow_trace_reports_completion_start_time_from_llm_usage(self): + trace = _create_trace_instance() + start_time = datetime(2026, 3, 11, 13, 0, 0) + node_execution = SimpleNamespace( + id="node-exec-1", + title="Chat LLM", + node_type=BuiltinNodeTypes.LLM, + status="succeeded", + process_data={ + "model_mode": "chat", + "model_name": "gpt-4o-mini", + "usage": { + "prompt_tokens": 10, + "completion_tokens": 20, + "time_to_first_token": 1.2, + }, + }, + inputs={"question": "hello"}, + outputs={"text": "hi there"}, + created_at=start_time, + elapsed_time=3.5, + metadata={}, + ) + trace_info = WorkflowTraceInfo( + trace_id="trace-123", + workflow_data={}, + conversation_id=None, + workflow_app_log_id=None, + workflow_id="workflow-1", + tenant_id="tenant-1", + workflow_run_id="workflow-run-1", + workflow_run_elapsed_time=3.5, + workflow_run_status="succeeded", + workflow_run_inputs={"question": "hello"}, + workflow_run_outputs={"answer": "hi there"}, + workflow_run_version="1", + error="", + total_tokens=30, + file_list=[], + query="hello", + metadata={"app_id": "app-1", "user_id": "user-1"}, + start_time=start_time, + end_time=start_time + timedelta(seconds=3.5), + ) + repository = MagicMock() + repository.get_by_workflow_execution.return_value = [node_execution] + + with ( + patch.object(trace, "add_trace"), + patch.object(trace, "add_span"), + patch.object(trace, "add_generation") as add_generation, + patch.object(trace, "get_service_account_with_tenant", return_value=MagicMock()), + patch("core.ops.langfuse_trace.langfuse_trace.db", MagicMock()), + patch( + "core.ops.langfuse_trace.langfuse_trace.DifyCoreRepositoryFactory.create_workflow_node_execution_repository", + return_value=repository, + ), + ): + trace.workflow_trace(trace_info) + + generation = add_generation.call_args.kwargs["langfuse_generation_data"] + assert generation.completion_start_time == start_time + timedelta(seconds=1.2) + + def test_ignores_invalid_ttft_values(self): + trace = _create_trace_instance() + start_time = datetime(2026, 3, 11, 13, 0, 0) + + assert trace._get_completion_start_time(start_time, None) is None + assert trace._get_completion_start_time(start_time, -1) is None + assert trace._get_completion_start_time(start_time, "invalid") is None diff --git a/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py b/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py index 5014f40afc..531c7de05f 100644 --- a/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py +++ b/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py @@ -7,7 +7,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from weave.trace_server.trace_server_interface import TraceStatus from core.ops.entities.config_entity import WeaveConfig @@ -23,6 +22,7 @@ from core.ops.entities.trace_entity import ( ) from core.ops.weave_trace.entities.weave_trace_entity import WeaveTraceModel from core.ops.weave_trace.weave_trace import WeaveDataTrace +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey # ── Helpers ────────────────────────────────────────────────────────────────── diff --git a/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py b/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py index 543b278715..c24d3ac012 100644 --- a/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py +++ b/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py @@ -1,10 +1,9 @@ from types import SimpleNamespace from unittest.mock import patch -from graphon.model_runtime.entities.message_entities import UserPromptMessage - from core.plugin.backwards_invocation.model import PluginModelBackwardsInvocation from core.plugin.entities.request import RequestInvokeSummary +from graphon.model_runtime.entities.message_entities import UserPromptMessage def test_system_model_helpers_forward_user_id(): diff --git a/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py b/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py index f8d0e127b1..68aa130518 100644 --- a/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py +++ b/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py @@ -6,15 +6,15 @@ from types import SimpleNamespace from unittest.mock import Mock, sentinel import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity from core.plugin.entities.plugin_daemon import PluginModelProviderEntity from core.plugin.impl import model_runtime as model_runtime_module from core.plugin.impl.model import PluginModelClient from core.plugin.impl.model_runtime import TENANT_SCOPE_SCHEMA_CACHE_USER_ID, PluginModelRuntime from core.plugin.impl.model_runtime_factory import create_plugin_model_runtime +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity def _build_model_schema() -> AIModelEntity: diff --git a/api/tests/unit_tests/core/plugin/test_plugin_entities.py b/api/tests/unit_tests/core/plugin/test_plugin_entities.py index a812b01c5b..f1c4c7e700 100644 --- a/api/tests/unit_tests/core/plugin/test_plugin_entities.py +++ b/api/tests/unit_tests/core/plugin/test_plugin_entities.py @@ -4,12 +4,6 @@ from enum import StrEnum import pytest from flask import Response -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - SystemPromptMessage, - ToolPromptMessage, - UserPromptMessage, -) from pydantic import ValidationError from core.plugin.entities.endpoint import EndpointEntityWithInstance @@ -31,6 +25,12 @@ from core.plugin.entities.request import ( ) from core.plugin.utils.http_parser import serialize_response from core.tools.entities.common_entities import I18nObject +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + SystemPromptMessage, + ToolPromptMessage, + UserPromptMessage, +) class TestEndpointEntity: diff --git a/api/tests/unit_tests/core/plugin/test_plugin_runtime.py b/api/tests/unit_tests/core/plugin/test_plugin_runtime.py index a3b1e5f6b0..704b82adc0 100644 --- a/api/tests/unit_tests/core/plugin/test_plugin_runtime.py +++ b/api/tests/unit_tests/core/plugin/test_plugin_runtime.py @@ -17,14 +17,6 @@ from unittest.mock import MagicMock, patch import httpx import pytest -from graphon.model_runtime.errors.invoke import ( - InvokeAuthorizationError, - InvokeBadRequestError, - InvokeConnectionError, - InvokeRateLimitError, - InvokeServerUnavailableError, -) -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic import BaseModel from core.plugin.entities.plugin_daemon import ( @@ -45,6 +37,14 @@ from core.plugin.impl.exc import ( ) from core.plugin.impl.plugin import PluginInstaller from core.plugin.impl.tool import PluginToolManager +from graphon.model_runtime.errors.invoke import ( + InvokeAuthorizationError, + InvokeBadRequestError, + InvokeConnectionError, + InvokeRateLimitError, + InvokeServerUnavailableError, +) +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError @pytest.fixture(autouse=True) diff --git a/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py b/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py index 90730dff5a..d49b6e4b71 100644 --- a/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py +++ b/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py @@ -1,12 +1,12 @@ from collections.abc import Generator import pytest -from graphon.file import File, FileTransferMethod, FileType from core.agent.entities import AgentInvokeMessage from core.plugin.utils.chunk_merger import FileChunk, merge_blob_chunks from core.plugin.utils.converter import convert_parameters_to_plugin_format from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter, ToolSelector +from graphon.file import File, FileTransferMethod, FileType class TestChunkMerger: diff --git a/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py index 2b280dd674..395d392127 100644 --- a/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py @@ -2,6 +2,13 @@ from typing import cast from unittest.mock import MagicMock, patch import pytest + +from configs import dify_config +from core.app.app_config.entities import ModelConfigEntity +from core.memory.token_buffer_memory import TokenBufferMemory +from core.prompt.advanced_prompt_transform import AdvancedPromptTransform +from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig +from core.prompt.utils.prompt_template_parser import PromptTemplateParser from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, @@ -11,13 +18,6 @@ from graphon.model_runtime.entities.message_entities import ( TextPromptMessageContent, UserPromptMessage, ) - -from configs import dify_config -from core.app.app_config.entities import ModelConfigEntity -from core.memory.token_buffer_memory import TokenBufferMemory -from core.prompt.advanced_prompt_transform import AdvancedPromptTransform -from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig -from core.prompt.utils.prompt_template_parser import PromptTemplateParser from models.model import Conversation diff --git a/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py index 4a54649b28..803afa54d7 100644 --- a/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py @@ -1,19 +1,18 @@ from unittest.mock import MagicMock -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - SystemPromptMessage, - ToolPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel - from core.app.entities.app_invoke_entities import ( ModelConfigWithCredentialsEntity, ) from core.entities.provider_configuration import ProviderModelBundle from core.memory.token_buffer_memory import TokenBufferMemory from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + SystemPromptMessage, + ToolPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from models.model import Conversation diff --git a/api/tests/unit_tests/core/prompt/test_prompt_message.py b/api/tests/unit_tests/core/prompt/test_prompt_message.py index a4b3960b0a..5d865d934c 100644 --- a/api/tests/unit_tests/core/prompt/test_prompt_message.py +++ b/api/tests/unit_tests/core/prompt/test_prompt_message.py @@ -1,3 +1,5 @@ +from core.prompt.simple_prompt_transform import ModelMode +from core.prompt.utils.prompt_message_util import PromptMessageUtil from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, AudioPromptMessageContent, @@ -7,9 +9,6 @@ from graphon.model_runtime.entities.message_entities import ( UserPromptMessage, ) -from core.prompt.simple_prompt_transform import ModelMode -from core.prompt.utils.prompt_message_util import PromptMessageUtil - def test_build_prompt_message_with_prompt_message_contents(): prompt = UserPromptMessage(content=[TextPromptMessageContent(data="Hello, World!")]) diff --git a/api/tests/unit_tests/core/prompt/test_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_prompt_transform.py index e35ce2c48a..9f9ea33695 100644 --- a/api/tests/unit_tests/core/prompt/test_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_prompt_transform.py @@ -2,9 +2,9 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelPropertyKey from core.prompt.prompt_transform import PromptTransform +from graphon.model_runtime.entities.model_entities import ModelPropertyKey # from core.app.app_config.entities import ModelConfigEntity # from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle diff --git a/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py index 3f188cfbb4..0dc74b33df 100644 --- a/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py @@ -2,12 +2,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - ImagePromptMessageContent, - TextPromptMessageContent, - UserPromptMessage, -) from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.memory.token_buffer_memory import TokenBufferMemory @@ -24,6 +18,12 @@ from core.prompt.prompt_templates.advanced_prompt_templates import ( CONTEXT, ) from core.prompt.simple_prompt_transform import SimplePromptTransform +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + ImagePromptMessageContent, + TextPromptMessageContent, + UserPromptMessage, +) from models.model import AppMode, Conversation diff --git a/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py b/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py index 006b4e7345..1f3247590c 100644 --- a/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py +++ b/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py @@ -1,13 +1,12 @@ from unittest.mock import MagicMock, patch -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError - from core.rag.data_post_processor.data_post_processor import DataPostProcessor from core.rag.data_post_processor.reorder import ReorderRunner from core.rag.index_processor.constant.query_type import QueryType from core.rag.models.document import Document from core.rag.rerank.rerank_type import RerankMode +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError def _doc(content: str) -> Document: diff --git a/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py b/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py index bbdd476914..136ac0c72a 100644 --- a/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py +++ b/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py @@ -1,5 +1,6 @@ import json from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock import pytest @@ -57,7 +58,7 @@ class _FakeSelect: return self -def _dataset_keyword_table(data_source_type: str = "database", keyword_table_dict: dict | None = None): +def _dataset_keyword_table(data_source_type: str = "database", keyword_table_dict: dict[str, Any] | None = None): return SimpleNamespace( data_source_type=data_source_type, keyword_table_dict=keyword_table_dict, diff --git a/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py b/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py index 8b104597a8..0baf85c314 100644 --- a/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py +++ b/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py @@ -1,4 +1,5 @@ from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, call, patch from uuid import uuid4 @@ -20,7 +21,7 @@ def create_mock_document( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/pgvector/__init__.py b/api/tests/unit_tests/core/rag/datasource/vdb/pgvector/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/test_vector_factory.py b/api/tests/unit_tests/core/rag/datasource/vdb/test_vector_factory.py index 4e9ceddda9..dc21d378a2 100644 --- a/api/tests/unit_tests/core/rag/datasource/vdb/test_vector_factory.py +++ b/api/tests/unit_tests/core/rag/datasource/vdb/test_vector_factory.py @@ -21,6 +21,9 @@ def _register_fake_factory_module(monkeypatch, module_path: str, class_name: str def vector_factory_module(): import importlib + from core.rag.datasource.vdb import vector_backend_registry as reg + + reg.clear_vector_factory_cache() import core.rag.datasource.vdb.vector_factory as module return importlib.reload(module) @@ -41,61 +44,62 @@ def test_gen_index_struct_dict(vector_factory_module): @pytest.mark.parametrize( ("vector_type", "module_path", "class_name"), [ - ("CHROMA", "core.rag.datasource.vdb.chroma.chroma_vector", "ChromaVectorFactory"), - ("MILVUS", "core.rag.datasource.vdb.milvus.milvus_vector", "MilvusVectorFactory"), + ("CHROMA", "dify_vdb_chroma.chroma_vector", "ChromaVectorFactory"), + ("MILVUS", "dify_vdb_milvus.milvus_vector", "MilvusVectorFactory"), ( "ALIBABACLOUD_MYSQL", - "core.rag.datasource.vdb.alibabacloud_mysql.alibabacloud_mysql_vector", + "dify_vdb_alibabacloud_mysql.alibabacloud_mysql_vector", "AlibabaCloudMySQLVectorFactory", ), - ("MYSCALE", "core.rag.datasource.vdb.myscale.myscale_vector", "MyScaleVectorFactory"), - ("PGVECTOR", "core.rag.datasource.vdb.pgvector.pgvector", "PGVectorFactory"), - ("VASTBASE", "core.rag.datasource.vdb.pyvastbase.vastbase_vector", "VastbaseVectorFactory"), - ("PGVECTO_RS", "core.rag.datasource.vdb.pgvecto_rs.pgvecto_rs", "PGVectoRSFactory"), - ("QDRANT", "core.rag.datasource.vdb.qdrant.qdrant_vector", "QdrantVectorFactory"), - ("RELYT", "core.rag.datasource.vdb.relyt.relyt_vector", "RelytVectorFactory"), + ("MYSCALE", "dify_vdb_myscale.myscale_vector", "MyScaleVectorFactory"), + ("PGVECTOR", "dify_vdb_pgvector.pgvector", "PGVectorFactory"), + ("VASTBASE", "dify_vdb_vastbase.vastbase_vector", "VastbaseVectorFactory"), + ("PGVECTO_RS", "dify_vdb_pgvecto_rs.pgvecto_rs", "PGVectoRSFactory"), + ("QDRANT", "dify_vdb_qdrant.qdrant_vector", "QdrantVectorFactory"), + ("RELYT", "dify_vdb_relyt.relyt_vector", "RelytVectorFactory"), ( "ELASTICSEARCH", - "core.rag.datasource.vdb.elasticsearch.elasticsearch_vector", + "dify_vdb_elasticsearch.elasticsearch_vector", "ElasticSearchVectorFactory", ), ( "ELASTICSEARCH_JA", - "core.rag.datasource.vdb.elasticsearch.elasticsearch_ja_vector", + "dify_vdb_elasticsearch.elasticsearch_ja_vector", "ElasticSearchJaVectorFactory", ), - ("TIDB_VECTOR", "core.rag.datasource.vdb.tidb_vector.tidb_vector", "TiDBVectorFactory"), - ("WEAVIATE", "core.rag.datasource.vdb.weaviate.weaviate_vector", "WeaviateVectorFactory"), - ("TENCENT", "core.rag.datasource.vdb.tencent.tencent_vector", "TencentVectorFactory"), - ("ORACLE", "core.rag.datasource.vdb.oracle.oraclevector", "OracleVectorFactory"), + ("TIDB_VECTOR", "dify_vdb_tidb_vector.tidb_vector", "TiDBVectorFactory"), + ("WEAVIATE", "dify_vdb_weaviate.weaviate_vector", "WeaviateVectorFactory"), + ("TENCENT", "dify_vdb_tencent.tencent_vector", "TencentVectorFactory"), + ("ORACLE", "dify_vdb_oracle.oraclevector", "OracleVectorFactory"), ( "OPENSEARCH", - "core.rag.datasource.vdb.opensearch.opensearch_vector", + "dify_vdb_opensearch.opensearch_vector", "OpenSearchVectorFactory", ), - ("ANALYTICDB", "core.rag.datasource.vdb.analyticdb.analyticdb_vector", "AnalyticdbVectorFactory"), - ("COUCHBASE", "core.rag.datasource.vdb.couchbase.couchbase_vector", "CouchbaseVectorFactory"), - ("BAIDU", "core.rag.datasource.vdb.baidu.baidu_vector", "BaiduVectorFactory"), - ("VIKINGDB", "core.rag.datasource.vdb.vikingdb.vikingdb_vector", "VikingDBVectorFactory"), - ("UPSTASH", "core.rag.datasource.vdb.upstash.upstash_vector", "UpstashVectorFactory"), + ("ANALYTICDB", "dify_vdb_analyticdb.analyticdb_vector", "AnalyticdbVectorFactory"), + ("COUCHBASE", "dify_vdb_couchbase.couchbase_vector", "CouchbaseVectorFactory"), + ("BAIDU", "dify_vdb_baidu.baidu_vector", "BaiduVectorFactory"), + ("VIKINGDB", "dify_vdb_vikingdb.vikingdb_vector", "VikingDBVectorFactory"), + ("UPSTASH", "dify_vdb_upstash.upstash_vector", "UpstashVectorFactory"), ( "TIDB_ON_QDRANT", - "core.rag.datasource.vdb.tidb_on_qdrant.tidb_on_qdrant_vector", + "dify_vdb_tidb_on_qdrant.tidb_on_qdrant_vector", "TidbOnQdrantVectorFactory", ), - ("LINDORM", "core.rag.datasource.vdb.lindorm.lindorm_vector", "LindormVectorStoreFactory"), - ("OCEANBASE", "core.rag.datasource.vdb.oceanbase.oceanbase_vector", "OceanBaseVectorFactory"), - ("SEEKDB", "core.rag.datasource.vdb.oceanbase.oceanbase_vector", "OceanBaseVectorFactory"), - ("OPENGAUSS", "core.rag.datasource.vdb.opengauss.opengauss", "OpenGaussFactory"), - ("TABLESTORE", "core.rag.datasource.vdb.tablestore.tablestore_vector", "TableStoreVectorFactory"), + ("LINDORM", "dify_vdb_lindorm.lindorm_vector", "LindormVectorStoreFactory"), + ("OCEANBASE", "dify_vdb_oceanbase.oceanbase_vector", "OceanBaseVectorFactory"), + ("SEEKDB", "dify_vdb_oceanbase.oceanbase_vector", "OceanBaseVectorFactory"), + ("OPENGAUSS", "dify_vdb_opengauss.opengauss", "OpenGaussFactory"), + ("TABLESTORE", "dify_vdb_tablestore.tablestore_vector", "TableStoreVectorFactory"), ( "HUAWEI_CLOUD", - "core.rag.datasource.vdb.huawei.huawei_cloud_vector", + "dify_vdb_huawei_cloud.huawei_cloud_vector", "HuaweiCloudVectorFactory", ), - ("MATRIXONE", "core.rag.datasource.vdb.matrixone.matrixone_vector", "MatrixoneVectorFactory"), - ("CLICKZETTA", "core.rag.datasource.vdb.clickzetta.clickzetta_vector", "ClickzettaVectorFactory"), - ("IRIS", "core.rag.datasource.vdb.iris.iris_vector", "IrisVectorFactory"), + ("MATRIXONE", "dify_vdb_matrixone.matrixone_vector", "MatrixoneVectorFactory"), + ("CLICKZETTA", "dify_vdb_clickzetta.clickzetta_vector", "ClickzettaVectorFactory"), + ("IRIS", "dify_vdb_iris.iris_vector", "IrisVectorFactory"), + ("HOLOGRES", "dify_vdb_hologres.hologres_vector", "HologresVectorFactory"), ], ) def test_get_vector_factory_supported(vector_factory_module, monkeypatch, vector_type, module_path, class_name): @@ -111,6 +115,34 @@ def test_get_vector_factory_unsupported(vector_factory_module): vector_factory_module.Vector.get_vector_factory("unknown") +class _PluginChromaFactory: + """Stub used only for entry-point override test.""" + + +def test_get_vector_factory_entry_point_overrides_builtin(vector_factory_module, monkeypatch): + from importlib.metadata import EntryPoint + + from core.rag.datasource.vdb import vector_backend_registry as reg + + reg.clear_vector_factory_cache() + ep = EntryPoint( + name="chroma", + value=f"{__name__}:_PluginChromaFactory", + group="dify.vector_backends", + ) + + class _FakeGroups: + def select(self, *, group: str): + if group == "dify.vector_backends": + return (ep,) + return () + + monkeypatch.setattr(reg, "entry_points", lambda: _FakeGroups()) + + result_cls = vector_factory_module.Vector.get_vector_factory(vector_factory_module.VectorType.CHROMA) + assert result_cls is _PluginChromaFactory + + def test_vector_init_uses_default_and_custom_attributes(vector_factory_module): dataset = SimpleNamespace(id="dataset-1") @@ -121,7 +153,18 @@ def test_vector_init_uses_default_and_custom_attributes(vector_factory_module): default_vector = vector_factory_module.Vector(dataset) custom_vector = vector_factory_module.Vector(dataset, attributes=["doc_id"]) - assert default_vector._attributes == ["doc_id", "dataset_id", "document_id", "doc_hash", "doc_type"] + # `is_summary` and `original_chunk_id` must be in the default return-properties + # projection so summary index retrieval works on backends that honor the list + # as an explicit projection (e.g. Weaviate). See #34884. + assert default_vector._attributes == [ + "doc_id", + "dataset_id", + "document_id", + "doc_hash", + "doc_type", + "is_summary", + "original_chunk_id", + ] assert custom_vector._attributes == ["doc_id"] assert default_vector._embeddings == "embeddings" assert default_vector._vector_processor == "processor" diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/tidb_on_qdrant/__init__.py b/api/tests/unit_tests/core/rag/datasource/vdb/tidb_on_qdrant/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/unit_tests/core/rag/datasource/vdb/weaviate/__init__.py b/api/tests/unit_tests/core/rag/datasource/vdb/weaviate/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/api/tests/unit_tests/core/rag/docstore/test_dataset_docstore.py b/api/tests/unit_tests/core/rag/docstore/test_dataset_docstore.py index a7b7c1595b..007a76aa66 100644 --- a/api/tests/unit_tests/core/rag/docstore/test_dataset_docstore.py +++ b/api/tests/unit_tests/core/rag/docstore/test_dataset_docstore.py @@ -721,6 +721,30 @@ class TestDatasetDocumentStoreMultimodelBinding: mock_db.session.add.assert_not_called() + def test_add_multimodel_documents_binding_with_none_document_id(self): + """Test that no bindings are added when document_id is None.""" + + mock_dataset = MagicMock(spec=Dataset) + mock_dataset.id = "test-dataset-id" + mock_dataset.tenant_id = "tenant-1" + + mock_attachment = MagicMock(spec=AttachmentDocument) + mock_attachment.metadata = {"doc_id": "attachment-1"} + + with patch("core.rag.docstore.dataset_docstore.db") as mock_db: + mock_session = MagicMock() + mock_db.session = mock_session + + store = DatasetDocumentStore( + dataset=mock_dataset, + user_id="test-user-id", + document_id=None, + ) + + store.add_multimodel_documents_binding("seg-1", [mock_attachment]) + + mock_db.session.add.assert_not_called() + class TestDatasetDocumentStoreAddDocumentsUpdateChild: """Tests for add_documents when updating existing documents with children.""" diff --git a/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py b/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py index 3563186186..051a1455ae 100644 --- a/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py +++ b/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py @@ -12,11 +12,11 @@ from unittest.mock import Mock, patch import numpy as np import pytest -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult, EmbeddingUsage from sqlalchemy.exc import IntegrityError from core.rag.embedding.cached_embedding import CacheEmbedding +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult, EmbeddingUsage from models.dataset import Embedding diff --git a/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py b/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py index 408cf14a51..4b8175b0b4 100644 --- a/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py +++ b/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py @@ -49,6 +49,10 @@ from unittest.mock import Mock, patch import numpy as np import pytest +from sqlalchemy.exc import IntegrityError + +from core.entities.embedding_type import EmbeddingInputType +from core.rag.embedding.cached_embedding import CacheEmbedding from graphon.model_runtime.entities.model_entities import ModelPropertyKey from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult, EmbeddingUsage from graphon.model_runtime.errors.invoke import ( @@ -56,10 +60,6 @@ from graphon.model_runtime.errors.invoke import ( InvokeConnectionError, InvokeRateLimitError, ) -from sqlalchemy.exc import IntegrityError - -from core.entities.embedding_type import EmbeddingInputType -from core.rag.embedding.cached_embedding import CacheEmbedding from models.dataset import Embedding diff --git a/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py b/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py index d4b987c832..4ba4d54fa0 100644 --- a/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py +++ b/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py @@ -1,15 +1,16 @@ from types import SimpleNamespace +from typing import Any from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, ImagePromptMessageContent -from graphon.model_runtime.entities.model_entities import ModelFeature from core.entities.knowledge_entities import PreviewDetail from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.index_processor.processor.paragraph_index_processor import ParagraphIndexProcessor from core.rag.models.document import AttachmentDocument, Document +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, ImagePromptMessageContent +from graphon.model_runtime.entities.model_entities import ModelFeature class TestParagraphIndexProcessor: @@ -71,7 +72,9 @@ class TestParagraphIndexProcessor: with pytest.raises(ValueError, match="No rules found in process rule"): processor.transform([Document(page_content="text", metadata={})], process_rule={"mode": "custom"}) - def test_transform_validates_segmentation(self, processor: ParagraphIndexProcessor, process_rule: dict) -> None: + def test_transform_validates_segmentation( + self, processor: ParagraphIndexProcessor, process_rule: dict[str, Any] + ) -> None: rules_without_segmentation = SimpleNamespace(segmentation=None) with patch( @@ -84,7 +87,9 @@ class TestParagraphIndexProcessor: process_rule={"mode": "custom", "rules": {"enabled": True}}, ) - def test_transform_builds_split_documents(self, processor: ParagraphIndexProcessor, process_rule: dict) -> None: + def test_transform_builds_split_documents( + self, processor: ParagraphIndexProcessor, process_rule: dict[str, Any] + ) -> None: source_document = Document(page_content="source", metadata={"dataset_id": "dataset-1", "document_id": "doc-1"}) splitter = Mock() splitter.split_documents.return_value = [ diff --git a/api/tests/unit_tests/core/rag/indexing/processor/test_parent_child_index_processor.py b/api/tests/unit_tests/core/rag/indexing/processor/test_parent_child_index_processor.py index c241b44d52..8ef0e046ef 100644 --- a/api/tests/unit_tests/core/rag/indexing/processor/test_parent_child_index_processor.py +++ b/api/tests/unit_tests/core/rag/indexing/processor/test_parent_child_index_processor.py @@ -258,10 +258,10 @@ class TestParentChildIndexProcessor: session.commit.assert_called_once() def test_clean_deletes_summaries_when_requested(self, processor: ParentChildIndexProcessor, dataset: Mock) -> None: - segment_query = Mock() - segment_query.filter.return_value.all.return_value = [SimpleNamespace(id="seg-1")] + scalars_result = Mock() + scalars_result.all.return_value = [SimpleNamespace(id="seg-1")] session = Mock() - session.query.return_value = segment_query + session.scalars.return_value = scalars_result session_ctx = MagicMock() session_ctx.__enter__.return_value = session session_ctx.__exit__.return_value = False diff --git a/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py b/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py index 98c47bec8f..bfae9001b7 100644 --- a/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py +++ b/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py @@ -1,4 +1,5 @@ from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, patch import pandas as pd @@ -77,7 +78,7 @@ class TestQAIndexProcessor: processor.transform([Document(page_content="text", metadata={})], process_rule={"mode": "custom"}) def test_transform_preview_calls_formatter_once( - self, processor: QAIndexProcessor, process_rule: dict, fake_flask_app + self, processor: QAIndexProcessor, process_rule: dict[str, Any], fake_flask_app ) -> None: document = Document(page_content="raw text", metadata={"dataset_id": "dataset-1", "document_id": "doc-1"}) split_node = Document(page_content=".question", metadata={}) @@ -119,7 +120,7 @@ class TestQAIndexProcessor: mock_format.assert_called_once() def test_transform_non_preview_uses_thread_batches( - self, processor: QAIndexProcessor, process_rule: dict, fake_flask_app + self, processor: QAIndexProcessor, process_rule: dict[str, Any], fake_flask_app ) -> None: documents = [ Document(page_content="doc-1", metadata={"document_id": "doc-1", "dataset_id": "dataset-1"}), @@ -220,10 +221,10 @@ class TestQAIndexProcessor: self, processor: QAIndexProcessor, dataset: Mock ) -> None: mock_segment = SimpleNamespace(id="seg-1") - mock_query = Mock() - mock_query.filter.return_value.all.return_value = [mock_segment] + scalars_result = Mock() + scalars_result.all.return_value = [mock_segment] mock_session = Mock() - mock_session.query.return_value = mock_query + mock_session.scalars.return_value = scalars_result session_context = MagicMock() session_context.__enter__.return_value = mock_session session_context.__exit__.return_value = False diff --git a/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py b/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py index 641c5d9ba0..7c4defc180 100644 --- a/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py +++ b/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py @@ -53,7 +53,6 @@ from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy.orm.exc import ObjectDeletedError from core.errors.error import ProviderTokenNotInitError @@ -64,6 +63,7 @@ from core.indexing_runner import ( ) from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from core.rag.models.document import ChildDocument, Document +from graphon.model_runtime.entities.model_entities import ModelType from libs.datetime_utils import naive_utc_now from models.dataset import Dataset, DatasetProcessRule from models.dataset import Document as DatasetDocument diff --git a/api/tests/unit_tests/core/rag/rerank/test_reranker.py b/api/tests/unit_tests/core/rag/rerank/test_reranker.py index c279b00d3b..8bc7dbf70d 100644 --- a/api/tests/unit_tests/core/rag/rerank/test_reranker.py +++ b/api/tests/unit_tests/core/rag/rerank/test_reranker.py @@ -17,7 +17,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.model_runtime.entities.rerank_entities import RerankDocument, RerankResult from core.model_manager import ModelInstance from core.rag.index_processor.constant.doc_type import DocType @@ -29,6 +28,7 @@ from core.rag.rerank.rerank_factory import RerankRunnerFactory from core.rag.rerank.rerank_model import RerankModelRunner from core.rag.rerank.rerank_type import RerankMode from core.rag.rerank.weight_rerank import WeightRerankRunner +from graphon.model_runtime.entities.rerank_entities import RerankDocument, RerankResult def create_mock_model_instance() -> ModelInstance: diff --git a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py index 40d138df90..89830f7517 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py @@ -1,14 +1,12 @@ import threading from contextlib import contextmanager, nullcontext from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, patch from uuid import uuid4 import pytest from flask import Flask, current_app -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.model_runtime.entities.model_entities import ModelFeature -from sqlalchemy import column from core.app.app_config.entities import ( DatasetEntity, @@ -35,6 +33,8 @@ from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.workflow.nodes.knowledge_retrieval import exc from core.workflow.nodes.knowledge_retrieval.retrieval import KnowledgeRetrievalRequest +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.model_runtime.entities.model_entities import ModelFeature from models.dataset import Dataset from models.enums import CreatorUserRole @@ -46,7 +46,7 @@ def create_mock_document( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. @@ -2022,7 +2022,7 @@ def create_mock_document_methods( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. @@ -4039,21 +4039,9 @@ class TestDatasetRetrievalAdditionalHelpers: def test_get_available_datasets(self, retrieval: DatasetRetrieval) -> None: session = Mock() - subquery_query = Mock() - subquery_query.where.return_value = subquery_query - subquery_query.group_by.return_value = subquery_query - subquery_query.having.return_value = subquery_query - subquery_query.subquery.return_value = SimpleNamespace( - c=SimpleNamespace( - dataset_id=column("dataset_id"), available_document_count=column("available_document_count") - ) - ) - - dataset_query = Mock() - dataset_query.outerjoin.return_value = dataset_query - dataset_query.where.return_value = dataset_query - dataset_query.all.return_value = [SimpleNamespace(id="d1"), None, SimpleNamespace(id="d2")] - session.query.side_effect = [subquery_query, dataset_query] + scalars_result = Mock() + scalars_result.all.return_value = [SimpleNamespace(id="d1"), None, SimpleNamespace(id="d2")] + session.scalars.return_value = scalars_result session_ctx = MagicMock() session_ctx.__enter__.return_value = session @@ -4104,7 +4092,7 @@ def _doc( dataset_id: str = "dataset-1", document_id: str = "document-1", doc_id: str = "node-1", - extra: dict | None = None, + extra: dict[str, Any] | None = None, ) -> Document: metadata = { "score": score, @@ -4902,22 +4890,21 @@ class TestInternalHooksCoverage: _scalars(segments), _scalars(bindings), ] - query = Mock() - query.where.return_value = query - session.query.return_value = query session_ctx = MagicMock() session_ctx.__enter__.return_value = session session_ctx.__exit__.return_value = False + sessionmaker_ctx = MagicMock() + sessionmaker_ctx.begin.return_value = session_ctx + with ( patch("core.rag.retrieval.dataset_retrieval.db", SimpleNamespace(engine=Mock())), - patch("core.rag.retrieval.dataset_retrieval.Session", return_value=session_ctx), + patch("core.rag.retrieval.dataset_retrieval.sessionmaker", return_value=sessionmaker_ctx), patch.object(retrieval, "_send_trace_task") as mock_trace, ): retrieval._on_retrieval_end(flask_app=app, documents=docs, message_id="m1", timer={"cost": 1}) - query.update.assert_called_once() - session.commit.assert_called_once() + session.execute.assert_called_once() mock_trace.assert_called_once() def test_retriever_variants(self, retrieval: DatasetRetrieval) -> None: diff --git a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py index 48782515d0..90feb4cf01 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import MagicMock, Mock, patch from uuid import uuid4 @@ -55,7 +56,7 @@ def create_mock_document( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. diff --git a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py index 5a2ecb8220..43c521dcfd 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py @@ -1,8 +1,7 @@ from unittest.mock import Mock -from graphon.model_runtime.entities.llm_entities import LLMUsage - from core.rag.retrieval.router.multi_dataset_function_call_router import FunctionCallMultiDatasetRouter +from graphon.model_runtime.entities.llm_entities import LLMUsage class TestFunctionCallMultiDatasetRouter: diff --git a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py index 539ac0f849..c56528cf55 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py @@ -1,13 +1,12 @@ from types import SimpleNamespace from unittest.mock import Mock, patch +from core.rag.retrieval.output_parser.react_output import ReactAction, ReactFinish +from core.rag.retrieval.router.multi_dataset_react_route import ReactMultiDatasetRouter from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.model_runtime.entities.message_entities import PromptMessageRole from graphon.model_runtime.entities.model_entities import ModelType -from core.rag.retrieval.output_parser.react_output import ReactAction, ReactFinish -from core.rag.retrieval.router.multi_dataset_react_route import ReactMultiDatasetRouter - class TestReactMultiDatasetRouter: def test_invoke_returns_none_when_no_tools(self) -> None: diff --git a/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py b/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py index e229d5fc1a..3d3322094e 100644 --- a/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py @@ -9,10 +9,10 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowType from core.repositories.celery_workflow_execution_repository import CeleryWorkflowExecutionRepository +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowType from libs.datetime_utils import naive_utc_now from models import Account, EndUser from models.enums import WorkflowRunTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py b/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py index 7dbf78d0f0..05b4f3a053 100644 --- a/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py @@ -9,14 +9,14 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest + +from core.repositories.celery_workflow_node_execution_repository import CeleryWorkflowNodeExecutionRepository +from core.repositories.factory import OrderConfig from graphon.entities.workflow_node_execution import ( WorkflowNodeExecution, WorkflowNodeExecutionStatus, ) from graphon.enums import BuiltinNodeTypes - -from core.repositories.celery_workflow_node_execution_repository import CeleryWorkflowNodeExecutionRepository -from core.repositories.factory import OrderConfig from libs.datetime_utils import naive_utc_now from models import Account, EndUser from models.workflow import WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py b/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py index 0fc82dda53..8be1ac318c 100644 --- a/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py +++ b/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py @@ -7,11 +7,6 @@ from datetime import datetime from types import SimpleNamespace import pytest -from graphon.nodes.human_input.entities import ( - FormDefinition, - UserAction, -) -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from core.repositories.human_input_repository import ( HumanInputFormRecord, @@ -26,6 +21,11 @@ from core.workflow.human_input_compat import ( ExternalRecipient, MemberRecipient, ) +from graphon.nodes.human_input.entities import ( + FormDefinition, + UserAction, +) +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.human_input import ( EmailExternalRecipientPayload, diff --git a/api/tests/unit_tests/core/repositories/test_human_input_repository.py b/api/tests/unit_tests/core/repositories/test_human_input_repository.py index 8ff0e40587..1297a95df1 100644 --- a/api/tests/unit_tests/core/repositories/test_human_input_repository.py +++ b/api/tests/unit_tests/core/repositories/test_human_input_repository.py @@ -9,8 +9,6 @@ from typing import Any from unittest.mock import MagicMock import pytest -from graphon.nodes.human_input.entities import HumanInputNodeData, UserAction -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from core.repositories.human_input_repository import ( FormCreateParams, @@ -31,6 +29,8 @@ from core.workflow.human_input_compat import ( MemberRecipient, WebAppDeliveryMethod, ) +from graphon.nodes.human_input.entities import HumanInputNodeData, UserAction +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.human_input import HumanInputFormRecipient, RecipientType diff --git a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py index e5c3e85487..a08c5729cb 100644 --- a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py @@ -3,12 +3,12 @@ from unittest.mock import MagicMock from uuid import uuid4 import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus, WorkflowType from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus, WorkflowType from models import Account, CreatorUserRole, EndUser, WorkflowRun from models.enums import WorkflowRunTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py index 5b4d26b780..6af7b02d4c 100644 --- a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py @@ -10,12 +10,6 @@ from unittest.mock import MagicMock, Mock import psycopg2.errors import pytest -from graphon.entities import WorkflowNodeExecution -from graphon.enums import ( - BuiltinNodeTypes, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) from sqlalchemy import Engine, create_engine from sqlalchemy.exc import IntegrityError from sqlalchemy.orm import sessionmaker @@ -29,6 +23,12 @@ from core.repositories.sqlalchemy_workflow_node_execution_repository import ( _find_first, _replace_or_append_offload, ) +from graphon.entities import WorkflowNodeExecution +from graphon.enums import ( + BuiltinNodeTypes, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) from models import Account, EndUser from models.enums import ExecutionOffLoadType from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionOffload, WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py index 84fe522388..abdbc72085 100644 --- a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py +++ b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py @@ -4,17 +4,17 @@ from unittest.mock import MagicMock, Mock import psycopg2.errors import pytest -from graphon.entities.workflow_node_execution import ( - WorkflowNodeExecution, - WorkflowNodeExecutionStatus, -) -from graphon.enums import BuiltinNodeTypes from sqlalchemy.exc import IntegrityError from sqlalchemy.orm import sessionmaker from core.repositories.sqlalchemy_workflow_node_execution_repository import ( SQLAlchemyWorkflowNodeExecutionRepository, ) +from graphon.entities.workflow_node_execution import ( + WorkflowNodeExecution, + WorkflowNodeExecutionStatus, +) +from graphon.enums import BuiltinNodeTypes from libs.datetime_utils import naive_utc_now from models import Account, WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py index 27729e7f06..5af1376a0a 100644 --- a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py +++ b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py @@ -11,17 +11,17 @@ from datetime import UTC, datetime from typing import Any from unittest.mock import MagicMock -from graphon.entities.workflow_node_execution import ( - WorkflowNodeExecution, - WorkflowNodeExecutionStatus, -) -from graphon.enums import BuiltinNodeTypes from sqlalchemy import Engine from configs import dify_config from core.repositories.sqlalchemy_workflow_node_execution_repository import ( SQLAlchemyWorkflowNodeExecutionRepository, ) +from graphon.entities.workflow_node_execution import ( + WorkflowNodeExecution, + WorkflowNodeExecutionStatus, +) +from graphon.enums import BuiltinNodeTypes from models import Account, WorkflowNodeExecutionTriggeredFrom from models.enums import ExecutionOffLoadType from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionOffload diff --git a/api/tests/unit_tests/core/test_file.py b/api/tests/unit_tests/core/test_file.py index ac65d0c02b..f17927f16b 100644 --- a/api/tests/unit_tests/core/test_file.py +++ b/api/tests/unit_tests/core/test_file.py @@ -1,7 +1,6 @@ import json from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig - from models.workflow import Workflow diff --git a/api/tests/unit_tests/core/test_model_manager.py b/api/tests/unit_tests/core/test_model_manager.py index f5efb78b61..afea9144c0 100644 --- a/api/tests/unit_tests/core/test_model_manager.py +++ b/api/tests/unit_tests/core/test_model_manager.py @@ -2,12 +2,12 @@ from unittest.mock import MagicMock, patch import pytest import redis -from graphon.model_runtime.entities.model_entities import ModelType from pytest_mock import MockerFixture from core.entities.provider_entities import ModelLoadBalancingConfiguration from core.model_manager import LBModelManager from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.model_entities import ModelType @pytest.fixture diff --git a/api/tests/unit_tests/core/test_provider_configuration.py b/api/tests/unit_tests/core/test_provider_configuration.py index 331166fe63..b19a21d7f4 100644 --- a/api/tests/unit_tests/core/test_provider_configuration.py +++ b/api/tests/unit_tests/core/test_provider_configuration.py @@ -1,15 +1,6 @@ from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FormOption, - FormType, - ProviderEntity, -) from core.entities.provider_configuration import ProviderConfiguration, SystemConfigurationStatus from core.entities.provider_entities import ( @@ -21,6 +12,15 @@ from core.entities.provider_entities import ( RestrictModel, SystemConfiguration, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FormOption, + FormType, + ProviderEntity, +) from models.provider import Provider, ProviderType diff --git a/api/tests/unit_tests/core/test_provider_manager.py b/api/tests/unit_tests/core/test_provider_manager.py index ee26172459..f45b43082c 100644 --- a/api/tests/unit_tests/core/test_provider_manager.py +++ b/api/tests/unit_tests/core/test_provider_manager.py @@ -2,12 +2,12 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, PropertyMock, patch import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import ModelType from pytest_mock import MockerFixture from core.entities.provider_entities import ModelSettings from core.provider_manager import ProviderManager +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType from models.provider import LoadBalancingModelConfig, ProviderModelSetting, TenantDefaultModel from models.provider_ids import ModelProviderID diff --git a/api/tests/unit_tests/core/tools/test_builtin_tool_base.py b/api/tests/unit_tests/core/tools/test_builtin_tool_base.py index 5d744f88c9..1ff81f6120 100644 --- a/api/tests/unit_tests/core/tools/test_builtin_tool_base.py +++ b/api/tests/unit_tests/core/tools/test_builtin_tool_base.py @@ -6,13 +6,13 @@ from typing import Any from unittest.mock import patch import pytest -from graphon.model_runtime.entities.message_entities import UserPromptMessage from core.app.entities.app_invoke_entities import InvokeFrom from core.tools.__base.tool_runtime import ToolRuntime from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvokeMessage, ToolProviderType +from graphon.model_runtime.entities.message_entities import UserPromptMessage class _BuiltinDummyTool(BuiltinTool): diff --git a/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py b/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py index ee0ce51eec..c7829fc0d7 100644 --- a/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py +++ b/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py @@ -6,8 +6,6 @@ from datetime import date from types import SimpleNamespace import pytest -from graphon.file import FileType -from graphon.model_runtime.entities.model_entities import ModelPropertyKey from core.app.entities.app_invoke_entities import InvokeFrom from core.tools.__base.tool_runtime import ToolRuntime @@ -29,6 +27,8 @@ from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvokeMessage from core.tools.errors import ToolInvokeError +from graphon.file import FileType +from graphon.model_runtime.entities.model_entities import ModelPropertyKey def _build_builtin_tool(tool_cls: type[BuiltinTool]) -> BuiltinTool: diff --git a/api/tests/unit_tests/core/tools/test_custom_tool.py b/api/tests/unit_tests/core/tools/test_custom_tool.py index 79b8eaaa87..f35546b025 100644 --- a/api/tests/unit_tests/core/tools/test_custom_tool.py +++ b/api/tests/unit_tests/core/tools/test_custom_tool.py @@ -1,6 +1,7 @@ from __future__ import annotations from types import SimpleNamespace +from typing import Any import httpx import pytest @@ -14,7 +15,7 @@ from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvo from core.tools.errors import ToolInvokeError, ToolParameterValidationError, ToolProviderCredentialValidationError -def _build_tool(*, openapi: dict | None = None) -> ApiTool: +def _build_tool(*, openapi: dict[str, Any] | None = None) -> ApiTool: entity = ToolEntity( identity=ToolIdentity( author="author", diff --git a/api/tests/unit_tests/core/tools/test_tool_engine.py b/api/tests/unit_tests/core/tools/test_tool_engine.py index 40c107667c..cd16557ef6 100644 --- a/api/tests/unit_tests/core/tools/test_tool_engine.py +++ b/api/tests/unit_tests/core/tools/test_tool_engine.py @@ -260,6 +260,28 @@ def test_agent_invoke_engine_meta_error(): assert error_meta.error == "meta failure" +def test_convert_tool_response_excludes_variable_messages(): + """Regression test for issue #34723. + + WorkflowTool._invoke yields VARIABLE, TEXT, and suppressed-JSON messages. + _convert_tool_response_to_str must skip VARIABLE messages so that the + returned string contains only the TEXT representation and not a + duplicated, garbled Pydantic repr of the same data. + """ + tool = _build_tool() + outputs = {"reports": "hello"} + messages = [ + tool.create_variable_message(variable_name="reports", variable_value="hello"), + tool.create_text_message('{"reports": "hello"}'), + tool.create_json_message(outputs, suppress_output=True), + ] + + result = ToolEngine._convert_tool_response_to_str(messages) + + assert result == '{"reports": "hello"}' + assert "variable_name" not in result + + def test_agent_invoke_tool_invoke_error(): tool = _build_tool(with_llm_parameter=True) callback = Mock() diff --git a/api/tests/unit_tests/core/tools/test_tool_file_manager.py b/api/tests/unit_tests/core/tools/test_tool_file_manager.py index 7fcebde3c5..ccffdf16d1 100644 --- a/api/tests/unit_tests/core/tools/test_tool_file_manager.py +++ b/api/tests/unit_tests/core/tools/test_tool_file_manager.py @@ -12,9 +12,9 @@ from unittest.mock import MagicMock, Mock, patch import httpx import pytest -from graphon.file import FileTransferMethod from core.tools.tool_file_manager import ToolFileManager +from graphon.file import FileTransferMethod def _setup_tool_file_signing(monkeypatch: pytest.MonkeyPatch) -> dict[str, str]: @@ -129,7 +129,7 @@ def test_get_file_binary_returns_none_when_not_found() -> None: # Arrange manager = ToolFileManager() session = Mock() - session.query.return_value.where.return_value.first.return_value = None + session.scalar.return_value = None # Act with _patch_session_factory(session): @@ -144,7 +144,7 @@ def test_get_file_binary_returns_bytes_when_found() -> None: manager = ToolFileManager() tool_file = SimpleNamespace(file_key="k1", mimetype="text/plain") session = Mock() - session.query.return_value.where.return_value.first.return_value = tool_file + session.scalar.return_value = tool_file # Act with patch("core.tools.tool_file_manager.storage") as storage: @@ -160,11 +160,7 @@ def test_get_file_binary_by_message_file_id_when_messagefile_missing() -> None: # Arrange manager = ToolFileManager() session = Mock() - first_query = Mock() - second_query = Mock() - first_query.where.return_value.first.return_value = None - second_query.where.return_value.first.return_value = None - session.query.side_effect = [first_query, second_query] + session.scalar.side_effect = [None, None] # Act with _patch_session_factory(session): @@ -179,11 +175,7 @@ def test_get_file_binary_by_message_file_id_when_url_is_none() -> None: manager = ToolFileManager() message_file = SimpleNamespace(url=None) session = Mock() - first_query = Mock() - second_query = Mock() - first_query.where.return_value.first.return_value = message_file - second_query.where.return_value.first.return_value = None - session.query.side_effect = [first_query, second_query] + session.scalar.side_effect = [message_file, None] # Act with _patch_session_factory(session): @@ -199,11 +191,7 @@ def test_get_file_binary_by_message_file_id_returns_bytes_when_found() -> None: message_file = SimpleNamespace(url="https://x/files/tools/tool123.png") tool_file = SimpleNamespace(file_key="k2", mimetype="image/png") session = Mock() - first_query = Mock() - second_query = Mock() - first_query.where.return_value.first.return_value = message_file - second_query.where.return_value.first.return_value = tool_file - session.query.side_effect = [first_query, second_query] + session.scalar.side_effect = [message_file, tool_file] # Act with patch("core.tools.tool_file_manager.storage") as storage: @@ -219,7 +207,7 @@ def test_get_file_generator_returns_none_when_toolfile_missing() -> None: # Arrange manager = ToolFileManager() session = Mock() - session.query.return_value.where.return_value.first.return_value = None + session.scalar.return_value = None # Act with _patch_session_factory(session): @@ -242,7 +230,7 @@ def test_get_file_generator_returns_stream_when_found() -> None: size=12, ) session = Mock() - session.query.return_value.where.return_value.first.return_value = tool_file + session.scalar.return_value = tool_file # Act with patch("core.tools.tool_file_manager.storage") as storage: diff --git a/api/tests/unit_tests/core/tools/test_tool_label_manager.py b/api/tests/unit_tests/core/tools/test_tool_label_manager.py index 8c0e7e9419..e13f430f9b 100644 --- a/api/tests/unit_tests/core/tools/test_tool_label_manager.py +++ b/api/tests/unit_tests/core/tools/test_tool_label_manager.py @@ -2,7 +2,7 @@ from __future__ import annotations from types import SimpleNamespace from typing import Any -from unittest.mock import PropertyMock, patch +from unittest.mock import MagicMock, PropertyMock, patch import pytest @@ -12,11 +12,13 @@ from core.tools.tool_label_manager import ToolLabelManager from core.tools.workflow_as_tool.provider import WorkflowToolProviderController +# Create a mock class for testing abstract/base classes class _ConcreteBuiltinToolProviderController(BuiltinToolProviderController): def _validate_credentials(self, user_id: str, credentials: dict[str, Any]): return None +# Factory function to create a "lightweight" controller for testing def _api_controller(provider_id: str = "api-1") -> ApiToolProviderController: controller = object.__new__(ApiToolProviderController) controller.provider_id = provider_id @@ -29,6 +31,7 @@ def _workflow_controller(provider_id: str = "wf-1") -> WorkflowToolProviderContr return controller +# Test pure logic: filtering and deduplication def test_tool_label_manager_filter_tool_labels(): filtered = ToolLabelManager.filter_tool_labels(["search", "search", "invalid", "news"]) assert set(filtered) == {"search", "news"} @@ -36,22 +39,68 @@ def test_tool_label_manager_filter_tool_labels(): def test_tool_label_manager_update_tool_labels_db(): + """ + Test the database update logic for tool labels. + Focus: Verify that labels are filtered, de-duplicated, and safely handled within a database session. + """ + # 1. Setup expected data from the controller controller = _api_controller("api-1") - with patch("core.tools.tool_label_manager.db") as mock_db: + expected_id = controller.provider_id + expected_type = controller.provider_type + + # 2. Patching External Dependencies + # - We patch 'db' to prevent Flask from trying to access a real database. + # - We patch 'sessionmaker' to intercept and control the creation of SQLAlchemy sessions. + with ( + patch("core.tools.tool_label_manager.db"), + patch("core.tools.tool_label_manager.sessionmaker") as mock_sessionmaker, + ): + # 3. Constructing the "Mocking Chain" + # In the business logic, we use: with sessionmaker(db.engine).begin() as _session: + # We need to link our 'mock_session' to the end of this complex context manager chain: + # Step A: sessionmaker(db.engine) -> returns an object (mock_sessionmaker.return_value) + # Step B: .begin() -> returns a context manager (begin.return_value) + # Step C: with ... as _session: -> calls __enter__(), and _session gets the __enter__.return_value + mock_session = MagicMock() + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + + # 4. Trigger the logic under test + # Input: ["search", "search", "invalid"] + # Logic: + # - "invalid" should be filtered out (not in default_tool_label_name_list). + # - The duplicate "search" should be merged (unique labels). ToolLabelManager.update_tool_labels(controller, ["search", "search", "invalid"]) - mock_db.session.execute.assert_called_once() - # only one valid unique label should be inserted. - assert mock_db.session.add.call_count == 1 - mock_db.session.commit.assert_called_once() + # 5. Behavior Assertion: DELETE operation + # Verify that the manager first attempts to clear existing labels for this specific tool. + # This ensures the update is idempotent. + mock_session.execute.assert_called_once() + + # 6. Behavior Assertion: INSERT operation + # Verify that only ONE valid label ("search") was added after filtering and deduplication. + # If call_count == 1, it proves filter_tool_labels() worked as expected. + assert mock_session.add.call_count == 1 + + # 7. State Assertion: Data Integrity & Isolation + # Inspect the actual object passed to session.add() to ensure it has correct properties. + # This confirms that the data isolation (tool_id + tool_type) we refactored is active. + call_args = mock_session.add.call_args + added_label = call_args[0][0] # Retrieve the ToolLabelBinding instance + + assert added_label.label_name == "search", "The label name should be 'search' after filtering." + assert added_label.tool_id == expected_id, "The tool_id must match the provider_id for correct binding." + assert added_label.tool_type == expected_type, "Isolation failed: tool_type must be verified during update." +# Test error handling def test_tool_label_manager_update_tool_labels_unsupported(): with pytest.raises(ValueError, match="Unsupported tool type"): ToolLabelManager.update_tool_labels(object(), ["search"]) # type: ignore[arg-type] +# Test retrieval logic def test_tool_label_manager_get_tool_labels_for_builtin_and_db(): + # Mocking a property (@property) using PropertyMock with patch.object( _ConcreteBuiltinToolProviderController, "tool_labels", @@ -62,29 +111,67 @@ def test_tool_label_manager_get_tool_labels_for_builtin_and_db(): assert ToolLabelManager.get_tool_labels(builtin) == ["search", "news"] api = _api_controller("api-1") - with patch("core.tools.tool_label_manager.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = ["search", "news"] - labels = ToolLabelManager.get_tool_labels(api) - assert labels == ["search", "news"] + with ( + patch("core.tools.tool_label_manager.db"), + patch("core.tools.tool_label_manager.sessionmaker") as mock_sessionmaker, + ): + mock_session = MagicMock() + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + # Inject mock data into the query result: session.scalars(stmt).all() + mock_session.scalars.return_value.all.return_value = ["search", "news"] + + labels = ToolLabelManager.get_tool_labels(api) + assert labels == ["search", "news"] + + +def test_tool_label_manager_get_tool_labels_unsupported(): + """ + Negative Test: Ensure get_tool_labels raises ValueError for unsupported controller types. + This protects the internal API contract against accidental regressions during refactoring. + """ + # Passing a generic object() which doesn't match Api, Workflow, or Builtin controllers. with pytest.raises(ValueError, match="Unsupported tool type"): ToolLabelManager.get_tool_labels(object()) # type: ignore[arg-type] +# Test batch processing and mapping def test_tool_label_manager_get_tools_labels_batch(): assert ToolLabelManager.get_tools_labels([]) == {} api = _api_controller("api-1") wf = _workflow_controller("wf-1") + + # SimpleNamespace is a quick way to simulate SQLAlchemy row objects records = [ SimpleNamespace(tool_id="api-1", label_name="search"), SimpleNamespace(tool_id="api-1", label_name="news"), SimpleNamespace(tool_id="wf-1", label_name="utilities"), ] - with patch("core.tools.tool_label_manager.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = records + + with ( + patch("core.tools.tool_label_manager.db"), + patch("core.tools.tool_label_manager.sessionmaker") as mock_sessionmaker, + ): + mock_session = MagicMock() + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + + # Simulating the batch query result + mock_session.scalars.return_value.all.return_value = records + labels = ToolLabelManager.get_tools_labels([api, wf]) + + # Verify the final dictionary mapping assert labels == {"api-1": ["search", "news"], "wf-1": ["utilities"]} + +def test_tool_label_manager_get_tools_labels_unsupported(): + """ + Negative Test: Ensure get_tools_labels raises ValueError if the list contains + unsupported controller types, even alongside valid ones. + """ + api = _api_controller("api-1") + + # Passing a list with one valid controller and one invalid object() with pytest.raises(ValueError, match="Unsupported tool type"): ToolLabelManager.get_tools_labels([api, object()]) # type: ignore[list-item] diff --git a/api/tests/unit_tests/core/tools/test_tool_manager.py b/api/tests/unit_tests/core/tools/test_tool_manager.py index 31b68f0b3f..9ebaa0417b 100644 --- a/api/tests/unit_tests/core/tools/test_tool_manager.py +++ b/api/tests/unit_tests/core/tools/test_tool_manager.py @@ -637,7 +637,7 @@ def test_list_default_builtin_providers_for_postgres_and_mysql(): for scheme in ("postgresql", "mysql"): session = Mock() session.execute.return_value.all.return_value = [SimpleNamespace(id="id-1"), SimpleNamespace(id="id-2")] - session.query.return_value.where.return_value.all.return_value = provider_records + session.scalars.return_value = iter(provider_records) with patch("core.tools.tool_manager.dify_config", SimpleNamespace(SQLALCHEMY_DATABASE_URI_SCHEME=scheme)): with patch("core.tools.tool_manager.db") as mock_db: diff --git a/api/tests/unit_tests/core/tools/utils/test_message_transformer.py b/api/tests/unit_tests/core/tools/utils/test_message_transformer.py index 6454a5bcd1..5f34135af4 100644 --- a/api/tests/unit_tests/core/tools/utils/test_message_transformer.py +++ b/api/tests/unit_tests/core/tools/utils/test_message_transformer.py @@ -1,3 +1,5 @@ +from typing import Any + import pytest import core.tools.utils.message_transformer as mt @@ -13,7 +15,7 @@ class _FakeToolFile: class _FakeToolFileManager: """Fake ToolFileManager to capture the mimetype passed in.""" - last_call: dict | None = None + last_call: dict[str, Any] | None = None def __init__(self, *args, **kwargs): pass diff --git a/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py b/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py index 52f262e1cf..44785f939c 100644 --- a/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py +++ b/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py @@ -10,9 +10,12 @@ from __future__ import annotations from decimal import Decimal from types import SimpleNamespace +from typing import Any from unittest.mock import Mock, patch import pytest + +from core.tools.utils.model_invocation_utils import InvokeModelError, ModelInvocationUtils from graphon.model_runtime.entities.model_entities import ModelPropertyKey from graphon.model_runtime.errors.invoke import ( InvokeAuthorizationError, @@ -22,10 +25,8 @@ from graphon.model_runtime.errors.invoke import ( InvokeServerUnavailableError, ) -from core.tools.utils.model_invocation_utils import InvokeModelError, ModelInvocationUtils - -def _mock_model_instance(*, schema: dict | None = None) -> SimpleNamespace: +def _mock_model_instance(*, schema: dict[str, Any] | None = None) -> SimpleNamespace: model_type_instance = Mock() model_type_instance.get_model_schema.return_value = ( SimpleNamespace(model_properties=schema or {}) if schema is not None else None diff --git a/api/tests/unit_tests/core/tools/utils/test_parser.py b/api/tests/unit_tests/core/tools/utils/test_parser.py index 40f91b12a0..032b1377a4 100644 --- a/api/tests/unit_tests/core/tools/utils/test_parser.py +++ b/api/tests/unit_tests/core/tools/utils/test_parser.py @@ -1,4 +1,5 @@ from json.decoder import JSONDecodeError +from typing import Any from unittest.mock import Mock, patch import pytest @@ -259,8 +260,8 @@ def test_parse_openapi_to_tool_bundle_server_env_and_refs(app): }, } - extra_info: dict = {} - warning: dict = {} + extra_info: dict[str, Any] = {} + warning: dict[str, Any] = {} with app.test_request_context(headers={"X-Request-Env": "prod"}): bundles = ApiBasedToolSchemaParser.parse_openapi_to_tool_bundle(openapi, extra_info=extra_info, warning=warning) @@ -298,7 +299,7 @@ def test_parse_swagger_to_openapi_branches(): } ) - warning: dict = {"seed": True} + warning: dict[str, Any] = {"seed": True} converted = ApiBasedToolSchemaParser.parse_swagger_to_openapi( { "servers": [{"url": "https://x"}], diff --git a/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py b/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py index 0e3a7e623a..43f3fbd5c9 100644 --- a/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py +++ b/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py @@ -1,9 +1,9 @@ import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.tools.entities.tool_entities import ToolParameter, WorkflowToolParameterConfiguration from core.tools.errors import WorkflowToolHumanInputNotSupportedError from core.tools.utils.workflow_configuration_sync import WorkflowToolConfigurationUtils +from graphon.variables.input_entities import VariableEntity, VariableEntityType def test_ensure_no_human_input_nodes_passes_for_non_human_input(): diff --git a/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py b/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py index 2607861b59..5a585c609a 100644 --- a/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py +++ b/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py @@ -4,7 +4,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ( @@ -14,6 +13,7 @@ from core.tools.entities.tool_entities import ( ToolProviderType, ) from core.tools.workflow_as_tool.provider import WorkflowToolProviderController +from graphon.variables.input_entities import VariableEntity, VariableEntityType def _controller() -> WorkflowToolProviderController: @@ -43,7 +43,7 @@ def test_get_db_provider_tool_builds_entity(): controller = _controller() session = Mock() workflow = SimpleNamespace(graph_dict={"nodes": []}, features_dict={}) - session.query.return_value.where.return_value.first.return_value = workflow + session.scalar.return_value = workflow app = SimpleNamespace(id="app-1") db_provider = SimpleNamespace( id="provider-1", @@ -136,7 +136,7 @@ def test_from_db_builds_controller(): parameter_configurations=[], ) session = _mock_session_with_begin() - session.query.return_value.where.return_value.first.return_value = db_provider + session.scalar.return_value = db_provider session.get.side_effect = [app, user] fake_cm = MagicMock() fake_cm.__enter__.return_value = session @@ -163,7 +163,7 @@ def test_get_tools_returns_empty_when_provider_missing(): mock_db.engine = object() with patch("core.tools.workflow_as_tool.provider.Session") as session_cls: session = _mock_session_with_begin() - session.query.return_value.where.return_value.first.return_value = None + session.scalar.return_value = None session_cls.return_value.__enter__.return_value = session assert controller.get_tools("tenant-1") == [] @@ -189,7 +189,7 @@ def test_get_tools_raises_when_app_missing(): mock_db.engine = object() with patch("core.tools.workflow_as_tool.provider.Session") as session_cls: session = _mock_session_with_begin() - session.query.return_value.where.return_value.first.return_value = db_provider + session.scalar.return_value = db_provider session.get.return_value = None session_cls.return_value.__enter__.return_value = session with pytest.raises(ValueError, match="app not found"): diff --git a/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py b/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py index c20edd7400..72a73dd936 100644 --- a/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py +++ b/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py @@ -11,7 +11,6 @@ from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod, FileType from core.app.entities.app_invoke_entities import InvokeFrom from core.tools.__base.tool_runtime import ToolRuntime @@ -25,6 +24,7 @@ from core.tools.entities.tool_entities import ( ) from core.tools.errors import ToolInvokeError from core.tools.workflow_as_tool.tool import WorkflowTool +from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod, FileType class StubScalars: diff --git a/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py b/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py index 78622b78b6..fb7dc52838 100644 --- a/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py +++ b/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py @@ -8,10 +8,10 @@ and select_trigger_debug_events orchestrator. from __future__ import annotations from datetime import datetime +from typing import Any from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes, NodeType from core.plugin.entities.request import TriggerInvokeEventResponse from core.trigger.constants import ( @@ -27,10 +27,11 @@ from core.trigger.debug.event_selectors import ( select_trigger_debug_events, ) from core.trigger.debug.events import PluginTriggerDebugEvent, WebhookDebugEvent +from graphon.enums import BuiltinNodeTypes, NodeType from tests.unit_tests.core.trigger.conftest import VALID_PROVIDER_ID -def _make_poller_args(node_config: dict | None = None) -> dict: +def _make_poller_args(node_config: dict[str, Any] | None = None) -> dict[str, Any]: return { "tenant_id": "t1", "user_id": "u1", diff --git a/api/tests/unit_tests/core/variables/test_segment.py b/api/tests/unit_tests/core/variables/test_segment.py index 7406b88270..72052c8c05 100644 --- a/api/tests/unit_tests/core/variables/test_segment.py +++ b/api/tests/unit_tests/core/variables/test_segment.py @@ -2,6 +2,11 @@ import dataclasses import orjson import pytest +from pydantic import BaseModel + +from core.helper import encrypter +from core.workflow.system_variables import build_bootstrap_variables, build_system_variables +from core.workflow.variable_pool_initializer import add_variables_to_pool from graphon.file import File, FileTransferMethod, FileType from graphon.runtime import VariablePool from graphon.variables.segment_group import SegmentGroup @@ -42,11 +47,6 @@ from graphon.variables.variables import ( StringVariable, Variable, ) -from pydantic import BaseModel - -from core.helper import encrypter -from core.workflow.system_variables import build_bootstrap_variables, build_system_variables -from core.workflow.variable_pool_initializer import add_variables_to_pool def _build_variable_pool( diff --git a/api/tests/unit_tests/core/variables/test_segment_type.py b/api/tests/unit_tests/core/variables/test_segment_type.py index 37ecd2890b..d4e862220a 100644 --- a/api/tests/unit_tests/core/variables/test_segment_type.py +++ b/api/tests/unit_tests/core/variables/test_segment_type.py @@ -1,4 +1,5 @@ import pytest + from graphon.variables.segment_group import SegmentGroup from graphon.variables.segments import StringSegment from graphon.variables.types import ArrayValidation, SegmentType diff --git a/api/tests/unit_tests/core/variables/test_segment_type_validation.py b/api/tests/unit_tests/core/variables/test_segment_type_validation.py index 09254e17a3..94e788edb2 100644 --- a/api/tests/unit_tests/core/variables/test_segment_type_validation.py +++ b/api/tests/unit_tests/core/variables/test_segment_type_validation.py @@ -9,6 +9,7 @@ from dataclasses import dataclass from typing import Any import pytest + from graphon.file import File, FileTransferMethod, FileType from graphon.variables.segment_group import SegmentGroup from graphon.variables.segments import ( diff --git a/api/tests/unit_tests/core/variables/test_variables.py b/api/tests/unit_tests/core/variables/test_variables.py index 75b01bf42e..dae5e1ce98 100644 --- a/api/tests/unit_tests/core/variables/test_variables.py +++ b/api/tests/unit_tests/core/variables/test_variables.py @@ -1,4 +1,6 @@ import pytest +from pydantic import ValidationError + from graphon.variables import ( ArrayFileVariable, ArrayVariable, @@ -10,7 +12,6 @@ from graphon.variables import ( StringVariable, ) from graphon.variables.variables import VariableBase -from pydantic import ValidationError def test_frozen_variables(): diff --git a/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py b/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py index 41627f5e0b..025d79b25d 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py @@ -5,12 +5,13 @@ Shared fixtures for ObservabilityLayer tests. from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import SimpleSpanProcessor from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter from opentelemetry.trace import set_tracer_provider +from graphon.enums import BuiltinNodeTypes + @pytest.fixture def memory_span_exporter(): @@ -61,9 +62,8 @@ def mock_llm_node(): @pytest.fixture def mock_tool_node(): """Create a mock Tool Node with tool-specific attributes.""" - from graphon.nodes.tool.entities import ToolNodeData - from core.tools.entities.tool_entities import ToolProviderType + from graphon.nodes.tool.entities import ToolNodeData node = MagicMock() node.id = "test-tool-node-id" diff --git a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py index 99d131737e..5d6667257f 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py @@ -3,17 +3,16 @@ from datetime import datetime from types import SimpleNamespace from unittest.mock import MagicMock, patch +from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, UserFrom +from core.app.workflow.layers.llm_quota import LLMQuotaLayer +from core.errors.error import QuotaExceededError +from core.model_manager import ModelInstance from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.graph_engine.entities.commands import CommandType from graphon.graph_events import NodeRunSucceededEvent from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.node_events import NodeRunResult -from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, UserFrom -from core.app.workflow.layers.llm_quota import LLMQuotaLayer -from core.errors.error import QuotaExceededError -from core.model_manager import ModelInstance - def _build_dify_context() -> DifyRunContext: return DifyRunContext( diff --git a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py index 9cf72763ee..919f15efd0 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py @@ -13,10 +13,10 @@ Test coverage: from unittest.mock import patch import pytest -from graphon.enums import BuiltinNodeTypes from opentelemetry.trace import StatusCode from core.app.workflow.layers.observability import ObservabilityLayer +from graphon.enums import BuiltinNodeTypes class TestObservabilityLayerInitialization: diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py index 88989db856..76b2984a4b 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py @@ -7,12 +7,11 @@ requiring external services (LLM, Agent, Tool, Knowledge Retrieval, HTTP Request from typing import TYPE_CHECKING, Any +from core.workflow.node_factory import DifyNodeFactory from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.base.node import Node -from core.workflow.node_factory import DifyNodeFactory - from .test_mock_nodes import ( MockAgentNode, MockCodeNode, diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py index 8b7fbd1b30..971b9b2bbf 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py @@ -10,6 +10,10 @@ from collections.abc import Generator, Mapping from typing import TYPE_CHECKING, Any, Optional from unittest.mock import MagicMock +from core.model_manager import ModelInstance +from core.workflow.node_runtime import DifyToolNodeRuntime +from core.workflow.nodes.agent import AgentNode +from core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node import KnowledgeRetrievalNode from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent @@ -27,11 +31,6 @@ from graphon.nodes.template_transform import TemplateTransformNode from graphon.nodes.tool import ToolNode from graphon.template_rendering import Jinja2TemplateRenderer, TemplateRenderError -from core.model_manager import ModelInstance -from core.workflow.node_runtime import DifyToolNodeRuntime -from core.workflow.nodes.agent import AgentNode -from core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node import KnowledgeRetrievalNode - if TYPE_CHECKING: from graphon.entities import GraphInitParams from graphon.runtime import GraphRuntimeState diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py b/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py index 8311a1e847..55a329eba9 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py @@ -4,6 +4,13 @@ from dataclasses import dataclass from datetime import datetime, timedelta from typing import Any, Protocol +from core.repositories.human_input_repository import ( + FormCreateParams, + HumanInputFormEntity, + HumanInputFormRepository, +) +from core.workflow.node_runtime import DifyHumanInputNodeRuntime +from core.workflow.system_variables import build_system_variables from graphon.entities import WorkflowStartReason from graphon.graph import Graph from graphon.graph_engine import GraphEngine, GraphEngineConfig @@ -23,14 +30,6 @@ from graphon.nodes.human_input.human_input_node import HumanInputNode from graphon.nodes.start.entities import StartNodeData from graphon.nodes.start.start_node import StartNode from graphon.runtime import GraphRuntimeState, VariablePool - -from core.repositories.human_input_repository import ( - FormCreateParams, - HumanInputFormEntity, - HumanInputFormRepository, -) -from core.workflow.node_runtime import DifyHumanInputNodeRuntime -from core.workflow.system_variables import build_system_variables from libs.datetime_utils import naive_utc_now from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py b/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py index b11f957677..7d23b63049 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py @@ -19,6 +19,11 @@ from functools import lru_cache from pathlib import Path from typing import Any +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom +from core.tools.utils.yaml_utils import _load_yaml_file +from core.workflow.node_factory import DifyNodeFactory, get_default_root_node_id +from core.workflow.system_variables import build_bootstrap_variables, build_system_variables +from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool from graphon.entities import GraphInitParams from graphon.graph import Graph from graphon.graph_engine import GraphEngine, GraphEngineConfig @@ -39,12 +44,6 @@ from graphon.variables import ( StringVariable, ) -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom -from core.tools.utils.yaml_utils import _load_yaml_file -from core.workflow.node_factory import DifyNodeFactory, get_default_root_node_id -from core.workflow.system_variables import build_bootstrap_variables, build_system_variables -from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool - from .test_mock_config import MockConfig from .test_mock_factory import MockNodeFactory diff --git a/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py b/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py index cbc920705c..1f4509af9a 100644 --- a/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py +++ b/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py @@ -1,9 +1,8 @@ from unittest.mock import patch -from graphon.enums import BuiltinNodeTypes - from core.tools.utils.message_transformer import ToolFileMessageTransformer from core.workflow.nodes.agent.message_transformer import AgentMessageTransformer +from graphon.enums import BuiltinNodeTypes def test_transform_passes_conversation_id_to_tool_file_message_transformer() -> None: diff --git a/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py b/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py index 59dd763b59..c86de7f6e6 100644 --- a/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py +++ b/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py @@ -1,9 +1,8 @@ from types import SimpleNamespace from unittest.mock import Mock, patch -from graphon.model_runtime.entities.model_entities import ModelType - from core.workflow.nodes.agent.runtime_support import AgentRuntimeSupport +from graphon.model_runtime.entities.model_entities import ModelType def test_fetch_model_reuses_single_model_assembly(): diff --git a/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py b/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py index 7195471eb6..9c0ad25b58 100644 --- a/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py +++ b/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py @@ -2,15 +2,14 @@ import time import uuid from unittest.mock import MagicMock -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.graph import Graph -from graphon.nodes.answer.answer_node import AnswerNode -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.workflow.node_factory import DifyNodeFactory from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.graph import Graph +from graphon.nodes.answer.answer_node import AnswerNode +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py b/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py index 343bcd3919..ec4cef1955 100644 --- a/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py @@ -1,10 +1,10 @@ import pytest + +from core.workflow.node_factory import get_node_type_classes_mapping from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.base.node import Node -from core.workflow.node_factory import get_node_type_classes_mapping - # Ensures that all production node classes are imported and registered. _ = get_node_type_classes_mapping() diff --git a/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py b/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py index b9371a34f4..ef0df55995 100644 --- a/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py +++ b/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py @@ -1,6 +1,7 @@ import types from collections.abc import Mapping +from core.workflow.node_factory import get_node_type_classes_mapping from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.base.node import Node @@ -13,8 +14,6 @@ from graphon.nodes.variable_assigner.v2.node import ( VariableAssignerNode as VariableAssignerV2, ) -from core.workflow.node_factory import get_node_type_classes_mapping - def test_variable_assigner_latest_prefers_highest_numeric_version(): # Act diff --git a/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py b/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py index d155124c50..ce0c9b79c6 100644 --- a/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py +++ b/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py @@ -1,3 +1,4 @@ +from configs import dify_config from graphon.nodes.code.code_node import CodeNode from graphon.nodes.code.entities import CodeLanguage, CodeNodeData from graphon.nodes.code.exc import ( @@ -8,8 +9,6 @@ from graphon.nodes.code.exc import ( from graphon.nodes.code.limits import CodeNodeLimits from graphon.variables.types import SegmentType -from configs import dify_config - CodeNode._limits = CodeNodeLimits( max_string_length=dify_config.CODE_MAX_STRING_LENGTH, max_number=dify_config.CODE_MAX_NUMBER, diff --git a/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py b/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py index fb03ae9998..9cceadde49 100644 --- a/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py @@ -1,8 +1,7 @@ -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent - from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY from core.workflow.nodes.datasource.datasource_node import DatasourceNode +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent class _VarSeg: diff --git a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py index a5026b40cf..be7cc073db 100644 --- a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py +++ b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py @@ -1,4 +1,8 @@ import pytest + +from configs import dify_config +from core.helper.ssrf_proxy import ssrf_proxy +from core.workflow.system_variables import default_system_variables from graphon.file.file_manager import file_manager from graphon.nodes.http_request import ( BodyData, @@ -12,10 +16,6 @@ from graphon.nodes.http_request.exc import AuthorizationConfigError from graphon.nodes.http_request.executor import Executor from graphon.runtime import VariablePool -from configs import dify_config -from core.helper.ssrf_proxy import ssrf_proxy -from core.workflow.system_variables import default_system_variables - HTTP_REQUEST_CONFIG = HttpRequestNodeConfig( max_connect_timeout=dify_config.HTTP_REQUEST_MAX_CONNECT_TIMEOUT, max_read_timeout=dify_config.HTTP_REQUEST_MAX_READ_TIMEOUT, diff --git a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py index 4705b3f76e..a3cadc0681 100644 --- a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py @@ -3,17 +3,17 @@ from typing import Any import httpx import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file.file_manager import file_manager -from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, HttpRequestNode, HttpRequestNodeConfig -from graphon.nodes.http_request.entities import HttpRequestNodeTimeout, Response -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.helper.ssrf_proxy import ssrf_proxy from core.tools.tool_file_manager import ToolFileManager from core.workflow.node_runtime import DifyFileReferenceFactory from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file.file_manager import file_manager +from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, HttpRequestNode, HttpRequestNodeConfig +from graphon.nodes.http_request.entities import HttpRequestNodeTimeout, Response +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params HTTP_REQUEST_CONFIG = HttpRequestNodeConfig( diff --git a/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py b/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py index d16e1233ac..1d6a4da7c4 100644 --- a/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py +++ b/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py @@ -1,6 +1,5 @@ -from graphon.runtime import VariablePool - from core.workflow.human_input_compat import EmailDeliveryConfig, EmailRecipients +from graphon.runtime import VariablePool def test_render_body_template_replaces_variable_values(): diff --git a/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py b/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py index a2cdbbf132..c0e21d0bf7 100644 --- a/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py +++ b/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py @@ -10,24 +10,6 @@ from typing import Any from unittest.mock import MagicMock import pytest -from graphon.entities import GraphInitParams -from graphon.node_events import PauseRequestedEvent -from graphon.node_events.node import StreamCompletedEvent -from graphon.nodes.human_input.entities import ( - FormInput, - FormInputDefault, - HumanInputNodeData, - UserAction, -) -from graphon.nodes.human_input.enums import ( - ButtonStyle, - FormInputType, - HumanInputFormStatus, - PlaceholderType, - TimeoutUnit, -) -from graphon.nodes.human_input.human_input_node import HumanInputNode -from graphon.runtime import GraphRuntimeState, VariablePool from pydantic import ValidationError from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY @@ -50,6 +32,24 @@ from core.workflow.human_input_compat import ( ) from core.workflow.node_runtime import DifyHumanInputNodeRuntime from core.workflow.system_variables import build_system_variables +from graphon.entities import GraphInitParams +from graphon.node_events import PauseRequestedEvent +from graphon.node_events.node import StreamCompletedEvent +from graphon.nodes.human_input.entities import ( + FormInput, + FormInputDefault, + HumanInputNodeData, + UserAction, +) +from graphon.nodes.human_input.enums import ( + ButtonStyle, + FormInputType, + HumanInputFormStatus, + PlaceholderType, + TimeoutUnit, +) +from graphon.nodes.human_input.human_input_node import HumanInputNode +from graphon.runtime import GraphRuntimeState, VariablePool from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py b/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py index 52802c7ce1..bc98028d5b 100644 --- a/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py +++ b/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py @@ -1,6 +1,9 @@ import datetime from types import SimpleNamespace +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom +from core.workflow.node_runtime import DifyHumanInputNodeRuntime +from core.workflow.system_variables import default_system_variables from graphon.entities import GraphInitParams from graphon.enums import BuiltinNodeTypes from graphon.graph_events import ( @@ -11,10 +14,6 @@ from graphon.graph_events import ( from graphon.nodes.human_input.enums import HumanInputFormStatus from graphon.nodes.human_input.human_input_node import HumanInputNode from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom -from core.workflow.node_runtime import DifyHumanInputNodeRuntime -from core.workflow.system_variables import default_system_variables from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py b/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py index bbfe350f7e..82cc734274 100644 --- a/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py +++ b/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py @@ -2,6 +2,8 @@ from collections.abc import Mapping from typing import Any import pytest + +from core.workflow.system_variables import default_system_variables from graphon.entities import GraphInitParams from graphon.nodes.iteration.exc import IterationGraphNotFoundError from graphon.nodes.iteration.iteration_node import IterationNode @@ -11,8 +13,6 @@ from graphon.runtime import ( GraphRuntimeState, VariablePool, ) - -from core.workflow.system_variables import default_system_variables from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py b/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py index f8802138b5..a6fca1bfb4 100644 --- a/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py @@ -3,9 +3,6 @@ import uuid from unittest.mock import Mock import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variables.segments import StringSegment from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.rag.index_processor.constant.index_type import IndexTechniqueType @@ -19,6 +16,9 @@ from core.workflow.nodes.knowledge_index.protocols import ( SummaryIndexServiceProtocol, ) from core.workflow.system_variables import SystemVariableKey, build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variables.segments import StringSegment from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py b/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py index ab64be59ad..45e8ae7d20 100644 --- a/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py @@ -3,10 +3,6 @@ import uuid from unittest.mock import Mock import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variables import StringSegment from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.workflow.nodes.knowledge_retrieval.entities import ( @@ -21,6 +17,10 @@ from core.workflow.nodes.knowledge_retrieval.exc import RateLimitExceededError from core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node import KnowledgeRetrievalNode from core.workflow.nodes.knowledge_retrieval.retrieval import RAGRetrievalProtocol, Source from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variables import StringSegment from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py b/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py index fdf1706765..eca34f05be 100644 --- a/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py +++ b/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py @@ -1,14 +1,14 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY from graphon.entities import GraphInitParams from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.nodes.list_operator.node import ListOperatorNode from graphon.runtime import GraphRuntimeState from graphon.variables import ArrayNumberSegment, ArrayStringSegment -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY - class TestListOperatorNode: """Comprehensive tests for ListOperatorNode.""" diff --git a/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py b/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py index c784f805c0..4186bbdc93 100644 --- a/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py +++ b/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py @@ -1,6 +1,8 @@ from unittest import mock import pytest + +from core.model_manager import ModelInstance from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities import ( ImagePromptMessageContent, @@ -33,8 +35,6 @@ from graphon.nodes.llm.exc import ( from graphon.runtime import VariablePool from graphon.variables import ArrayAnySegment, ArrayFileSegment, NoneSegment -from core.model_manager import ModelInstance - def _build_model_schema( *, diff --git a/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py b/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py index 7841bf05ad..b1f81b6c48 100644 --- a/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py @@ -5,6 +5,19 @@ from collections.abc import Sequence from unittest import mock import pytest + +from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, ModelConfigWithCredentialsEntity, UserFrom +from core.app.llm.model_access import ( + DifyCredentialsProvider, + DifyModelFactory, + build_dify_model_access, + fetch_model_config, +) +from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle +from core.entities.provider_entities import CustomConfiguration, SystemConfiguration +from core.plugin.impl.model_runtime_factory import create_plugin_model_runtime +from core.prompt.entities.advanced_prompt_entities import MemoryConfig +from core.workflow.system_variables import default_system_variables from graphon.entities import GraphInitParams from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities.common_entities import I18nObject @@ -67,19 +80,6 @@ from graphon.nodes.llm.runtime_protocols import PromptMessageSerializerProtocol from graphon.runtime import GraphRuntimeState, VariablePool from graphon.template_rendering import TemplateRenderError from graphon.variables import ArrayAnySegment, ArrayFileSegment, NoneSegment - -from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, ModelConfigWithCredentialsEntity, UserFrom -from core.app.llm.model_access import ( - DifyCredentialsProvider, - DifyModelFactory, - build_dify_model_access, - fetch_model_config, -) -from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle -from core.entities.provider_entities import CustomConfiguration, SystemConfiguration -from core.plugin.impl.model_runtime_factory import create_plugin_model_runtime -from core.prompt.entities.advanced_prompt_entities import MemoryConfig -from core.workflow.system_variables import default_system_variables from models.provider import ProviderType from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py b/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py index 1c362a0a03..8f8ec49f14 100644 --- a/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py @@ -6,6 +6,8 @@ from dataclasses import dataclass from typing import Any import pytest + +from factories.variable_factory import build_segment_with_type from graphon.model_runtime.entities import LLMMode from graphon.nodes.llm import ModelConfig, VisionConfig from graphon.nodes.parameter_extractor.entities import ParameterConfig, ParameterExtractorNodeData @@ -18,8 +20,6 @@ from graphon.nodes.parameter_extractor.exc import ( from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode from graphon.variables.types import SegmentType -from factories.variable_factory import build_segment_with_type - @dataclass class ValidTestCase: diff --git a/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py b/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py index d86e0efe02..bc44ececd8 100644 --- a/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py +++ b/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py @@ -1,6 +1,8 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from graphon.enums import BuiltinNodeTypes, ErrorStrategy, WorkflowNodeExecutionStatus from graphon.graph import Graph from graphon.nodes.base.entities import VariableSelector @@ -8,8 +10,6 @@ from graphon.nodes.template_transform.entities import TemplateTransformNodeData from graphon.nodes.template_transform.template_transform_node import TemplateTransformNode from graphon.runtime import GraphRuntimeState from graphon.template_rendering import TemplateRenderError - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py b/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py index bd22a8e318..636237e56e 100644 --- a/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py @@ -1,14 +1,14 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from graphon.nodes.base.entities import VariableSelector from graphon.nodes.template_transform.template_transform_node import ( DEFAULT_TEMPLATE_TRANSFORM_MAX_OUTPUT_LENGTH, TemplateTransformNode, ) from graphon.runtime import GraphRuntimeState - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from tests.workflow_test_utils import build_test_graph_init_params from .template_transform_node_spec import TestTemplateTransformNode # noqa: F401 diff --git a/api/tests/unit_tests/core/workflow/nodes/test_base_node.py b/api/tests/unit_tests/core/workflow/nodes/test_base_node.py index e11ebf6eb8..0522dd9d14 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_base_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_base_node.py @@ -1,16 +1,16 @@ from collections.abc import Mapping import pytest + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.workflow.node_runtime import resolve_dify_run_context +from core.workflow.system_variables import build_system_variables from graphon.entities import GraphInitParams from graphon.entities.base_node_data import BaseNodeData from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter from graphon.enums import BuiltinNodeTypes from graphon.nodes.base.node import Node from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.workflow.node_runtime import resolve_dify_run_context -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py b/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py index 555ff0c945..87ec2d5bce 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py @@ -4,6 +4,8 @@ from unittest.mock import Mock, patch import pandas as pd import pytest from docx.oxml.text.paragraph import CT_P + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from graphon.entities import GraphInitParams from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod @@ -19,8 +21,6 @@ from graphon.nodes.document_extractor.node import ( from graphon.variables import ArrayFileSegment from graphon.variables.segments import ArrayStringSegment from graphon.variables.variables import StringVariable - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/test_if_else.py b/api/tests/unit_tests/core/workflow/nodes/test_if_else.py index 1b14f0ab13..782750e02e 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_if_else.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_if_else.py @@ -3,6 +3,11 @@ import uuid from unittest.mock import MagicMock, Mock import pytest + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables +from extensions.ext_database import db from graphon.enums import WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod, FileType from graphon.graph import Graph @@ -11,11 +16,6 @@ from graphon.nodes.if_else.if_else_node import IfElseNode from graphon.runtime import GraphRuntimeState, VariablePool from graphon.utils.condition.entities import Condition, SubCondition, SubVariableCondition from graphon.variables import ArrayFileSegment - -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables -from extensions.ext_database import db from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py b/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py index d28c3e01e5..b217e4e8e7 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py @@ -1,6 +1,8 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom from graphon.enums import WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod, FileType from graphon.nodes.list_operator.entities import ( @@ -16,8 +18,6 @@ from graphon.nodes.list_operator.exc import InvalidKeyError from graphon.nodes.list_operator.node import ListOperatorNode, _get_file_extract_string_func from graphon.variables import ArrayFileSegment -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom - @pytest.fixture def list_operator_node(): diff --git a/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py b/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py index 833c303052..543f9878de 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py @@ -2,16 +2,16 @@ import json import time import pytest +from pydantic import ValidationError as PydanticValidationError + +from core.workflow.system_variables import build_system_variables +from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, ENVIRONMENT_VARIABLE_NODE_ID from graphon.nodes.start.entities import StartNodeData from graphon.nodes.start.start_node import StartNode from graphon.runtime import GraphRuntimeState from graphon.variables import build_segment, segment_to_variable from graphon.variables.input_entities import VariableEntity, VariableEntityType from graphon.variables.variables import Variable -from pydantic import ValidationError as PydanticValidationError - -from core.workflow.system_variables import build_system_variables -from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, ENVIRONMENT_VARIABLE_NODE_ID from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py index 1587014802..c806181340 100644 --- a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py @@ -8,14 +8,14 @@ from typing import TYPE_CHECKING, Any from unittest.mock import MagicMock import pytest + +from core.workflow.system_variables import build_system_variables from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.node_events import StreamChunkEvent, StreamCompletedEvent from graphon.nodes.tool_runtime_entities import ToolRuntimeHandle, ToolRuntimeMessage from graphon.runtime import GraphRuntimeState, VariablePool from graphon.variables.segments import ArrayFileSegment - -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params if TYPE_CHECKING: # pragma: no cover - imported for type checking only diff --git a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py index c4dfc5a179..438af211f3 100644 --- a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py +++ b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py @@ -6,11 +6,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.nodes.tool.entities import ToolNodeData, ToolProviderType -from graphon.nodes.tool.exc import ToolRuntimeInvocationError -from graphon.nodes.tool_runtime_entities import ToolRuntimeHandle, ToolRuntimeMessage -from graphon.runtime import VariablePool from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler from core.plugin.impl.exc import PluginDaemonClientSideError, PluginInvokeError @@ -22,6 +17,11 @@ from core.tools.tool_manager import ToolManager from core.tools.utils.message_transformer import ToolFileMessageTransformer from core.workflow.node_runtime import DifyToolNodeRuntime from core.workflow.system_variables import build_system_variables +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.nodes.tool.entities import ToolNodeData, ToolProviderType +from graphon.nodes.tool.exc import ToolRuntimeInvocationError +from graphon.nodes.tool_runtime_entities import ToolRuntimeHandle, ToolRuntimeMessage +from graphon.runtime import VariablePool from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py b/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py index 952e798430..c8ddc53284 100644 --- a/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py @@ -1,13 +1,12 @@ from collections.abc import Mapping -from graphon.entities import GraphInitParams -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState - from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE from core.workflow.nodes.trigger_plugin.trigger_event_node import TriggerEventNode from core.workflow.system_variables import build_system_variables +from graphon.entities import GraphInitParams +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py b/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py index f1132af02b..617554ee17 100644 --- a/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py +++ b/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py @@ -1,5 +1,4 @@ import pytest -from graphon.entities.exc import BaseNodeError from core.workflow.nodes.trigger_webhook.exc import ( WebhookConfigError, @@ -7,6 +6,7 @@ from core.workflow.nodes.trigger_webhook.exc import ( WebhookNotFoundError, WebhookTimeoutError, ) +from graphon.entities.exc import BaseNodeError def test_webhook_node_error_inheritance(): diff --git a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py index cccd3fb676..1bbc12b23f 100644 --- a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py +++ b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py @@ -6,12 +6,9 @@ to FileVariable objects, fixing the "Invalid variable type: ObjectVariable" erro when passing files to downstream LLM nodes. """ +from typing import Any from unittest.mock import Mock, patch -from graphon.entities import GraphInitParams -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom from core.workflow.nodes.trigger_webhook.entities import ( ContentType, @@ -21,6 +18,9 @@ from core.workflow.nodes.trigger_webhook.entities import ( ) from core.workflow.nodes.trigger_webhook.node import TriggerWebhookNode from core.workflow.system_variables import default_system_variables +from graphon.entities import GraphInitParams +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_variable_pool @@ -97,7 +97,7 @@ def create_test_file_dict( } -def build_webhook_variable_pool(inputs: dict) -> VariablePool: +def build_webhook_variable_pool(inputs: dict[str, Any]) -> VariablePool: return build_test_variable_pool( variables=default_system_variables(), node_id="webhook-node-1", @@ -105,7 +105,7 @@ def build_webhook_variable_pool(inputs: dict) -> VariablePool: ) -def expected_factory_mapping(file_dict: dict) -> dict: +def expected_factory_mapping(file_dict: dict[str, Any]) -> dict[str, Any]: return {**file_dict, "upload_file_id": file_dict["related_id"]} diff --git a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py index 34c66a4f9f..427afa96ec 100644 --- a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py @@ -1,11 +1,7 @@ +from typing import Any from unittest.mock import patch import pytest -from graphon.entities import GraphInitParams -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file import File, FileTransferMethod, FileType -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variables import FileVariable, StringVariable from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE @@ -18,6 +14,11 @@ from core.workflow.nodes.trigger_webhook.entities import ( ) from core.workflow.nodes.trigger_webhook.node import TriggerWebhookNode from core.workflow.system_variables import default_system_variables +from graphon.entities import GraphInitParams +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file import File, FileTransferMethod, FileType +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variables import FileVariable, StringVariable from tests.workflow_test_utils import build_test_variable_pool @@ -62,7 +63,7 @@ def create_webhook_node(webhook_data: WebhookData, variable_pool: VariablePool) return node -def build_webhook_variable_pool(inputs: dict) -> VariablePool: +def build_webhook_variable_pool(inputs: dict[str, Any]) -> VariablePool: return build_test_variable_pool( variables=default_system_variables(), node_id="1", diff --git a/api/tests/unit_tests/core/workflow/test_human_input_compat.py b/api/tests/unit_tests/core/workflow/test_human_input_compat.py index cd41c43e4a..0623800b30 100644 --- a/api/tests/unit_tests/core/workflow/test_human_input_compat.py +++ b/api/tests/unit_tests/core/workflow/test_human_input_compat.py @@ -1,6 +1,5 @@ from types import SimpleNamespace -from graphon.enums import BuiltinNodeTypes from pydantic import BaseModel from core.workflow.human_input_compat import ( @@ -16,6 +15,7 @@ from core.workflow.human_input_compat import ( normalize_node_data_for_graph, parse_human_input_delivery_methods, ) +from graphon.enums import BuiltinNodeTypes def test_email_delivery_config_helpers_render_and_sanitize_text() -> None: diff --git a/api/tests/unit_tests/core/workflow/test_node_factory.py b/api/tests/unit_tests/core/workflow/test_node_factory.py index bc0b339fec..424c50eb26 100644 --- a/api/tests/unit_tests/core/workflow/test_node_factory.py +++ b/api/tests/unit_tests/core/workflow/test_node_factory.py @@ -2,15 +2,15 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch, sentinel import pytest -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.nodes.code.entities import CodeLanguage -from graphon.variables.segments import StringSegment from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext, InvokeFrom, UserFrom from core.workflow import node_factory from core.workflow import template_rendering as workflow_template_rendering from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.nodes.code.entities import CodeLanguage +from graphon.variables.segments import StringSegment def _assert_typed_node_config(config, *, node_id: str, node_type: NodeType, version: str = "1") -> None: @@ -110,6 +110,34 @@ class TestFetchMemory: ) +class TestDifyGraphInitContext: + def test_to_graph_init_params_preserves_explicit_values(self): + run_context = { + DIFY_RUN_CONTEXT_KEY: DifyRunContext( + tenant_id="tenant-id", + app_id="app-id", + user_id="user-id", + user_from=UserFrom.ACCOUNT, + invoke_from=InvokeFrom.DEBUGGER, + ), + "extra": "value", + } + graph_config = {"nodes": [], "edges": []} + graph_init_context = node_factory.DifyGraphInitContext( + workflow_id="workflow-id", + graph_config=graph_config, + run_context=run_context, + call_depth=2, + ) + + result = graph_init_context.to_graph_init_params() + + assert result.workflow_id == "workflow-id" + assert result.graph_config == graph_config + assert result.run_context == run_context + assert result.call_depth == 2 + + class TestDefaultWorkflowCodeExecutor: def test_execute_delegates_to_code_executor(self, monkeypatch): executor = node_factory.DefaultWorkflowCodeExecutor() @@ -172,6 +200,23 @@ class TestCodeExecutorJinja2TemplateRenderer: class TestDifyNodeFactoryInit: + def test_from_graph_init_context_translates_before_init(self): + graph_init_context = MagicMock() + graph_init_context.to_graph_init_params.return_value = sentinel.graph_init_params + + with patch.object(node_factory.DifyNodeFactory, "__init__", return_value=None) as init: + factory = node_factory.DifyNodeFactory.from_graph_init_context( + graph_init_context=graph_init_context, + graph_runtime_state=sentinel.graph_runtime_state, + ) + + assert isinstance(factory, node_factory.DifyNodeFactory) + graph_init_context.to_graph_init_params.assert_called_once_with() + init.assert_called_once_with( + graph_init_params=sentinel.graph_init_params, + graph_runtime_state=sentinel.graph_runtime_state, + ) + def test_init_builds_default_dependencies(self): graph_init_params = SimpleNamespace(run_context={"context": "value"}) graph_runtime_state = sentinel.graph_runtime_state diff --git a/api/tests/unit_tests/core/workflow/test_node_runtime.py b/api/tests/unit_tests/core/workflow/test_node_runtime.py index 4f9c1dad59..71a2afb28a 100644 --- a/api/tests/unit_tests/core/workflow/test_node_runtime.py +++ b/api/tests/unit_tests/core/workflow/test_node_runtime.py @@ -2,10 +2,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, sentinel import pytest -from graphon.file import FileTransferMethod, FileType -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.nodes.human_input.entities import HumanInputNodeData from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext, InvokeFrom, UserFrom from core.llm_generator.output_parser.errors import OutputParserError @@ -30,6 +26,10 @@ from core.workflow.node_runtime import ( build_dify_llm_file_saver, resolve_dify_run_context, ) +from graphon.file import FileTransferMethod, FileType +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.nodes.human_input.entities import HumanInputNodeData from tests.workflow_test_utils import build_test_run_context diff --git a/api/tests/unit_tests/core/workflow/test_system_variable.py b/api/tests/unit_tests/core/workflow/test_system_variable.py index 05ea3dc311..bdeab1eda8 100644 --- a/api/tests/unit_tests/core/workflow/test_system_variable.py +++ b/api/tests/unit_tests/core/workflow/test_system_variable.py @@ -1,14 +1,13 @@ from types import SimpleNamespace -from graphon.file import File, FileTransferMethod, FileType -from graphon.nodes import BuiltinNodeTypes - from core.workflow.system_variables import ( build_system_variables, default_system_variables, get_node_creation_preload_selectors, system_variables_to_mapping, ) +from graphon.file import File, FileTransferMethod, FileType +from graphon.nodes import BuiltinNodeTypes def test_build_system_variables_normalizes_workflow_execution_id(): diff --git a/api/tests/unit_tests/core/workflow/test_variable_pool.py b/api/tests/unit_tests/core/workflow/test_variable_pool.py index e7b2b2914a..dddd6eb00c 100644 --- a/api/tests/unit_tests/core/workflow/test_variable_pool.py +++ b/api/tests/unit_tests/core/workflow/test_variable_pool.py @@ -2,6 +2,15 @@ import uuid from collections import defaultdict import pytest + +from core.workflow.system_variables import build_system_variables, system_variables_to_mapping +from core.workflow.variable_pool_initializer import add_variables_to_pool +from core.workflow.variable_prefixes import ( + CONVERSATION_VARIABLE_NODE_ID, + ENVIRONMENT_VARIABLE_NODE_ID, + SYSTEM_VARIABLE_NODE_ID, +) +from factories.variable_factory import build_segment, segment_to_variable from graphon.file import File, FileTransferMethod, FileType from graphon.runtime import VariablePool from graphon.variables import FileSegment, StringSegment @@ -27,15 +36,6 @@ from graphon.variables.variables import ( Variable, ) -from core.workflow.system_variables import build_system_variables, system_variables_to_mapping -from core.workflow.variable_pool_initializer import add_variables_to_pool -from core.workflow.variable_prefixes import ( - CONVERSATION_VARIABLE_NODE_ID, - ENVIRONMENT_VARIABLE_NODE_ID, - SYSTEM_VARIABLE_NODE_ID, -) -from factories.variable_factory import build_segment, segment_to_variable - @pytest.fixture def pool(): diff --git a/api/tests/unit_tests/core/workflow/test_workflow_entry.py b/api/tests/unit_tests/core/workflow/test_workflow_entry.py index d8361d06c4..041c5cc612 100644 --- a/api/tests/unit_tests/core/workflow/test_workflow_entry.py +++ b/api/tests/unit_tests/core/workflow/test_workflow_entry.py @@ -1,12 +1,6 @@ from types import SimpleNamespace import pytest -from graphon.entities.graph_config import NodeConfigDictAdapter -from graphon.file import File, FileTransferMethod, FileType -from graphon.nodes.code.code_node import CodeNode -from graphon.nodes.code.limits import CodeNodeLimits -from graphon.runtime import VariablePool -from graphon.variables.variables import StringVariable from configs import dify_config from core.helper.code_executor.code_executor import CodeLanguage @@ -16,6 +10,12 @@ from core.workflow.variable_prefixes import ( ENVIRONMENT_VARIABLE_NODE_ID, ) from core.workflow.workflow_entry import WorkflowEntry +from graphon.entities.graph_config import NodeConfigDictAdapter +from graphon.file import File, FileTransferMethod, FileType +from graphon.nodes.code.code_node import CodeNode +from graphon.nodes.code.limits import CodeNodeLimits +from graphon.runtime import VariablePool +from graphon.variables.variables import StringVariable @pytest.fixture(autouse=True) diff --git a/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py b/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py index 879c0bb721..55800ffc03 100644 --- a/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py +++ b/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py @@ -4,6 +4,12 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch, sentinel import pytest + +from core.app.apps.exc import GenerateTaskStoppedError +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.model_manager import ModelInstance +from core.workflow import workflow_entry +from core.workflow.system_variables import default_system_variables from graphon.entities.base_node_data import BaseNodeData from graphon.entities.graph_config import NodeConfigDictAdapter from graphon.enums import NodeType, WorkflowNodeExecutionStatus @@ -17,12 +23,6 @@ from graphon.nodes import BuiltinNodeTypes from graphon.nodes.base.node import Node from graphon.runtime import ChildGraphNotFoundError, VariablePool from graphon.variables.variables import StringVariable - -from core.app.apps.exc import GenerateTaskStoppedError -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.model_manager import ModelInstance -from core.workflow import workflow_entry -from core.workflow.system_variables import default_system_variables from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool @@ -349,7 +349,7 @@ class TestWorkflowEntrySingleStepRun: ] with ( - patch.object(workflow_entry, "GraphInitParams", return_value=sentinel.graph_init_params), + patch.object(workflow_entry, "DifyGraphInitContext", return_value=sentinel.graph_init_context), patch.object( workflow_entry, "GraphRuntimeState", @@ -358,7 +358,7 @@ class TestWorkflowEntrySingleStepRun: patch.object(workflow_entry, "build_dify_run_context", return_value={"_dify": "context"}), patch.object(workflow_entry.time, "perf_counter", return_value=123.0), patch.object(workflow_entry, "resolve_workflow_node_class", return_value=FakeLLMNode), - patch.object(workflow_entry, "DifyNodeFactory") as dify_node_factory, + patch.object(workflow_entry.DifyNodeFactory, "from_graph_init_context") as dify_node_factory, patch.object(workflow_entry, "load_into_variable_pool"), patch.object(workflow_entry.WorkflowEntry, "mapping_user_inputs_to_variable_pool"), patch.object( @@ -412,12 +412,12 @@ class TestWorkflowEntrySingleStepRun: raise NotImplementedError with ( - patch.object(workflow_entry, "GraphInitParams", return_value=sentinel.graph_init_params), + patch.object(workflow_entry, "DifyGraphInitContext", return_value=sentinel.graph_init_context), patch.object(workflow_entry, "GraphRuntimeState", return_value=sentinel.graph_runtime_state), patch.object(workflow_entry, "build_dify_run_context", return_value={"_dify": "context"}), patch.object(workflow_entry.time, "perf_counter", return_value=123.0), patch.object(workflow_entry, "resolve_workflow_node_class", return_value=FakeNode), - patch.object(workflow_entry, "DifyNodeFactory") as dify_node_factory, + patch.object(workflow_entry.DifyNodeFactory, "from_graph_init_context") as dify_node_factory, patch.object(workflow_entry, "add_node_inputs_to_pool") as add_node_inputs_to_pool, patch.object(workflow_entry, "load_into_variable_pool") as load_into_variable_pool, patch.object( @@ -481,12 +481,12 @@ class TestWorkflowEntrySingleStepRun: return {"question": ["node", "question"]} with ( - patch.object(workflow_entry, "GraphInitParams", return_value=sentinel.graph_init_params), + patch.object(workflow_entry, "DifyGraphInitContext", return_value=sentinel.graph_init_context), patch.object(workflow_entry, "GraphRuntimeState", return_value=sentinel.graph_runtime_state), patch.object(workflow_entry, "build_dify_run_context", return_value={"_dify": "context"}), patch.object(workflow_entry.time, "perf_counter", return_value=123.0), patch.object(workflow_entry, "resolve_workflow_node_class", return_value=FakeDatasourceNode), - patch.object(workflow_entry, "DifyNodeFactory") as dify_node_factory, + patch.object(workflow_entry.DifyNodeFactory, "from_graph_init_context") as dify_node_factory, patch.object(workflow_entry, "add_node_inputs_to_pool") as add_node_inputs_to_pool, patch.object(workflow_entry, "load_into_variable_pool") as load_into_variable_pool, patch.object( @@ -541,12 +541,12 @@ class TestWorkflowEntrySingleStepRun: return "1" with ( - patch.object(workflow_entry, "GraphInitParams", return_value=sentinel.graph_init_params), + patch.object(workflow_entry, "DifyGraphInitContext", return_value=sentinel.graph_init_context), patch.object(workflow_entry, "GraphRuntimeState", return_value=sentinel.graph_runtime_state), patch.object(workflow_entry, "build_dify_run_context", return_value={"_dify": "context"}), patch.object(workflow_entry.time, "perf_counter", return_value=123.0), patch.object(workflow_entry, "resolve_workflow_node_class", return_value=FakeNode), - patch.object(workflow_entry, "DifyNodeFactory") as dify_node_factory, + patch.object(workflow_entry.DifyNodeFactory, "from_graph_init_context") as dify_node_factory, patch.object(workflow_entry, "add_node_inputs_to_pool"), patch.object(workflow_entry, "load_into_variable_pool"), patch.object(workflow_entry.WorkflowEntry, "mapping_user_inputs_to_variable_pool"), @@ -651,14 +651,18 @@ class TestWorkflowEntryHelpers: patch.object(workflow_entry, "VariablePool", return_value=sentinel.variable_pool) as variable_pool_cls, patch.object(workflow_entry, "add_variables_to_pool") as add_variables_to_pool, patch.object( - workflow_entry, "GraphInitParams", return_value=sentinel.graph_init_params - ) as graph_init_params, + workflow_entry, "DifyGraphInitContext", return_value=sentinel.graph_init_context + ) as graph_init_context_cls, patch.object(workflow_entry, "GraphRuntimeState", return_value=sentinel.graph_runtime_state), patch.object( workflow_entry, "build_dify_run_context", return_value={"_dify": "context"} ) as build_dify_run_context, patch.object(workflow_entry.time, "perf_counter", return_value=123.0), - patch.object(workflow_entry, "DifyNodeFactory", return_value=dify_node_factory) as dify_node_factory_cls, + patch.object( + workflow_entry.DifyNodeFactory, + "from_graph_init_context", + return_value=dify_node_factory, + ) as dify_node_factory_cls, patch.object( workflow_entry.WorkflowEntry, "mapping_user_inputs_to_variable_pool", @@ -688,7 +692,7 @@ class TestWorkflowEntryHelpers: user_from=UserFrom.ACCOUNT, invoke_from=InvokeFrom.DEBUGGER, ) - graph_init_params.assert_called_once_with( + graph_init_context_cls.assert_called_once_with( workflow_id="", graph_config=workflow_entry.WorkflowEntry._create_single_node_graph( "node-id", {"type": BuiltinNodeTypes.PARAMETER_EXTRACTOR, "title": "Node"} @@ -697,7 +701,7 @@ class TestWorkflowEntryHelpers: call_depth=0, ) dify_node_factory_cls.assert_called_once_with( - graph_init_params=sentinel.graph_init_params, + graph_init_context=sentinel.graph_init_context, graph_runtime_state=sentinel.graph_runtime_state, ) mapping_user_inputs_to_variable_pool.assert_called_once_with( @@ -734,11 +738,15 @@ class TestWorkflowEntryHelpers: patch.object(workflow_entry, "default_system_variables", return_value=sentinel.system_variables), patch.object(workflow_entry, "VariablePool", return_value=sentinel.variable_pool), patch.object(workflow_entry, "add_variables_to_pool"), - patch.object(workflow_entry, "GraphInitParams", return_value=sentinel.graph_init_params), + patch.object(workflow_entry, "DifyGraphInitContext", return_value=sentinel.graph_init_context), patch.object(workflow_entry, "GraphRuntimeState", return_value=sentinel.graph_runtime_state), patch.object(workflow_entry, "build_dify_run_context", return_value={"_dify": "context"}), patch.object(workflow_entry.time, "perf_counter", return_value=123.0), - patch.object(workflow_entry, "DifyNodeFactory", return_value=dify_node_factory), + patch.object( + workflow_entry.DifyNodeFactory, + "from_graph_init_context", + return_value=dify_node_factory, + ), patch.object( workflow_entry.WorkflowEntry, "mapping_user_inputs_to_variable_pool", diff --git a/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py b/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py index 4b2f98aeff..80dc8927fa 100644 --- a/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py +++ b/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py @@ -2,11 +2,10 @@ from unittest.mock import MagicMock, patch -from graphon.graph_engine.command_channels import RedisChannel -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.workflow.workflow_entry import WorkflowEntry +from graphon.graph_engine.command_channels import RedisChannel +from graphon.runtime import GraphRuntimeState, VariablePool class TestWorkflowEntryRedisChannel: diff --git a/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py b/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py index bb1f78b80c..1ce9581aa1 100644 --- a/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py +++ b/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py @@ -4,6 +4,7 @@ from __future__ import annotations import json from datetime import UTC, datetime +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -57,7 +58,7 @@ _T1 = datetime(2024, 1, 10, 12, 0, 5, tzinfo=UTC) def make_workflow_info(**overrides) -> WorkflowTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "workflow_id": "wf-001", "tenant_id": "tenant-abc", "workflow_run_id": "run-001", @@ -86,7 +87,7 @@ def make_workflow_info(**overrides) -> WorkflowTraceInfo: def make_node_info(**overrides) -> WorkflowNodeTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "workflow_id": "wf-001", "workflow_run_id": "run-001", "tenant_id": "tenant-abc", @@ -115,7 +116,7 @@ def make_node_info(**overrides) -> WorkflowNodeTraceInfo: def make_draft_node_info(**overrides) -> DraftNodeExecutionTrace: - defaults: dict = { + defaults: dict[str, Any] = { "workflow_id": "wf-001", "workflow_run_id": "run-draft-001", "tenant_id": "tenant-abc", @@ -136,7 +137,7 @@ def make_draft_node_info(**overrides) -> DraftNodeExecutionTrace: def make_message_info(**overrides) -> MessageTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "conversation_model": "gpt-4", "message_tokens": 40, @@ -161,7 +162,7 @@ def make_message_info(**overrides) -> MessageTraceInfo: def make_tool_info(**overrides) -> ToolTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "tool_name": "web_search", "tool_inputs": {"query": "test"}, @@ -176,7 +177,7 @@ def make_tool_info(**overrides) -> ToolTraceInfo: def make_moderation_info(**overrides) -> ModerationTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "flagged": False, "action": "pass", @@ -189,7 +190,7 @@ def make_moderation_info(**overrides) -> ModerationTraceInfo: def make_suggested_question_info(**overrides) -> SuggestedQuestionTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "total_tokens": 30, "suggested_question": ["Question A?", "Question B?"], @@ -206,7 +207,7 @@ def make_suggested_question_info(**overrides) -> SuggestedQuestionTraceInfo: def make_dataset_retrieval_info(**overrides) -> DatasetRetrievalTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "documents": [ { @@ -236,7 +237,7 @@ def make_dataset_retrieval_info(**overrides) -> DatasetRetrievalTraceInfo: def make_generate_name_info(**overrides) -> GenerateNameTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "tenant_id": "tenant-abc", "conversation_id": "conv-001", @@ -251,7 +252,7 @@ def make_generate_name_info(**overrides) -> GenerateNameTraceInfo: def make_prompt_generation_info(**overrides) -> PromptGenerationTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "tenant_id": "tenant-abc", "user_id": "user-001", "app_id": "app-001", diff --git a/api/tests/unit_tests/extensions/otel/decorators/handlers/test_generate_handler.py b/api/tests/unit_tests/extensions/otel/decorators/handlers/test_generate_handler.py index f7475f2239..12e91f190f 100644 --- a/api/tests/unit_tests/extensions/otel/decorators/handlers/test_generate_handler.py +++ b/api/tests/unit_tests/extensions/otel/decorators/handlers/test_generate_handler.py @@ -39,7 +39,7 @@ class TestAppGenerateHandler: "root_node_id": None, } - arguments = handler._extract_arguments(AppGenerateService.generate, (), kwargs) + arguments = handler._extract_arguments(AppGenerateService.generate, **kwargs) assert arguments is not None, "Failed to extract arguments from AppGenerateService.generate" assert "app_model" in arguments, "Handler uses app_model but parameter is missing" @@ -70,14 +70,11 @@ class TestAppGenerateHandler: handler.wrapper( tracer, dummy_func, - (), - { - "app_model": mock_app_model, - "user": mock_account_user, - "args": {"workflow_id": test_workflow_id}, - "invoke_from": InvokeFrom.DEBUGGER, - "streaming": False, - }, + app_model=mock_app_model, + user=mock_account_user, + args={"workflow_id": test_workflow_id}, + invoke_from=InvokeFrom.DEBUGGER, + streaming=False, ) spans = memory_span_exporter.get_finished_spans() diff --git a/api/tests/unit_tests/extensions/otel/decorators/handlers/test_workflow_app_runner_handler.py b/api/tests/unit_tests/extensions/otel/decorators/handlers/test_workflow_app_runner_handler.py index 500f80fc3c..842e7f55e2 100644 --- a/api/tests/unit_tests/extensions/otel/decorators/handlers/test_workflow_app_runner_handler.py +++ b/api/tests/unit_tests/extensions/otel/decorators/handlers/test_workflow_app_runner_handler.py @@ -63,7 +63,7 @@ class TestWorkflowAppRunnerHandler: def runner_run(self): return "result" - handler.wrapper(tracer, runner_run, (mock_workflow_runner,), {}) + handler.wrapper(tracer, runner_run, mock_workflow_runner) spans = memory_span_exporter.get_finished_spans() assert len(spans) == 1 diff --git a/api/tests/unit_tests/extensions/otel/decorators/test_handler.py b/api/tests/unit_tests/extensions/otel/decorators/test_handler.py index 44788bab9a..bf861e3ef7 100644 --- a/api/tests/unit_tests/extensions/otel/decorators/test_handler.py +++ b/api/tests/unit_tests/extensions/otel/decorators/test_handler.py @@ -28,7 +28,7 @@ class TestSpanHandlerExtractArguments: args = (1, 2, 3) kwargs = {} - result = handler._extract_arguments(func, args, kwargs) + result = handler._extract_arguments(func, *args, **kwargs) assert result is not None assert result["a"] == 1 @@ -44,7 +44,7 @@ class TestSpanHandlerExtractArguments: args = () kwargs = {"a": 1, "b": 2, "c": 3} - result = handler._extract_arguments(func, args, kwargs) + result = handler._extract_arguments(func, *args, **kwargs) assert result is not None assert result["a"] == 1 @@ -60,7 +60,7 @@ class TestSpanHandlerExtractArguments: args = (1,) kwargs = {"b": 2, "c": 3} - result = handler._extract_arguments(func, args, kwargs) + result = handler._extract_arguments(func, *args, **kwargs) assert result is not None assert result["a"] == 1 @@ -76,7 +76,7 @@ class TestSpanHandlerExtractArguments: args = (1,) kwargs = {} - result = handler._extract_arguments(func, args, kwargs) + result = handler._extract_arguments(func, *args, **kwargs) assert result is not None assert result["a"] == 1 @@ -94,7 +94,7 @@ class TestSpanHandlerExtractArguments: instance = MyClass() args = (1, 2) kwargs = {} - result = handler._extract_arguments(instance.method, args, kwargs) + result = handler._extract_arguments(instance.method, *args, **kwargs) assert result is not None assert result["a"] == 1 @@ -109,7 +109,7 @@ class TestSpanHandlerExtractArguments: args = (1,) kwargs = {} - result = handler._extract_arguments(func, args, kwargs) + result = handler._extract_arguments(func, *args, **kwargs) assert result is None @@ -122,11 +122,11 @@ class TestSpanHandlerExtractArguments: assert func not in handler._signature_cache - handler._extract_arguments(func, (1, 2), {}) + handler._extract_arguments(func, 1, 2) assert func in handler._signature_cache cached_sig = handler._signature_cache[func] - handler._extract_arguments(func, (3, 4), {}) + handler._extract_arguments(func, 3, 4) assert handler._signature_cache[func] is cached_sig @@ -142,7 +142,7 @@ class TestSpanHandlerWrapper: def test_func(): return "result" - result = handler.wrapper(tracer, test_func, (), {}) + result = handler.wrapper(tracer, test_func) assert result == "result" spans = memory_span_exporter.get_finished_spans() @@ -159,7 +159,7 @@ class TestSpanHandlerWrapper: def test_func(): return "result" - handler.wrapper(tracer, test_func, (), {}) + handler.wrapper(tracer, test_func) spans = memory_span_exporter.get_finished_spans() assert len(spans) == 1 @@ -174,7 +174,7 @@ class TestSpanHandlerWrapper: def test_func(): return "result" - handler.wrapper(tracer, test_func, (), {}) + handler.wrapper(tracer, test_func) spans = memory_span_exporter.get_finished_spans() assert len(spans) == 1 @@ -190,7 +190,7 @@ class TestSpanHandlerWrapper: raise ValueError("test error") with pytest.raises(ValueError, match="test error"): - handler.wrapper(tracer, test_func, (), {}) + handler.wrapper(tracer, test_func) spans = memory_span_exporter.get_finished_spans() assert len(spans) == 1 @@ -208,7 +208,7 @@ class TestSpanHandlerWrapper: raise ValueError("test error") with pytest.raises(ValueError): - handler.wrapper(tracer, test_func, (), {}) + handler.wrapper(tracer, test_func) spans = memory_span_exporter.get_finished_spans() assert len(spans) == 1 @@ -225,7 +225,7 @@ class TestSpanHandlerWrapper: raise ValueError("test error") with pytest.raises(ValueError, match="test error"): - handler.wrapper(tracer, test_func, (), {}) + handler.wrapper(tracer, test_func) @patch("extensions.otel.decorators.base.dify_config.ENABLE_OTEL", True) def test_wrapper_passes_arguments_correctly(self, tracer_provider_with_memory_exporter, memory_span_exporter): @@ -236,7 +236,7 @@ class TestSpanHandlerWrapper: def test_func(a, b, c=10): return a + b + c - result = handler.wrapper(tracer, test_func, (1, 2), {"c": 3}) + result = handler.wrapper(tracer, test_func, 1, 2, c=3) assert result == 6 @@ -249,7 +249,7 @@ class TestSpanHandlerWrapper: def my_function(x): return x * 2 - result = handler.wrapper(tracer, my_function, (5,), {}) + result = handler.wrapper(tracer, my_function, 5) assert result == 10 spans = memory_span_exporter.get_finished_spans() diff --git a/api/tests/unit_tests/extensions/test_celery_ssl.py b/api/tests/unit_tests/extensions/test_celery_ssl.py index 81687ce5f8..366e45d86d 100644 --- a/api/tests/unit_tests/extensions/test_celery_ssl.py +++ b/api/tests/unit_tests/extensions/test_celery_ssl.py @@ -7,6 +7,47 @@ from unittest.mock import MagicMock, patch class TestCelerySSLConfiguration: """Test suite for Celery SSL configuration.""" + def test_get_celery_broker_transport_options_includes_global_keyprefix_for_redis(self): + mock_config = MagicMock() + mock_config.CELERY_USE_SENTINEL = False + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + with patch("extensions.ext_celery.dify_config", mock_config): + from extensions.ext_celery import get_celery_broker_transport_options + + result = get_celery_broker_transport_options() + + assert result["global_keyprefix"] == "enterprise-a:" + + def test_get_celery_broker_transport_options_omits_global_keyprefix_when_prefix_empty(self): + mock_config = MagicMock() + mock_config.CELERY_USE_SENTINEL = False + mock_config.REDIS_KEY_PREFIX = " " + + with patch("extensions.ext_celery.dify_config", mock_config): + from extensions.ext_celery import get_celery_broker_transport_options + + result = get_celery_broker_transport_options() + + assert "global_keyprefix" not in result + + def test_get_celery_broker_transport_options_keeps_sentinel_and_adds_global_keyprefix(self): + mock_config = MagicMock() + mock_config.CELERY_USE_SENTINEL = True + mock_config.CELERY_SENTINEL_MASTER_NAME = "mymaster" + mock_config.CELERY_SENTINEL_SOCKET_TIMEOUT = 0.1 + mock_config.CELERY_SENTINEL_PASSWORD = "secret" + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + with patch("extensions.ext_celery.dify_config", mock_config): + from extensions.ext_celery import get_celery_broker_transport_options + + result = get_celery_broker_transport_options() + + assert result["master_name"] == "mymaster" + assert result["sentinel_kwargs"]["password"] == "secret" + assert result["global_keyprefix"] == "enterprise-a:" + def test_get_celery_ssl_options_when_ssl_disabled(self): """Test SSL options when BROKER_USE_SSL is False.""" from configs import DifyConfig @@ -151,3 +192,49 @@ class TestCelerySSLConfiguration: # Check that SSL is also applied to Redis backend assert "redis_backend_use_ssl" in celery_app.conf assert celery_app.conf["redis_backend_use_ssl"] is not None + + def test_celery_init_applies_global_keyprefix_to_broker_and_backend_transport(self): + mock_config = MagicMock() + mock_config.BROKER_USE_SSL = False + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + mock_config.HUMAN_INPUT_TIMEOUT_TASK_INTERVAL = 1 + mock_config.CELERY_BROKER_URL = "redis://localhost:6379/0" + mock_config.CELERY_BACKEND = "redis" + mock_config.CELERY_RESULT_BACKEND = "redis://localhost:6379/0" + mock_config.CELERY_USE_SENTINEL = False + mock_config.LOG_FORMAT = "%(message)s" + mock_config.LOG_TZ = "UTC" + mock_config.LOG_FILE = None + mock_config.CELERY_TASK_ANNOTATIONS = {} + + mock_config.CELERY_BEAT_SCHEDULER_TIME = 1 + mock_config.ENABLE_CLEAN_EMBEDDING_CACHE_TASK = False + mock_config.ENABLE_CLEAN_UNUSED_DATASETS_TASK = False + mock_config.ENABLE_CREATE_TIDB_SERVERLESS_TASK = False + mock_config.ENABLE_UPDATE_TIDB_SERVERLESS_STATUS_TASK = False + mock_config.ENABLE_CLEAN_MESSAGES = False + mock_config.ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK = False + mock_config.ENABLE_DATASETS_QUEUE_MONITOR = False + mock_config.ENABLE_HUMAN_INPUT_TIMEOUT_TASK = False + mock_config.ENABLE_CHECK_UPGRADABLE_PLUGIN_TASK = False + mock_config.MARKETPLACE_ENABLED = False + mock_config.WORKFLOW_LOG_CLEANUP_ENABLED = False + mock_config.ENABLE_WORKFLOW_RUN_CLEANUP_TASK = False + mock_config.ENABLE_WORKFLOW_SCHEDULE_POLLER_TASK = False + mock_config.WORKFLOW_SCHEDULE_POLLER_INTERVAL = 1 + mock_config.ENABLE_TRIGGER_PROVIDER_REFRESH_TASK = False + mock_config.TRIGGER_PROVIDER_REFRESH_INTERVAL = 15 + mock_config.ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK = False + mock_config.API_TOKEN_LAST_USED_UPDATE_INTERVAL = 30 + mock_config.ENTERPRISE_ENABLED = False + mock_config.ENTERPRISE_TELEMETRY_ENABLED = False + + with patch("extensions.ext_celery.dify_config", mock_config): + from dify_app import DifyApp + from extensions.ext_celery import init_app + + app = DifyApp(__name__) + celery_app = init_app(app) + + assert celery_app.conf["broker_transport_options"]["global_keyprefix"] == "enterprise-a:" + assert celery_app.conf["result_backend_transport_options"]["global_keyprefix"] == "enterprise-a:" diff --git a/api/tests/unit_tests/extensions/test_pubsub_channel.py b/api/tests/unit_tests/extensions/test_pubsub_channel.py index a5b41a7266..926c406ad4 100644 --- a/api/tests/unit_tests/extensions/test_pubsub_channel.py +++ b/api/tests/unit_tests/extensions/test_pubsub_channel.py @@ -6,6 +6,7 @@ from libs.broadcast_channel.redis.sharded_channel import ShardedRedisBroadcastCh def test_get_pubsub_broadcast_channel_defaults_to_pubsub(monkeypatch): monkeypatch.setattr(dify_config, "PUBSUB_REDIS_CHANNEL_TYPE", "pubsub") + monkeypatch.setattr(ext_redis, "_pubsub_redis_client", object()) channel = ext_redis.get_pubsub_broadcast_channel() @@ -14,6 +15,7 @@ def test_get_pubsub_broadcast_channel_defaults_to_pubsub(monkeypatch): def test_get_pubsub_broadcast_channel_sharded(monkeypatch): monkeypatch.setattr(dify_config, "PUBSUB_REDIS_CHANNEL_TYPE", "sharded") + monkeypatch.setattr(ext_redis, "_pubsub_redis_client", object()) channel = ext_redis.get_pubsub_broadcast_channel() diff --git a/api/tests/unit_tests/extensions/test_redis.py b/api/tests/unit_tests/extensions/test_redis.py index 933fa32894..21248439bf 100644 --- a/api/tests/unit_tests/extensions/test_redis.py +++ b/api/tests/unit_tests/extensions/test_redis.py @@ -1,53 +1,224 @@ +from unittest.mock import MagicMock, patch + from redis import RedisError +from redis.retry import Retry -from extensions.ext_redis import redis_fallback +from extensions.ext_redis import ( + RedisClientWrapper, + _get_base_redis_params, + _get_cluster_connection_health_params, + _get_connection_health_params, + _normalize_redis_key_prefix, + _serialize_redis_name, + redis_fallback, +) -def test_redis_fallback_success(): - @redis_fallback(default_return=None) - def test_func(): - return "success" +class TestGetConnectionHealthParams: + @patch("extensions.ext_redis.dify_config") + def test_includes_all_health_params(self, mock_config): + mock_config.REDIS_RETRY_RETRIES = 3 + mock_config.REDIS_RETRY_BACKOFF_BASE = 1.0 + mock_config.REDIS_RETRY_BACKOFF_CAP = 10.0 + mock_config.REDIS_SOCKET_TIMEOUT = 5.0 + mock_config.REDIS_SOCKET_CONNECT_TIMEOUT = 5.0 + mock_config.REDIS_HEALTH_CHECK_INTERVAL = 30 - assert test_func() == "success" + params = _get_connection_health_params() + + assert "retry" in params + assert "socket_timeout" in params + assert "socket_connect_timeout" in params + assert "health_check_interval" in params + assert isinstance(params["retry"], Retry) + assert params["retry"]._retries == 3 + assert params["socket_timeout"] == 5.0 + assert params["socket_connect_timeout"] == 5.0 + assert params["health_check_interval"] == 30 -def test_redis_fallback_error(): - @redis_fallback(default_return="fallback") - def test_func(): - raise RedisError("Redis error") +class TestGetClusterConnectionHealthParams: + @patch("extensions.ext_redis.dify_config") + def test_excludes_health_check_interval(self, mock_config): + mock_config.REDIS_RETRY_RETRIES = 3 + mock_config.REDIS_RETRY_BACKOFF_BASE = 1.0 + mock_config.REDIS_RETRY_BACKOFF_CAP = 10.0 + mock_config.REDIS_SOCKET_TIMEOUT = 5.0 + mock_config.REDIS_SOCKET_CONNECT_TIMEOUT = 5.0 + mock_config.REDIS_HEALTH_CHECK_INTERVAL = 30 - assert test_func() == "fallback" + params = _get_cluster_connection_health_params() + + assert "retry" in params + assert "socket_timeout" in params + assert "socket_connect_timeout" in params + assert "health_check_interval" not in params -def test_redis_fallback_none_default(): - @redis_fallback() - def test_func(): - raise RedisError("Redis error") +class TestGetBaseRedisParams: + @patch("extensions.ext_redis.dify_config") + def test_includes_retry_and_health_params(self, mock_config): + mock_config.REDIS_USERNAME = None + mock_config.REDIS_PASSWORD = None + mock_config.REDIS_DB = 0 + mock_config.REDIS_SERIALIZATION_PROTOCOL = 3 + mock_config.REDIS_ENABLE_CLIENT_SIDE_CACHE = False + mock_config.REDIS_RETRY_RETRIES = 3 + mock_config.REDIS_RETRY_BACKOFF_BASE = 1.0 + mock_config.REDIS_RETRY_BACKOFF_CAP = 10.0 + mock_config.REDIS_SOCKET_TIMEOUT = 5.0 + mock_config.REDIS_SOCKET_CONNECT_TIMEOUT = 5.0 + mock_config.REDIS_HEALTH_CHECK_INTERVAL = 30 - assert test_func() is None + params = _get_base_redis_params() + + assert "retry" in params + assert isinstance(params["retry"], Retry) + assert params["socket_timeout"] == 5.0 + assert params["socket_connect_timeout"] == 5.0 + assert params["health_check_interval"] == 30 + # Existing params still present + assert params["db"] == 0 + assert params["encoding"] == "utf-8" -def test_redis_fallback_with_args(): - @redis_fallback(default_return=0) - def test_func(x, y): - raise RedisError("Redis error") +class TestRedisFallback: + def test_redis_fallback_success(self): + @redis_fallback(default_return=None) + def test_func(): + return "success" - assert test_func(1, 2) == 0 + assert test_func() == "success" + + def test_redis_fallback_error(self): + @redis_fallback(default_return="fallback") + def test_func(): + raise RedisError("Redis error") + + assert test_func() == "fallback" + + def test_redis_fallback_none_default(self): + @redis_fallback() + def test_func(): + raise RedisError("Redis error") + + assert test_func() is None + + def test_redis_fallback_with_args(self): + @redis_fallback(default_return=0) + def test_func(x, y): + raise RedisError("Redis error") + + assert test_func(1, 2) == 0 + + def test_redis_fallback_with_kwargs(self): + @redis_fallback(default_return={}) + def test_func(x=None, y=None): + raise RedisError("Redis error") + + assert test_func(x=1, y=2) == {} + + def test_redis_fallback_preserves_function_metadata(self): + @redis_fallback(default_return=None) + def test_func(): + """Test function docstring""" + pass + + assert test_func.__name__ == "test_func" + assert test_func.__doc__ == "Test function docstring" -def test_redis_fallback_with_kwargs(): - @redis_fallback(default_return={}) - def test_func(x=None, y=None): - raise RedisError("Redis error") +class TestRedisKeyPrefixHelpers: + def test_normalize_redis_key_prefix_trims_whitespace(self): + assert _normalize_redis_key_prefix(" enterprise-a ") == "enterprise-a" - assert test_func(x=1, y=2) == {} + def test_normalize_redis_key_prefix_treats_whitespace_only_as_empty(self): + assert _normalize_redis_key_prefix(" ") == "" + + def test_serialize_redis_name_returns_original_when_prefix_empty(self): + assert _serialize_redis_name("model_lb_index:test", "") == "model_lb_index:test" + + def test_serialize_redis_name_adds_single_colon_separator(self): + assert _serialize_redis_name("model_lb_index:test", "enterprise-a") == "enterprise-a:model_lb_index:test" -def test_redis_fallback_preserves_function_metadata(): - @redis_fallback(default_return=None) - def test_func(): - """Test function docstring""" - pass +class TestRedisClientWrapperKeyPrefix: + def test_wrapper_get_prefixes_string_keys(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) - assert test_func.__name__ == "test_func" - assert test_func.__doc__ == "Test function docstring" + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.get("oauth_state:abc") + + mock_client.get.assert_called_once_with("enterprise-a:oauth_state:abc") + + def test_wrapper_delete_prefixes_multiple_keys(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.delete("key:a", "key:b") + + mock_client.delete.assert_called_once_with("enterprise-a:key:a", "enterprise-a:key:b") + + def test_wrapper_lock_prefixes_lock_name(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.lock("resource-lock", timeout=10) + + mock_client.lock.assert_called_once() + args, kwargs = mock_client.lock.call_args + assert args == ("enterprise-a:resource-lock",) + assert kwargs["timeout"] == 10 + + def test_wrapper_hash_operations_prefix_key_name(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.hset("hash:key", "field", "value") + wrapper.hgetall("hash:key") + + mock_client.hset.assert_called_once_with("enterprise-a:hash:key", "field", "value") + mock_client.hgetall.assert_called_once_with("enterprise-a:hash:key") + + def test_wrapper_zadd_prefixes_sorted_set_name(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.zadd("zset:key", {"member": 1}) + + mock_client.zadd.assert_called_once() + args, kwargs = mock_client.zadd.call_args + assert args == ("enterprise-a:zset:key", {"member": 1}) + assert kwargs["nx"] is False + + def test_wrapper_preserves_keys_when_prefix_is_empty(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = " " + + wrapper.get("plain:key") + + mock_client.get.assert_called_once_with("plain:key") diff --git a/api/tests/unit_tests/factories/test_build_from_mapping.py b/api/tests/unit_tests/factories/test_build_from_mapping.py index 4fe3f2cb28..511192001e 100644 --- a/api/tests/unit_tests/factories/test_build_from_mapping.py +++ b/api/tests/unit_tests/factories/test_build_from_mapping.py @@ -2,13 +2,13 @@ import uuid from unittest.mock import MagicMock, patch import pytest -from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig from httpx import Response from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.app.file_access import DatabaseFileAccessController, FileAccessScope, bind_file_access_scope from core.workflow.file_reference import build_file_reference, parse_file_reference, resolve_file_record_id from factories.file_factory.builders import build_from_mapping as _build_from_mapping +from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig from models import ToolFile, UploadFile # Test Data diff --git a/api/tests/unit_tests/factories/test_variable_factory.py b/api/tests/unit_tests/factories/test_variable_factory.py index a06c42507d..c35e80a826 100644 --- a/api/tests/unit_tests/factories/test_variable_factory.py +++ b/api/tests/unit_tests/factories/test_variable_factory.py @@ -4,6 +4,11 @@ from typing import Any from uuid import uuid4 import pytest +from hypothesis import HealthCheck, given, settings +from hypothesis import strategies as st + +from factories import variable_factory +from factories.variable_factory import TypeMismatchError, build_segment, build_segment_with_type from graphon.file import File, FileTransferMethod, FileType from graphon.variables import ( ArrayNumberVariable, @@ -31,11 +36,6 @@ from graphon.variables.segments import ( StringSegment, ) from graphon.variables.types import SegmentType -from hypothesis import HealthCheck, given, settings -from hypothesis import strategies as st - -from factories import variable_factory -from factories.variable_factory import TypeMismatchError, build_segment, build_segment_with_type def test_string_variable(): diff --git a/api/tests/unit_tests/fields/test_file_fields.py b/api/tests/unit_tests/fields/test_file_fields.py index 0e848d6ef5..9d9f626b9e 100644 --- a/api/tests/unit_tests/fields/test_file_fields.py +++ b/api/tests/unit_tests/fields/test_file_fields.py @@ -4,11 +4,11 @@ from datetime import datetime from types import SimpleNamespace import pytest -from graphon.file import File, FileTransferMethod, FileType from core.workflow.file_reference import build_file_reference from fields import conversation_fields, message_fields from fields.file_fields import FileResponse, FileWithSignedUrl, RemoteFileInfo, UploadConfig +from graphon.file import File, FileTransferMethod, FileType def test_file_response_serializes_datetime() -> None: diff --git a/api/tests/unit_tests/libs/_human_input/support.py b/api/tests/unit_tests/libs/_human_input/support.py index 13577b7ca5..e6cc23161e 100644 --- a/api/tests/unit_tests/libs/_human_input/support.py +++ b/api/tests/unit_tests/libs/_human_input/support.py @@ -6,7 +6,6 @@ from typing import Any from graphon.nodes.human_input.entities import FormInput from graphon.nodes.human_input.enums import TimeoutUnit - from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/libs/_human_input/test_form_service.py b/api/tests/unit_tests/libs/_human_input/test_form_service.py index f1ce1a2c1c..fa2c02020b 100644 --- a/api/tests/unit_tests/libs/_human_input/test_form_service.py +++ b/api/tests/unit_tests/libs/_human_input/test_form_service.py @@ -5,6 +5,7 @@ Unit tests for FormService. from datetime import timedelta import pytest + from graphon.nodes.human_input.entities import ( FormInput, UserAction, @@ -13,7 +14,6 @@ from graphon.nodes.human_input.enums import ( FormInputType, TimeoutUnit, ) - from libs.datetime_utils import naive_utc_now from .support import ( diff --git a/api/tests/unit_tests/libs/_human_input/test_models.py b/api/tests/unit_tests/libs/_human_input/test_models.py index 0babfbb315..866ee61b3e 100644 --- a/api/tests/unit_tests/libs/_human_input/test_models.py +++ b/api/tests/unit_tests/libs/_human_input/test_models.py @@ -5,6 +5,7 @@ Unit tests for human input form models. from datetime import datetime, timedelta import pytest + from graphon.nodes.human_input.entities import ( FormInput, UserAction, @@ -13,7 +14,6 @@ from graphon.nodes.human_input.enums import ( FormInputType, TimeoutUnit, ) - from libs.datetime_utils import naive_utc_now from .support import FormSubmissionData, FormSubmissionRequest, HumanInputForm diff --git a/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py b/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py index 460374b6f6..8bef01c1ed 100644 --- a/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py +++ b/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py @@ -139,6 +139,28 @@ class TestTopic: mock_redis_client.publish.assert_called_once_with("test-topic", payload) + def test_publish_prefixes_regular_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + topic = Topic(mock_redis_client, "test-topic") + + topic.publish(b"test message") + + mock_redis_client.publish.assert_called_once_with("enterprise-a:test-topic", b"test message") + + def test_subscribe_prefixes_regular_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + topic = Topic(mock_redis_client, "test-topic") + + subscription = topic.subscribe() + try: + subscription._start_if_needed() + finally: + subscription.close() + + mock_redis_client.pubsub.return_value.subscribe.assert_called_once_with("enterprise-a:test-topic") + class TestShardedTopic: """Test cases for the ShardedTopic class.""" @@ -176,6 +198,15 @@ class TestShardedTopic: mock_redis_client.spublish.assert_called_once_with("test-sharded-topic", payload) + def test_publish_prefixes_sharded_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + sharded_topic = ShardedTopic(mock_redis_client, "test-sharded-topic") + + sharded_topic.publish(b"test sharded message") + + mock_redis_client.spublish.assert_called_once_with("enterprise-a:test-sharded-topic", b"test sharded message") + def test_subscribe_returns_sharded_subscription(self, sharded_topic: ShardedTopic, mock_redis_client: MagicMock): """Test that subscribe() returns a _RedisShardedSubscription instance.""" subscription = sharded_topic.subscribe() @@ -185,6 +216,19 @@ class TestShardedTopic: assert subscription._pubsub is mock_redis_client.pubsub.return_value assert subscription._topic == "test-sharded-topic" + def test_subscribe_prefixes_sharded_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + sharded_topic = ShardedTopic(mock_redis_client, "test-sharded-topic") + + subscription = sharded_topic.subscribe() + try: + subscription._start_if_needed() + finally: + subscription.close() + + mock_redis_client.pubsub.return_value.ssubscribe.assert_called_once_with("enterprise-a:test-sharded-topic") + @dataclasses.dataclass(frozen=True) class SubscriptionTestCase: diff --git a/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py b/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py index 0886b70ee5..c6f57c7e59 100644 --- a/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py +++ b/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py @@ -1,7 +1,8 @@ import threading import time from dataclasses import dataclass -from typing import cast +from typing import Any, cast +from unittest.mock import patch import pytest @@ -29,7 +30,7 @@ class FakeStreamsRedis: self._dollar_snapshots: dict[str, int] = {} # Publisher API - def xadd(self, key: str, fields: dict, *, maxlen: int | None = None) -> str: + def xadd(self, key: str, fields: dict[str, Any], *, maxlen: int | None = None) -> str: """Append entry to stream; accept optional maxlen for API compatibility. The test double ignores maxlen trimming semantics; only records the entry. @@ -44,7 +45,7 @@ class FakeStreamsRedis: self._expire_calls[key] = self._expire_calls.get(key, 0) + 1 # Consumer API - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): # Expect a single key assert len(streams) == 1 key, last_id = next(iter(streams.items())) @@ -79,7 +80,7 @@ class BlockingRedis: def __init__(self) -> None: self._release = threading.Event() - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): self._release.wait(timeout=block / 1000.0 if block else None) return [] @@ -150,6 +151,25 @@ class TestStreamsBroadcastChannel: # Expire called after publish assert fake_redis._expire_calls.get("stream:beta", 0) >= 1 + def test_topic_uses_prefixed_stream_key(self, fake_redis: FakeStreamsRedis): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + topic = StreamsBroadcastChannel(fake_redis, retention_seconds=60).topic("alpha") + + assert topic._topic == "alpha" + assert topic._key == "enterprise-a:stream:alpha" + + def test_publish_uses_prefixed_stream_key(self, fake_redis: FakeStreamsRedis): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + topic = StreamsBroadcastChannel(fake_redis, retention_seconds=60).topic("beta") + + topic.publish(b"hello") + + assert fake_redis._store["enterprise-a:stream:beta"][0][1] == {b"data": b"hello"} + assert fake_redis._expire_calls.get("enterprise-a:stream:beta", 0) >= 1 + def test_topic_exposes_self_as_producer_and_subscriber(self, streams_channel: StreamsBroadcastChannel): topic = streams_channel.topic("producer-subscriber") @@ -225,7 +245,7 @@ class TestStreamsSubscription: self._fields = fields self._calls = 0 - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): self._calls += 1 if self._calls == 1: key = next(iter(streams)) diff --git a/api/tests/unit_tests/libs/test_email_i18n.py b/api/tests/unit_tests/libs/test_email_i18n.py index 962a36fe03..b4c0eaf7ee 100644 --- a/api/tests/unit_tests/libs/test_email_i18n.py +++ b/api/tests/unit_tests/libs/test_email_i18n.py @@ -503,6 +503,7 @@ class TestEmailI18nIntegration: EmailType.ACCOUNT_DELETION_VERIFICATION, EmailType.QUEUE_MONITOR_ALERT, EmailType.DOCUMENT_CLEAN_NOTIFY, + EmailType.WORKFLOW_COMMENT_MENTION, ] for email_type in expected_types: diff --git a/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py b/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py new file mode 100644 index 0000000000..cad9d47bba --- /dev/null +++ b/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py @@ -0,0 +1,139 @@ +import json +from typing import Any + +from libs.pyrefly_type_coverage import ( + CoverageSummary, + format_comparison_markdown, + format_summary_markdown, + parse_summary, +) + + +def _make_report(summary: dict[str, Any]) -> str: + return json.dumps({"module_reports": [], "summary": summary}) + + +_SAMPLE_SUMMARY: dict[str, Any] = { + "n_modules": 100, + "n_typable": 1000, + "n_typed": 400, + "n_any": 50, + "n_untyped": 550, + "coverage": 45.0, + "strict_coverage": 40.0, + "n_functions": 200, + "n_methods": 300, + "n_function_params": 150, + "n_method_params": 250, + "n_classes": 80, + "n_attrs": 40, + "n_properties": 20, + "n_type_ignores": 10, +} + + +def _make_summary( + *, + n_modules: int = 100, + n_typable: int = 1000, + n_typed: int = 400, + n_any: int = 50, + n_untyped: int = 550, + coverage: float = 45.0, + strict_coverage: float = 40.0, +) -> CoverageSummary: + return { + "n_modules": n_modules, + "n_typable": n_typable, + "n_typed": n_typed, + "n_any": n_any, + "n_untyped": n_untyped, + "coverage": coverage, + "strict_coverage": strict_coverage, + } + + +def test_parse_summary_extracts_fields() -> None: + report_json = _make_report(_SAMPLE_SUMMARY) + + result = parse_summary(report_json) + + assert result["n_modules"] == 100 + assert result["n_typable"] == 1000 + assert result["n_typed"] == 400 + assert result["n_any"] == 50 + assert result["n_untyped"] == 550 + assert result["coverage"] == 45.0 + assert result["strict_coverage"] == 40.0 + + +def test_parse_summary_handles_empty_input() -> None: + assert parse_summary("")["n_modules"] == 0 + assert parse_summary(" ")["n_modules"] == 0 + + +def test_parse_summary_handles_invalid_json() -> None: + assert parse_summary("not json")["n_modules"] == 0 + + +def test_parse_summary_handles_missing_summary_key() -> None: + assert parse_summary(json.dumps({"other": 1}))["n_modules"] == 0 + + +def test_parse_summary_handles_incomplete_summary() -> None: + partial = json.dumps({"summary": {"n_modules": 5}}) + assert parse_summary(partial)["n_modules"] == 0 + + +def test_format_summary_markdown_contains_key_metrics() -> None: + summary = _make_summary() + + result = format_summary_markdown(summary) + + assert "**Type coverage**" in result + assert "45.00%" in result + assert "40.00%" in result + assert "| Modules | 100 |" in result + + +def test_format_comparison_markdown_shows_positive_delta() -> None: + base = _make_summary() + pr = _make_summary( + n_modules=101, + n_typable=1010, + n_typed=420, + n_untyped=540, + coverage=46.53, + strict_coverage=41.58, + ) + + result = format_comparison_markdown(base, pr) + + assert "| Base | PR | Delta |" in result + assert "+1.53%" in result + assert "+1.58%" in result + assert "+20" in result + + +def test_format_comparison_markdown_shows_negative_delta() -> None: + base = _make_summary() + pr = _make_summary( + n_typed=390, + n_any=60, + coverage=44.0, + strict_coverage=39.0, + ) + + result = format_comparison_markdown(base, pr) + + assert "-1.00%" in result + assert "-10" in result + + +def test_format_comparison_markdown_shows_zero_delta() -> None: + summary = _make_summary() + + result = format_comparison_markdown(summary, summary) + + assert "0.00%" in result + assert "| 0 |" in result diff --git a/api/tests/unit_tests/libs/test_sendgrid_client.py b/api/tests/unit_tests/libs/test_sendgrid_client.py index 85744003c7..a65a9b1882 100644 --- a/api/tests/unit_tests/libs/test_sendgrid_client.py +++ b/api/tests/unit_tests/libs/test_sendgrid_client.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -6,7 +7,7 @@ from python_http_client.exceptions import UnauthorizedError from libs.sendgrid import SendGridClient -def _mail(to: str = "user@example.com") -> dict: +def _mail(to: str = "user@example.com") -> dict[str, Any]: return {"to": to, "subject": "Hi", "html": "Hi"} diff --git a/api/tests/unit_tests/libs/test_smtp_client.py b/api/tests/unit_tests/libs/test_smtp_client.py index 1edf4899ac..96d62de2d6 100644 --- a/api/tests/unit_tests/libs/test_smtp_client.py +++ b/api/tests/unit_tests/libs/test_smtp_client.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import ANY, MagicMock, patch import pytest @@ -5,7 +6,7 @@ import pytest from libs.smtp import SMTPClient -def _mail() -> dict: +def _mail() -> dict[str, Any]: return {"to": "user@example.com", "subject": "Hi", "html": "Hi"} diff --git a/api/tests/unit_tests/models/test_account_models.py b/api/tests/unit_tests/models/test_account_models.py index f48db77bb5..25933dd15b 100644 --- a/api/tests/unit_tests/models/test_account_models.py +++ b/api/tests/unit_tests/models/test_account_models.py @@ -12,7 +12,6 @@ This test suite covers: import base64 import secrets from datetime import UTC, datetime -from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest @@ -310,90 +309,6 @@ class TestAccountStatusTransitions: class TestTenantRelationshipIntegrity: """Test suite for tenant relationship integrity.""" - @patch("models.account.db") - def test_account_current_tenant_property(self, mock_db): - """Test the current_tenant property getter.""" - # Arrange - account = Account( - name="Test User", - email="test@example.com", - ) - account.id = str(uuid4()) - - tenant = Tenant(name="Test Tenant") - tenant.id = str(uuid4()) - - account._current_tenant = tenant - - # Act - result = account.current_tenant - - # Assert - assert result == tenant - - @patch("models.account.Session") - @patch("models.account.db") - def test_account_current_tenant_setter_with_valid_tenant(self, mock_db, mock_session_class): - """Test setting current_tenant with a valid tenant relationship.""" - # Arrange - account = Account( - name="Test User", - email="test@example.com", - ) - account.id = str(uuid4()) - - tenant = Tenant(name="Test Tenant") - tenant.id = str(uuid4()) - - # Mock the session and queries - mock_session = MagicMock() - mock_session_class.return_value.__enter__.return_value = mock_session - - # Mock TenantAccountJoin query result - tenant_join = TenantAccountJoin( - tenant_id=tenant.id, - account_id=account.id, - role=TenantAccountRole.OWNER, - ) - mock_session.scalar.return_value = tenant_join - - # Mock Tenant query result - mock_session.scalars.return_value.one.return_value = tenant - - # Act - account.current_tenant = tenant - - # Assert - assert account._current_tenant == tenant - assert account.role == TenantAccountRole.OWNER - - @patch("models.account.Session") - @patch("models.account.db") - def test_account_current_tenant_setter_without_relationship(self, mock_db, mock_session_class): - """Test setting current_tenant when no relationship exists.""" - # Arrange - account = Account( - name="Test User", - email="test@example.com", - ) - account.id = str(uuid4()) - - tenant = Tenant(name="Test Tenant") - tenant.id = str(uuid4()) - - # Mock the session and queries - mock_session = MagicMock() - mock_session_class.return_value.__enter__.return_value = mock_session - - # Mock no TenantAccountJoin found - mock_session.scalar.return_value = None - - # Act - account.current_tenant = tenant - - # Assert - assert account._current_tenant is None - def test_account_current_tenant_id_property(self): """Test the current_tenant_id property.""" # Arrange @@ -418,61 +333,6 @@ class TestTenantRelationshipIntegrity: # Assert assert tenant_id_none is None - @patch("models.account.Session") - @patch("models.account.db") - def test_account_set_tenant_id_method(self, mock_db, mock_session_class): - """Test the set_tenant_id method.""" - # Arrange - account = Account( - name="Test User", - email="test@example.com", - ) - account.id = str(uuid4()) - - tenant = Tenant(name="Test Tenant") - tenant.id = str(uuid4()) - - tenant_join = TenantAccountJoin( - tenant_id=tenant.id, - account_id=account.id, - role=TenantAccountRole.ADMIN, - ) - - # Mock the session and queries - mock_session = MagicMock() - mock_session_class.return_value.__enter__.return_value = mock_session - mock_session.execute.return_value.first.return_value = (tenant, tenant_join) - - # Act - account.set_tenant_id(tenant.id) - - # Assert - assert account._current_tenant == tenant - assert account.role == TenantAccountRole.ADMIN - - @patch("models.account.Session") - @patch("models.account.db") - def test_account_set_tenant_id_with_no_relationship(self, mock_db, mock_session_class): - """Test set_tenant_id when no relationship exists.""" - # Arrange - account = Account( - name="Test User", - email="test@example.com", - ) - account.id = str(uuid4()) - tenant_id = str(uuid4()) - - # Mock the session and queries - mock_session = MagicMock() - mock_session_class.return_value.__enter__.return_value = mock_session - mock_session.execute.return_value.first.return_value = None - - # Act - account.set_tenant_id(tenant_id) - - # Assert - should not set tenant when no relationship exists - # The method returns early without setting _current_tenant - class TestAccountRolePermissions: """Test suite for account role permissions.""" @@ -605,51 +465,6 @@ class TestAccountRolePermissions: assert current_role == TenantAccountRole.EDITOR -class TestAccountGetByOpenId: - """Test suite for get_by_openid class method.""" - - @patch("models.account.db") - def test_get_by_openid_success(self, mock_db): - """Test successful retrieval of account by OpenID.""" - # Arrange - provider = "google" - open_id = "google_user_123" - account_id = str(uuid4()) - - mock_account_integrate = MagicMock() - mock_account_integrate.account_id = account_id - - mock_account = Account(name="Test User", email="test@example.com") - mock_account.id = account_id - - # Mock db.session.execute().scalar_one_or_none() for AccountIntegrate lookup - mock_db.session.execute.return_value.scalar_one_or_none.return_value = mock_account_integrate - # Mock db.session.scalar() for Account lookup - mock_db.session.scalar.return_value = mock_account - - # Act - result = Account.get_by_openid(provider, open_id) - - # Assert - assert result == mock_account - - @patch("models.account.db") - def test_get_by_openid_not_found(self, mock_db): - """Test get_by_openid when account integrate doesn't exist.""" - # Arrange - provider = "github" - open_id = "github_user_456" - - # Mock db.session.execute().scalar_one_or_none() to return None - mock_db.session.execute.return_value.scalar_one_or_none.return_value = None - - # Act - result = Account.get_by_openid(provider, open_id) - - # Assert - assert result is None - - class TestTenantAccountJoinModel: """Test suite for TenantAccountJoin model.""" @@ -760,31 +575,6 @@ class TestTenantModel: # Assert assert tenant.custom_config == '{"feature1": true, "feature2": "value"}' - @patch("models.account.db") - def test_tenant_get_accounts(self, mock_db): - """Test getting accounts associated with a tenant.""" - # Arrange - tenant = Tenant(name="Test Workspace") - tenant.id = str(uuid4()) - - account1 = Account(name="User 1", email="user1@example.com") - account1.id = str(uuid4()) - account2 = Account(name="User 2", email="user2@example.com") - account2.id = str(uuid4()) - - # Mock the query chain - mock_scalars = MagicMock() - mock_scalars.all.return_value = [account1, account2] - mock_db.session.scalars.return_value = mock_scalars - - # Act - accounts = tenant.get_accounts() - - # Assert - assert len(accounts) == 2 - assert account1 in accounts - assert account2 in accounts - class TestTenantStatusEnum: """Test suite for TenantStatus enum.""" diff --git a/api/tests/unit_tests/models/test_app_models.py b/api/tests/unit_tests/models/test_app_models.py index 59597fb8cd..4e46cf9654 100644 --- a/api/tests/unit_tests/models/test_app_models.py +++ b/api/tests/unit_tests/models/test_app_models.py @@ -291,24 +291,6 @@ class TestAppModelConfig: # Assert assert result == questions - def test_app_model_config_annotation_reply_dict_disabled(self): - """Test annotation_reply_dict when annotation is disabled.""" - # Arrange - config = AppModelConfig( - app_id=str(uuid4()), - provider="openai", - model_id="gpt-4", - created_by=str(uuid4()), - ) - - # Mock database scalar to return None (no annotation setting found) - with patch("models.model.db.session.scalar", return_value=None): - # Act - result = config.annotation_reply_dict - - # Assert - assert result == {"enabled": False} - class TestConversationModel: """Test suite for Conversation model integrity.""" @@ -948,17 +930,6 @@ class TestSiteModel: with pytest.raises(ValueError, match="Custom disclaimer cannot exceed 512 characters"): site.custom_disclaimer = long_disclaimer - def test_site_generate_code(self): - """Test Site.generate_code static method.""" - # Mock database scalar to return 0 (no existing codes) - with patch("models.model.db.session.scalar", return_value=0): - # Act - code = Site.generate_code(8) - - # Assert - assert isinstance(code, str) - assert len(code) == 8 - class TestModelIntegration: """Test suite for model integration scenarios.""" @@ -1146,314 +1117,3 @@ class TestModelIntegration: # Assert assert site.app_id == app.id assert app.enable_site is True - - -class TestConversationStatusCount: - """Test suite for Conversation.status_count property N+1 query fix.""" - - def test_status_count_no_messages(self): - """Test status_count returns None when conversation has no messages.""" - # Arrange - conversation = Conversation( - app_id=str(uuid4()), - mode=AppMode.CHAT, - name="Test Conversation", - status="normal", - from_source=ConversationFromSource.API, - ) - conversation.id = str(uuid4()) - - # Mock the database query to return no messages - with patch("models.model.db.session.scalars") as mock_scalars: - mock_scalars.return_value.all.return_value = [] - - # Act - result = conversation.status_count - - # Assert - assert result is None - - def test_status_count_messages_without_workflow_runs(self): - """Test status_count when messages have no workflow_run_id.""" - # Arrange - app_id = str(uuid4()) - conversation_id = str(uuid4()) - - conversation = Conversation( - app_id=app_id, - mode=AppMode.CHAT, - name="Test Conversation", - status="normal", - from_source=ConversationFromSource.API, - ) - conversation.id = conversation_id - - # Mock the database query to return no messages with workflow_run_id - with patch("models.model.db.session.scalars") as mock_scalars: - mock_scalars.return_value.all.return_value = [] - - # Act - result = conversation.status_count - - # Assert - assert result is None - - def test_status_count_batch_loading_implementation(self): - """Test that status_count uses batch loading instead of N+1 queries.""" - # Arrange - from graphon.enums import WorkflowExecutionStatus - - app_id = str(uuid4()) - conversation_id = str(uuid4()) - - # Create workflow run IDs - workflow_run_id_1 = str(uuid4()) - workflow_run_id_2 = str(uuid4()) - workflow_run_id_3 = str(uuid4()) - - conversation = Conversation( - app_id=app_id, - mode=AppMode.CHAT, - name="Test Conversation", - status="normal", - from_source=ConversationFromSource.API, - ) - conversation.id = conversation_id - - # Mock messages with workflow_run_id - mock_messages = [ - MagicMock( - conversation_id=conversation_id, - workflow_run_id=workflow_run_id_1, - ), - MagicMock( - conversation_id=conversation_id, - workflow_run_id=workflow_run_id_2, - ), - MagicMock( - conversation_id=conversation_id, - workflow_run_id=workflow_run_id_3, - ), - ] - - # Mock workflow runs with different statuses - mock_workflow_runs = [ - MagicMock( - id=workflow_run_id_1, - status=WorkflowExecutionStatus.SUCCEEDED.value, - app_id=app_id, - ), - MagicMock( - id=workflow_run_id_2, - status=WorkflowExecutionStatus.FAILED.value, - app_id=app_id, - ), - MagicMock( - id=workflow_run_id_3, - status=WorkflowExecutionStatus.PARTIAL_SUCCEEDED.value, - app_id=app_id, - ), - ] - - # Track database calls - calls_made = [] - - def mock_scalars(query): - calls_made.append(str(query)) - mock_result = MagicMock() - - # Return messages for the first query (messages with workflow_run_id) - if "messages" in str(query) and "conversation_id" in str(query): - mock_result.all.return_value = mock_messages - # Return workflow runs for the batch query - elif "workflow_runs" in str(query): - mock_result.all.return_value = mock_workflow_runs - else: - mock_result.all.return_value = [] - - return mock_result - - # Act & Assert - with patch("models.model.db.session.scalars", side_effect=mock_scalars): - result = conversation.status_count - - # Verify only 2 database queries were made (not N+1) - assert len(calls_made) == 2, f"Expected 2 queries, got {len(calls_made)}: {calls_made}" - - # Verify the first query gets messages - assert "messages" in calls_made[0] - assert "conversation_id" in calls_made[0] - - # Verify the second query batch loads workflow runs with proper filtering - assert "workflow_runs" in calls_made[1] - assert "app_id" in calls_made[1] # Security filter applied - assert "IN" in calls_made[1] # Batch loading with IN clause - - # Verify correct status counts - assert result["success"] == 1 # One SUCCEEDED - assert result["failed"] == 1 # One FAILED - assert result["partial_success"] == 1 # One PARTIAL_SUCCEEDED - assert result["paused"] == 0 - - def test_status_count_app_id_filtering(self): - """Test that status_count filters workflow runs by app_id for security.""" - # Arrange - app_id = str(uuid4()) - other_app_id = str(uuid4()) - conversation_id = str(uuid4()) - workflow_run_id = str(uuid4()) - - conversation = Conversation( - app_id=app_id, - mode=AppMode.CHAT, - name="Test Conversation", - status="normal", - from_source=ConversationFromSource.API, - ) - conversation.id = conversation_id - - # Mock message with workflow_run_id - mock_messages = [ - MagicMock( - conversation_id=conversation_id, - workflow_run_id=workflow_run_id, - ), - ] - - calls_made = [] - - def mock_scalars(query): - calls_made.append(str(query)) - mock_result = MagicMock() - - if "messages" in str(query): - mock_result.all.return_value = mock_messages - elif "workflow_runs" in str(query): - # Return empty list because no workflow run matches the correct app_id - mock_result.all.return_value = [] # Workflow run filtered out by app_id - else: - mock_result.all.return_value = [] - - return mock_result - - # Act - with patch("models.model.db.session.scalars", side_effect=mock_scalars): - result = conversation.status_count - - # Assert - query should include app_id filter - workflow_query = calls_made[1] - assert "app_id" in workflow_query - - # Since workflow run has wrong app_id, it shouldn't be included in counts - assert result["success"] == 0 - assert result["failed"] == 0 - assert result["partial_success"] == 0 - assert result["paused"] == 0 - - def test_status_count_handles_invalid_workflow_status(self): - """Test that status_count gracefully handles invalid workflow status values.""" - # Arrange - app_id = str(uuid4()) - conversation_id = str(uuid4()) - workflow_run_id = str(uuid4()) - - conversation = Conversation( - app_id=app_id, - mode=AppMode.CHAT, - name="Test Conversation", - status="normal", - from_source=ConversationFromSource.API, - ) - conversation.id = conversation_id - - mock_messages = [ - MagicMock( - conversation_id=conversation_id, - workflow_run_id=workflow_run_id, - ), - ] - - # Mock workflow run with invalid status - mock_workflow_runs = [ - MagicMock( - id=workflow_run_id, - status="invalid_status", # Invalid status that should raise ValueError - app_id=app_id, - ), - ] - - with patch("models.model.db.session.scalars") as mock_scalars: - # Mock the messages query - def mock_scalars_side_effect(query): - mock_result = MagicMock() - if "messages" in str(query): - mock_result.all.return_value = mock_messages - elif "workflow_runs" in str(query): - mock_result.all.return_value = mock_workflow_runs - else: - mock_result.all.return_value = [] - return mock_result - - mock_scalars.side_effect = mock_scalars_side_effect - - # Act - should not raise exception - result = conversation.status_count - - # Assert - should handle invalid status gracefully - assert result["success"] == 0 - assert result["failed"] == 0 - assert result["partial_success"] == 0 - assert result["paused"] == 0 - - def test_status_count_paused(self): - """Test status_count includes paused workflow runs.""" - # Arrange - from graphon.enums import WorkflowExecutionStatus - - app_id = str(uuid4()) - conversation_id = str(uuid4()) - workflow_run_id = str(uuid4()) - - conversation = Conversation( - app_id=app_id, - mode=AppMode.CHAT, - name="Test Conversation", - status="normal", - from_source=ConversationFromSource.API, - ) - conversation.id = conversation_id - - mock_messages = [ - MagicMock( - conversation_id=conversation_id, - workflow_run_id=workflow_run_id, - ), - ] - - mock_workflow_runs = [ - MagicMock( - id=workflow_run_id, - status=WorkflowExecutionStatus.PAUSED.value, - app_id=app_id, - ), - ] - - with patch("models.model.db.session.scalars") as mock_scalars: - - def mock_scalars_side_effect(query): - mock_result = MagicMock() - if "messages" in str(query): - mock_result.all.return_value = mock_messages - elif "workflow_runs" in str(query): - mock_result.all.return_value = mock_workflow_runs - else: - mock_result.all.return_value = [] - return mock_result - - mock_scalars.side_effect = mock_scalars_side_effect - - # Act - result = conversation.status_count - - # Assert - assert result["paused"] == 1 diff --git a/api/tests/unit_tests/models/test_comment_models.py b/api/tests/unit_tests/models/test_comment_models.py new file mode 100644 index 0000000000..277335cbef --- /dev/null +++ b/api/tests/unit_tests/models/test_comment_models.py @@ -0,0 +1,100 @@ +from unittest.mock import Mock, patch + +from models.comment import WorkflowComment, WorkflowCommentMention, WorkflowCommentReply + + +def test_workflow_comment_account_properties_and_cache() -> None: + comment = WorkflowComment(created_by="user-1", resolved_by="user-2", content="hello", position_x=1, position_y=2) + created_account = Mock(id="user-1") + resolved_account = Mock(id="user-2") + + with patch("models.comment.db.session.get", side_effect=[created_account, resolved_account]) as get_mock: + assert comment.created_by_account is created_account + assert comment.resolved_by_account is resolved_account + assert get_mock.call_count == 2 + + comment.cache_created_by_account(created_account) + comment.cache_resolved_by_account(resolved_account) + with patch("models.comment.db.session.get") as get_mock: + assert comment.created_by_account is created_account + assert comment.resolved_by_account is resolved_account + get_mock.assert_not_called() + + comment_without_resolver = WorkflowComment( + created_by="user-1", + resolved_by=None, + content="hello", + position_x=1, + position_y=2, + ) + with patch("models.comment.db.session.get") as get_mock: + assert comment_without_resolver.resolved_by_account is None + get_mock.assert_not_called() + + +def test_workflow_comment_counts_and_participants() -> None: + reply_1 = WorkflowCommentReply(comment_id="comment-1", content="reply-1", created_by="user-2") + reply_2 = WorkflowCommentReply(comment_id="comment-1", content="reply-2", created_by="user-2") + mention_1 = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-3") + mention_2 = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-4") + comment = WorkflowComment(created_by="user-1", resolved_by=None, content="hello", position_x=1, position_y=2) + comment.replies = [reply_1, reply_2] + comment.mentions = [mention_1, mention_2] + + account_1 = Mock(id="user-1") + account_2 = Mock(id="user-2") + account_3 = Mock(id="user-3") + account_map = { + "user-1": account_1, + "user-2": account_2, + "user-3": account_3, + "user-4": None, + } + + with patch("models.comment.db.session.get", side_effect=lambda _model, user_id: account_map[user_id]) as get_mock: + participants = comment.participants + + assert comment.reply_count == 2 + assert comment.mention_count == 2 + assert set(participants) == {account_1, account_2, account_3} + assert get_mock.call_count == 4 + + +def test_workflow_comment_participants_use_cached_accounts() -> None: + reply = WorkflowCommentReply(comment_id="comment-1", content="reply-1", created_by="user-2") + mention = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-3") + comment = WorkflowComment(created_by="user-1", resolved_by=None, content="hello", position_x=1, position_y=2) + comment.replies = [reply] + comment.mentions = [mention] + + account_1 = Mock(id="user-1") + account_2 = Mock(id="user-2") + account_3 = Mock(id="user-3") + comment.cache_created_by_account(account_1) + reply.cache_created_by_account(account_2) + mention.cache_mentioned_user_account(account_3) + + with patch("models.comment.db.session.get") as get_mock: + participants = comment.participants + + assert set(participants) == {account_1, account_2, account_3} + get_mock.assert_not_called() + + +def test_reply_and_mention_account_properties_and_cache() -> None: + reply = WorkflowCommentReply(comment_id="comment-1", content="reply", created_by="user-1") + mention = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-2") + reply_account = Mock(id="user-1") + mention_account = Mock(id="user-2") + + with patch("models.comment.db.session.get", side_effect=[reply_account, mention_account]) as get_mock: + assert reply.created_by_account is reply_account + assert mention.mentioned_user_account is mention_account + assert get_mock.call_count == 2 + + reply.cache_created_by_account(reply_account) + mention.cache_mentioned_user_account(mention_account) + with patch("models.comment.db.session.get") as get_mock: + assert reply.created_by_account is reply_account + assert mention.mentioned_user_account is mention_account + get_mock.assert_not_called() diff --git a/api/tests/unit_tests/models/test_conversation_variable.py b/api/tests/unit_tests/models/test_conversation_variable.py index 86163f1554..bb3a6db1a1 100644 --- a/api/tests/unit_tests/models/test_conversation_variable.py +++ b/api/tests/unit_tests/models/test_conversation_variable.py @@ -1,8 +1,7 @@ from uuid import uuid4 -from graphon.variables import SegmentType - from factories import variable_factory +from graphon.variables import SegmentType from models import ConversationVariable diff --git a/api/tests/unit_tests/models/test_dataset_models.py b/api/tests/unit_tests/models/test_dataset_models.py index 6c8a91129b..51d95c4239 100644 --- a/api/tests/unit_tests/models/test_dataset_models.py +++ b/api/tests/unit_tests/models/test_dataset_models.py @@ -12,7 +12,7 @@ This test suite covers: import json import pickle from datetime import UTC, datetime -from unittest.mock import patch +from unittest.mock import Mock, patch from uuid import uuid4 from core.rag.index_processor.constant.index_type import IndexTechniqueType @@ -25,6 +25,7 @@ from models.dataset import ( Document, DocumentSegment, Embedding, + ExternalKnowledgeBindings, ) from models.enums import ( DataSourceType, @@ -180,6 +181,24 @@ class TestDatasetModelValidation: assert result["top_k"] == 2 assert result["score_threshold"] == 0.0 + def test_dataset_external_knowledge_info_returns_none_for_cross_tenant_template(self): + """Test external datasets fail closed when the bound template is outside the tenant.""" + dataset = Dataset( + tenant_id=str(uuid4()), + name="External Dataset", + data_source_type=DataSourceType.UPLOAD_FILE, + created_by=str(uuid4()), + provider="external", + ) + binding = Mock(spec=ExternalKnowledgeBindings) + binding.external_knowledge_id = "knowledge-1" + binding.external_knowledge_api_id = str(uuid4()) + + with patch("models.dataset.db") as mock_db: + mock_db.session.scalar.side_effect = [binding, None] + + assert dataset.external_knowledge_info is None + def test_dataset_retrieval_model_dict_property(self): """Test retrieval_model_dict property with default values.""" # Arrange diff --git a/api/tests/unit_tests/models/test_model.py b/api/tests/unit_tests/models/test_model.py index a5909f60a8..a87dd7f15a 100644 --- a/api/tests/unit_tests/models/test_model.py +++ b/api/tests/unit_tests/models/test_model.py @@ -2,9 +2,9 @@ import importlib import types import pytest -from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod from core.workflow.file_reference import build_file_reference +from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod from models.model import Conversation, Message @@ -101,118 +101,6 @@ def _build_local_file_mapping(record_id: str, *, tenant_id: str | None = None) - return mapping -@pytest.mark.parametrize("owner_cls", [Conversation, Message]) -def test_inputs_resolve_owner_tenant_for_single_file_mapping( - monkeypatch: pytest.MonkeyPatch, - owner_cls: type[Conversation] | type[Message], -): - model_module = importlib.import_module("models.model") - build_calls: list[tuple[dict[str, object], str]] = [] - - monkeypatch.setattr(model_module.db.session, "scalar", lambda _: "tenant-from-app") - - def fake_build_from_mapping(*, mapping, tenant_id, config=None, strict_type_validation=False, access_controller): - _ = config, strict_type_validation, access_controller - build_calls.append((dict(mapping), tenant_id)) - return {"tenant_id": tenant_id, "upload_file_id": mapping.get("upload_file_id")} - - monkeypatch.setattr("factories.file_factory.build_from_mapping", fake_build_from_mapping) - - owner = owner_cls(app_id="app-1") - owner.inputs = {"file": _build_local_file_mapping("upload-1")} - - restored_inputs = owner.inputs - - assert restored_inputs["file"] == {"tenant_id": "tenant-from-app", "upload_file_id": "upload-1"} - assert build_calls == [ - ( - { - **_build_local_file_mapping("upload-1"), - "upload_file_id": "upload-1", - }, - "tenant-from-app", - ) - ] - - -@pytest.mark.parametrize("owner_cls", [Conversation, Message]) -def test_inputs_resolve_owner_tenant_for_file_list_mapping( - monkeypatch: pytest.MonkeyPatch, - owner_cls: type[Conversation] | type[Message], -): - model_module = importlib.import_module("models.model") - build_calls: list[tuple[dict[str, object], str]] = [] - - monkeypatch.setattr(model_module.db.session, "scalar", lambda _: "tenant-from-app") - - def fake_build_from_mapping(*, mapping, tenant_id, config=None, strict_type_validation=False, access_controller): - _ = config, strict_type_validation, access_controller - build_calls.append((dict(mapping), tenant_id)) - return {"tenant_id": tenant_id, "upload_file_id": mapping.get("upload_file_id")} - - monkeypatch.setattr("factories.file_factory.build_from_mapping", fake_build_from_mapping) - - owner = owner_cls(app_id="app-1") - owner.inputs = { - "files": [ - _build_local_file_mapping("upload-1"), - _build_local_file_mapping("upload-2"), - ] - } - - restored_inputs = owner.inputs - - assert restored_inputs["files"] == [ - {"tenant_id": "tenant-from-app", "upload_file_id": "upload-1"}, - {"tenant_id": "tenant-from-app", "upload_file_id": "upload-2"}, - ] - assert build_calls == [ - ( - { - **_build_local_file_mapping("upload-1"), - "upload_file_id": "upload-1", - }, - "tenant-from-app", - ), - ( - { - **_build_local_file_mapping("upload-2"), - "upload_file_id": "upload-2", - }, - "tenant-from-app", - ), - ] - - -@pytest.mark.parametrize("owner_cls", [Conversation, Message]) -def test_inputs_prefer_serialized_tenant_id_when_present( - monkeypatch: pytest.MonkeyPatch, - owner_cls: type[Conversation] | type[Message], -): - model_module = importlib.import_module("models.model") - - def fail_if_called(_): - raise AssertionError("App tenant lookup should not run when tenant_id exists in the file mapping") - - monkeypatch.setattr(model_module.db.session, "scalar", fail_if_called) - - def fake_build_from_mapping(*, mapping, tenant_id, config=None, strict_type_validation=False, access_controller): - _ = config, strict_type_validation, access_controller - return {"tenant_id": tenant_id, "upload_file_id": mapping.get("upload_file_id")} - - monkeypatch.setattr("factories.file_factory.build_from_mapping", fake_build_from_mapping) - - owner = owner_cls(app_id="app-1") - owner.inputs = {"file": _build_local_file_mapping("upload-1", tenant_id="tenant-from-payload")} - - restored_inputs = owner.inputs - - assert restored_inputs["file"] == { - "tenant_id": "tenant-from-payload", - "upload_file_id": "upload-1", - } - - @pytest.mark.parametrize("owner_cls", [Conversation, Message]) def test_inputs_restore_external_remote_url_file_mappings(owner_cls: type[Conversation] | type[Message]) -> None: owner = owner_cls(app_id="app-1") diff --git a/api/tests/unit_tests/models/test_workflow.py b/api/tests/unit_tests/models/test_workflow.py index e7c0479757..f7bdc97eb5 100644 --- a/api/tests/unit_tests/models/test_workflow.py +++ b/api/tests/unit_tests/models/test_workflow.py @@ -3,14 +3,13 @@ import json from unittest import mock from uuid import uuid4 -from graphon.file import File, FileTransferMethod, FileType -from graphon.variables import FloatVariable, IntegerVariable, SecretVariable, StringVariable -from graphon.variables.segments import IntegerSegment, Segment - from constants import HIDDEN_VALUE from core.helper import encrypter from core.workflow.file_reference import build_file_reference from factories.variable_factory import build_segment +from graphon.file import File, FileTransferMethod, FileType +from graphon.variables import FloatVariable, IntegerVariable, SecretVariable, StringVariable +from graphon.variables.segments import IntegerSegment, Segment from models.workflow import ( Workflow, WorkflowDraftVariable, diff --git a/api/tests/unit_tests/models/test_workflow_models.py b/api/tests/unit_tests/models/test_workflow_models.py index 507e1c8c3a..eb9fef7587 100644 --- a/api/tests/unit_tests/models/test_workflow_models.py +++ b/api/tests/unit_tests/models/test_workflow_models.py @@ -13,12 +13,12 @@ from datetime import UTC, datetime from uuid import uuid4 import pytest + from graphon.enums import ( BuiltinNodeTypes, WorkflowExecutionStatus, WorkflowNodeExecutionStatus, ) - from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import ( Workflow, diff --git a/api/tests/unit_tests/models/test_workflow_trigger_log.py b/api/tests/unit_tests/models/test_workflow_trigger_log.py deleted file mode 100644 index 7fdad92fb6..0000000000 --- a/api/tests/unit_tests/models/test_workflow_trigger_log.py +++ /dev/null @@ -1,188 +0,0 @@ -import types - -import pytest - -from models.engine import db -from models.enums import CreatorUserRole -from models.workflow import WorkflowNodeExecutionModel - - -@pytest.fixture -def fake_db_scalar(monkeypatch): - """Provide a controllable fake for db.session.scalar (SQLAlchemy 2.0 style).""" - calls = [] - - def _install(side_effect): - def _fake_scalar(statement): - calls.append(statement) - return side_effect(statement) - - # Patch the modern API used by the model implementation - monkeypatch.setattr(db.session, "scalar", _fake_scalar) - - # Backward-compatibility: if the implementation still uses db.session.get, - # make it delegate to the same side_effect so tests remain valid on older code. - if hasattr(db.session, "get"): - - def _fake_get(*_args, **_kwargs): - return side_effect(None) - - monkeypatch.setattr(db.session, "get", _fake_get) - - return calls - - return _install - - -def make_account(id_: str = "acc-1"): - # Use a simple object to avoid constructing a full SQLAlchemy model instance - # Python 3.12 forbids reassigning __class__ for SimpleNamespace; not needed here. - obj = types.SimpleNamespace() - obj.id = id_ - return obj - - -def make_end_user(id_: str = "user-1"): - # Lightweight stand-in object; no need to spoof class identity. - obj = types.SimpleNamespace() - obj.id = id_ - return obj - - -def test_created_by_account_returns_account_when_role_account(fake_db_scalar): - account = make_account("acc-1") - - # The implementation uses db.session.scalar(select(Account)...). We only need to - # return the expected object when called; the exact SQL is irrelevant for this unit test. - def side_effect(_statement): - return account - - fake_db_scalar(side_effect) - - log = WorkflowNodeExecutionModel( - tenant_id="t1", - app_id="a1", - workflow_id="w1", - triggered_from="workflow-run", - workflow_run_id=None, - index=1, - predecessor_node_id=None, - node_execution_id=None, - node_id="n1", - node_type="start", - title="Start", - inputs=None, - process_data=None, - outputs=None, - status="succeeded", - error=None, - elapsed_time=0.0, - execution_metadata=None, - created_by_role=CreatorUserRole.ACCOUNT.value, - created_by="acc-1", - ) - - assert log.created_by_account is account - - -def test_created_by_account_returns_none_when_role_not_account(fake_db_scalar): - # Even if an Account with matching id exists, property should return None when role is END_USER - account = make_account("acc-1") - - def side_effect(_statement): - return account - - fake_db_scalar(side_effect) - - log = WorkflowNodeExecutionModel( - tenant_id="t1", - app_id="a1", - workflow_id="w1", - triggered_from="workflow-run", - workflow_run_id=None, - index=1, - predecessor_node_id=None, - node_execution_id=None, - node_id="n1", - node_type="start", - title="Start", - inputs=None, - process_data=None, - outputs=None, - status="succeeded", - error=None, - elapsed_time=0.0, - execution_metadata=None, - created_by_role=CreatorUserRole.END_USER.value, - created_by="acc-1", - ) - - assert log.created_by_account is None - - -def test_created_by_end_user_returns_end_user_when_role_end_user(fake_db_scalar): - end_user = make_end_user("user-1") - - def side_effect(_statement): - return end_user - - fake_db_scalar(side_effect) - - log = WorkflowNodeExecutionModel( - tenant_id="t1", - app_id="a1", - workflow_id="w1", - triggered_from="workflow-run", - workflow_run_id=None, - index=1, - predecessor_node_id=None, - node_execution_id=None, - node_id="n1", - node_type="start", - title="Start", - inputs=None, - process_data=None, - outputs=None, - status="succeeded", - error=None, - elapsed_time=0.0, - execution_metadata=None, - created_by_role=CreatorUserRole.END_USER.value, - created_by="user-1", - ) - - assert log.created_by_end_user is end_user - - -def test_created_by_end_user_returns_none_when_role_not_end_user(fake_db_scalar): - end_user = make_end_user("user-1") - - def side_effect(_statement): - return end_user - - fake_db_scalar(side_effect) - - log = WorkflowNodeExecutionModel( - tenant_id="t1", - app_id="a1", - workflow_id="w1", - triggered_from="workflow-run", - workflow_run_id=None, - index=1, - predecessor_node_id=None, - node_execution_id=None, - node_id="n1", - node_type="start", - title="Start", - inputs=None, - process_data=None, - outputs=None, - status="succeeded", - error=None, - elapsed_time=0.0, - execution_metadata=None, - created_by_role=CreatorUserRole.ACCOUNT.value, - created_by="user-1", - ) - - assert log.created_by_end_user is None diff --git a/api/tests/unit_tests/repositories/test_workflow_collaboration_repository.py b/api/tests/unit_tests/repositories/test_workflow_collaboration_repository.py new file mode 100644 index 0000000000..1f47e8b692 --- /dev/null +++ b/api/tests/unit_tests/repositories/test_workflow_collaboration_repository.py @@ -0,0 +1,121 @@ +import json +from unittest.mock import Mock + +import pytest + +from repositories import workflow_collaboration_repository as repo_module +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository + + +class TestWorkflowCollaborationRepository: + @pytest.fixture + def mock_redis(self, monkeypatch: pytest.MonkeyPatch) -> Mock: + mock_redis = Mock() + monkeypatch.setattr(repo_module, "redis_client", mock_redis) + return mock_redis + + def test_get_sid_mapping_returns_mapping(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.get.return_value = b'{"workflow_id":"wf-1","user_id":"u-1"}' + repository = WorkflowCollaborationRepository() + + # Act + result = repository.get_sid_mapping("sid-1") + + # Assert + assert result == {"workflow_id": "wf-1", "user_id": "u-1"} + + def test_list_sessions_filters_invalid_entries(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.hgetall.return_value = { + b"sid-1": b'{"user_id":"u-1","username":"Jane","sid":"sid-1","connected_at":2}', + b"sid-2": b'{"username":"Missing","sid":"sid-2"}', + b"sid-3": b"not-json", + } + repository = WorkflowCollaborationRepository() + + # Act + result = repository.list_sessions("wf-1") + + # Assert + assert result == [ + { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "sid": "sid-1", + "connected_at": 2, + } + ] + + def test_set_session_info_persists_payload(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.exists.return_value = True + repository = WorkflowCollaborationRepository() + payload = { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "sid": "sid-1", + "connected_at": 1, + } + + # Act + repository.set_session_info("wf-1", payload) + + # Assert + assert mock_redis.hset.called + workflow_key, sid, session_json = mock_redis.hset.call_args.args + assert workflow_key == "workflow_online_users:wf-1" + assert sid == "sid-1" + assert json.loads(session_json)["user_id"] == "u-1" + assert mock_redis.set.called + + def test_refresh_session_state_expires_keys(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.exists.return_value = True + repository = WorkflowCollaborationRepository() + + # Act + repository.refresh_session_state("wf-1", "sid-1") + + # Assert + assert mock_redis.expire.call_count == 2 + + def test_get_current_leader_decodes_bytes(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.get.return_value = b"sid-1" + repository = WorkflowCollaborationRepository() + + # Act + result = repository.get_current_leader("wf-1") + + # Assert + assert result == "sid-1" + + def test_set_leader_if_absent_uses_nx(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.set.return_value = True + repository = WorkflowCollaborationRepository() + + # Act + result = repository.set_leader_if_absent("wf-1", "sid-1") + + # Assert + assert result is True + _key, _value = mock_redis.set.call_args.args + assert _key == "workflow_leader:wf-1" + assert _value == "sid-1" + assert mock_redis.set.call_args.kwargs["nx"] is True + assert "ex" in mock_redis.set.call_args.kwargs + + def test_get_session_sids_decodes(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.hkeys.return_value = [b"sid-1", "sid-2"] + repository = WorkflowCollaborationRepository() + + # Act + result = repository.get_session_sids("wf-1") + + # Assert + assert result == ["sid-1", "sid-2"] diff --git a/api/tests/unit_tests/repositories/workflow_node_execution/__init__.py b/api/tests/unit_tests/repositories/workflow_node_execution/__init__.py deleted file mode 100644 index 78815a8d1a..0000000000 --- a/api/tests/unit_tests/repositories/workflow_node_execution/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -""" -Unit tests for workflow_node_execution repositories. -""" diff --git a/api/tests/unit_tests/repositories/workflow_node_execution/test_sqlalchemy_repository.py b/api/tests/unit_tests/repositories/workflow_node_execution/test_sqlalchemy_repository.py deleted file mode 100644 index 10850970d8..0000000000 --- a/api/tests/unit_tests/repositories/workflow_node_execution/test_sqlalchemy_repository.py +++ /dev/null @@ -1,340 +0,0 @@ -""" -Unit tests for the SQLAlchemy implementation of WorkflowNodeExecutionRepository. -""" - -import json -import uuid -from datetime import datetime -from decimal import Decimal -from unittest.mock import MagicMock, PropertyMock - -import pytest -from graphon.entities import ( - WorkflowNodeExecution, -) -from graphon.enums import ( - BuiltinNodeTypes, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) -from graphon.model_runtime.utils.encoders import jsonable_encoder -from pytest_mock import MockerFixture -from sqlalchemy.orm import Session, sessionmaker - -from core.repositories import SQLAlchemyWorkflowNodeExecutionRepository -from core.repositories.factory import OrderConfig -from models.account import Account, Tenant -from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom - - -def configure_mock_execution(mock_execution): - """Configure a mock execution with proper JSON serializable values.""" - # Configure inputs, outputs, process_data, and execution_metadata to return JSON serializable values - type(mock_execution).inputs = PropertyMock(return_value='{"key": "value"}') - type(mock_execution).outputs = PropertyMock(return_value='{"result": "success"}') - type(mock_execution).process_data = PropertyMock(return_value='{"process": "data"}') - type(mock_execution).execution_metadata = PropertyMock(return_value='{"metadata": "info"}') - - # Configure status and triggered_from to be valid enum values - mock_execution.status = "running" - mock_execution.triggered_from = "workflow-run" - - return mock_execution - - -@pytest.fixture -def session(): - """Create a mock SQLAlchemy session.""" - session = MagicMock(spec=Session) - # Configure the session to be used as a context manager - session.__enter__ = MagicMock(return_value=session) - session.__exit__ = MagicMock(return_value=None) - - # Configure the session factory to return the session - session_factory = MagicMock(spec=sessionmaker) - session_factory.return_value = session - return session, session_factory - - -@pytest.fixture -def mock_user(): - """Create a user instance for testing.""" - user = Account(name="test", email="test@example.com") - user.id = "test-user-id" - - tenant = Tenant(name="Test Workspace") - tenant.id = "test-tenant" - user._current_tenant = MagicMock() - user._current_tenant.id = "test-tenant" - - return user - - -@pytest.fixture -def repository(session, mock_user): - """Create a repository instance with test data.""" - _, session_factory = session - app_id = "test-app" - return SQLAlchemyWorkflowNodeExecutionRepository( - session_factory=session_factory, - user=mock_user, - app_id=app_id, - triggered_from=WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN, - ) - - -def test_save(repository, session): - """Test save method.""" - session_obj, _ = session - # Create a mock execution - execution = MagicMock(spec=WorkflowNodeExecution) - execution.id = "test-id" - execution.node_execution_id = "test-node-execution-id" - execution.tenant_id = None - execution.app_id = None - execution.inputs = None - execution.process_data = None - execution.outputs = None - execution.metadata = None - execution.workflow_id = str(uuid.uuid4()) - - # Mock the to_db_model method to return the execution itself - # This simulates the behavior of setting tenant_id and app_id - db_model = MagicMock(spec=WorkflowNodeExecutionModel) - db_model.id = "test-id" - db_model.node_execution_id = "test-node-execution-id" - repository._to_db_model = MagicMock(return_value=db_model) - - # Mock session.get to return None (no existing record) - session_obj.get.return_value = None - - # Call save method - repository.save(execution) - - # Assert to_db_model was called with the execution - repository._to_db_model.assert_called_once_with(execution) - - # Assert session.get was called to check for existing record - session_obj.get.assert_called_once_with(WorkflowNodeExecutionModel, db_model.id) - - # Assert session.add was called for new record - session_obj.add.assert_called_once_with(db_model) - - # Assert session.commit was called - session_obj.commit.assert_called_once() - - -def test_save_with_existing_tenant_id(repository, session): - """Test save method with existing tenant_id.""" - session_obj, _ = session - # Create a mock execution with existing tenant_id - execution = MagicMock(spec=WorkflowNodeExecutionModel) - execution.id = "existing-id" - execution.node_execution_id = "existing-node-execution-id" - execution.tenant_id = "existing-tenant" - execution.app_id = None - execution.inputs = None - execution.process_data = None - execution.outputs = None - execution.metadata = None - - # Create a modified execution that will be returned by _to_db_model - modified_execution = MagicMock(spec=WorkflowNodeExecutionModel) - modified_execution.id = "existing-id" - modified_execution.node_execution_id = "existing-node-execution-id" - modified_execution.tenant_id = "existing-tenant" # Tenant ID should not change - modified_execution.app_id = repository._app_id # App ID should be set - # Create a dictionary to simulate __dict__ for updating attributes - modified_execution.__dict__ = { - "id": "existing-id", - "node_execution_id": "existing-node-execution-id", - "tenant_id": "existing-tenant", - "app_id": repository._app_id, - } - - # Mock the to_db_model method to return the modified execution - repository._to_db_model = MagicMock(return_value=modified_execution) - - # Mock session.get to return an existing record - existing_model = MagicMock(spec=WorkflowNodeExecutionModel) - session_obj.get.return_value = existing_model - - # Call save method - repository.save(execution) - - # Assert to_db_model was called with the execution - repository._to_db_model.assert_called_once_with(execution) - - # Assert session.get was called to check for existing record - session_obj.get.assert_called_once_with(WorkflowNodeExecutionModel, modified_execution.id) - - # Assert session.add was NOT called since we're updating existing - session_obj.add.assert_not_called() - - # Assert session.commit was called - session_obj.commit.assert_called_once() - - -def test_get_by_workflow_execution(repository, session, mocker: MockerFixture): - """Test get_by_workflow_execution method.""" - session_obj, _ = session - # Set up mock - mock_select = mocker.patch("core.repositories.sqlalchemy_workflow_node_execution_repository.select") - mock_asc = mocker.patch("core.repositories.sqlalchemy_workflow_node_execution_repository.asc") - mock_desc = mocker.patch("core.repositories.sqlalchemy_workflow_node_execution_repository.desc") - - mock_WorkflowNodeExecutionModel = mocker.patch( - "core.repositories.sqlalchemy_workflow_node_execution_repository.WorkflowNodeExecutionModel" - ) - mock_stmt = mocker.MagicMock() - mock_select.return_value = mock_stmt - mock_stmt.where.return_value = mock_stmt - mock_stmt.order_by.return_value = mock_stmt - mock_asc.return_value = mock_stmt - mock_desc.return_value = mock_stmt - mock_WorkflowNodeExecutionModel.preload_offload_data_and_files.return_value = mock_stmt - - # Create a properly configured mock execution - mock_execution = mocker.MagicMock(spec=WorkflowNodeExecutionModel) - configure_mock_execution(mock_execution) - session_obj.scalars.return_value.all.return_value = [mock_execution] - - # Create a mock domain model to be returned by _to_domain_model - mock_domain_model = mocker.MagicMock() - # Mock the _to_domain_model method to return our mock domain model - repository._to_domain_model = mocker.MagicMock(return_value=mock_domain_model) - - # Call method - order_config = OrderConfig(order_by=["index"], order_direction="desc") - result = repository.get_by_workflow_execution( - workflow_execution_id="test-workflow-run-id", - order_config=order_config, - ) - - # Assert select was called with correct parameters - mock_select.assert_called_once() - session_obj.scalars.assert_called_once_with(mock_stmt) - mock_WorkflowNodeExecutionModel.preload_offload_data_and_files.assert_called_once_with(mock_stmt) - # Assert _to_domain_model was called with the mock execution - repository._to_domain_model.assert_called_once_with(mock_execution) - # Assert the result contains our mock domain model - assert len(result) == 1 - assert result[0] is mock_domain_model - - -def test_to_db_model(repository): - """Test to_db_model method.""" - # Create a domain model - domain_model = WorkflowNodeExecution( - id="test-id", - workflow_id="test-workflow-id", - node_execution_id="test-node-execution-id", - workflow_execution_id="test-workflow-run-id", - index=1, - predecessor_node_id="test-predecessor-id", - node_id="test-node-id", - node_type=BuiltinNodeTypes.START, - title="Test Node", - inputs={"input_key": "input_value"}, - process_data={"process_key": "process_value"}, - outputs={"output_key": "output_value"}, - status=WorkflowNodeExecutionStatus.RUNNING, - error=None, - elapsed_time=1.5, - metadata={ - WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS: 100, - WorkflowNodeExecutionMetadataKey.TOTAL_PRICE: Decimal("0.0"), - }, - created_at=datetime.now(), - finished_at=None, - ) - - # Convert to DB model - db_model = repository._to_db_model(domain_model) - - # Assert DB model has correct values - assert isinstance(db_model, WorkflowNodeExecutionModel) - assert db_model.id == domain_model.id - assert db_model.tenant_id == repository._tenant_id - assert db_model.app_id == repository._app_id - assert db_model.workflow_id == domain_model.workflow_id - assert db_model.triggered_from == repository._triggered_from - assert db_model.workflow_run_id == domain_model.workflow_execution_id - assert db_model.index == domain_model.index - assert db_model.predecessor_node_id == domain_model.predecessor_node_id - assert db_model.node_execution_id == domain_model.node_execution_id - assert db_model.node_id == domain_model.node_id - assert db_model.node_type == domain_model.node_type - assert db_model.title == domain_model.title - - assert db_model.inputs_dict == domain_model.inputs - assert db_model.process_data_dict == domain_model.process_data - assert db_model.outputs_dict == domain_model.outputs - assert db_model.execution_metadata_dict == jsonable_encoder(domain_model.metadata) - - assert db_model.status == domain_model.status - assert db_model.error == domain_model.error - assert db_model.elapsed_time == domain_model.elapsed_time - assert db_model.created_at == domain_model.created_at - assert db_model.created_by_role == repository._creator_user_role - assert db_model.created_by == repository._creator_user_id - assert db_model.finished_at == domain_model.finished_at - - -def test_to_domain_model(repository): - """Test _to_domain_model method.""" - # Create input dictionaries - inputs_dict = {"input_key": "input_value"} - process_data_dict = {"process_key": "process_value"} - outputs_dict = {"output_key": "output_value"} - metadata_dict = {str(WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS): 100} - - # Create a DB model using our custom subclass - db_model = WorkflowNodeExecutionModel() - db_model.id = "test-id" - db_model.tenant_id = "test-tenant-id" - db_model.app_id = "test-app-id" - db_model.workflow_id = "test-workflow-id" - db_model.triggered_from = "workflow-run" - db_model.workflow_run_id = "test-workflow-run-id" - db_model.index = 1 - db_model.predecessor_node_id = "test-predecessor-id" - db_model.node_execution_id = "test-node-execution-id" - db_model.node_id = "test-node-id" - db_model.node_type = BuiltinNodeTypes.START - db_model.title = "Test Node" - db_model.inputs = json.dumps(inputs_dict) - db_model.process_data = json.dumps(process_data_dict) - db_model.outputs = json.dumps(outputs_dict) - db_model.status = WorkflowNodeExecutionStatus.RUNNING - db_model.error = None - db_model.elapsed_time = 1.5 - db_model.execution_metadata = json.dumps(metadata_dict) - db_model.created_at = datetime.now() - db_model.created_by_role = "account" - db_model.created_by = "test-user-id" - db_model.finished_at = None - - # Convert to domain model - domain_model = repository._to_domain_model(db_model) - - # Assert domain model has correct values - assert isinstance(domain_model, WorkflowNodeExecution) - assert domain_model.id == db_model.id - assert domain_model.workflow_id == db_model.workflow_id - assert domain_model.workflow_execution_id == db_model.workflow_run_id - assert domain_model.index == db_model.index - assert domain_model.predecessor_node_id == db_model.predecessor_node_id - assert domain_model.node_execution_id == db_model.node_execution_id - assert domain_model.node_id == db_model.node_id - assert domain_model.node_type == db_model.node_type - assert domain_model.title == db_model.title - assert domain_model.inputs == inputs_dict - assert domain_model.process_data == process_data_dict - assert domain_model.outputs == outputs_dict - assert domain_model.status == WorkflowNodeExecutionStatus(db_model.status) - assert domain_model.error == db_model.error - assert domain_model.elapsed_time == db_model.elapsed_time - assert domain_model.metadata == metadata_dict - assert domain_model.created_at == db_model.created_at - assert domain_model.finished_at == db_model.finished_at diff --git a/api/tests/unit_tests/repositories/workflow_node_execution/test_sqlalchemy_workflow_node_execution_repository.py b/api/tests/unit_tests/repositories/workflow_node_execution/test_sqlalchemy_workflow_node_execution_repository.py deleted file mode 100644 index 2322be9e80..0000000000 --- a/api/tests/unit_tests/repositories/workflow_node_execution/test_sqlalchemy_workflow_node_execution_repository.py +++ /dev/null @@ -1,106 +0,0 @@ -""" -Unit tests for SQLAlchemyWorkflowNodeExecutionRepository, focusing on process_data truncation functionality. -""" - -from datetime import datetime -from typing import Any -from unittest.mock import MagicMock, Mock - -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes -from sqlalchemy.orm import sessionmaker - -from core.repositories.sqlalchemy_workflow_node_execution_repository import ( - SQLAlchemyWorkflowNodeExecutionRepository, -) -from models import Account, WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom - - -class TestSQLAlchemyWorkflowNodeExecutionRepositoryProcessData: - """Test process_data truncation functionality in SQLAlchemyWorkflowNodeExecutionRepository.""" - - def create_mock_account(self) -> Account: - """Create a mock Account for testing.""" - account = Mock(spec=Account) - account.id = "test-user-id" - account.tenant_id = "test-tenant-id" - return account - - def create_mock_session_factory(self) -> sessionmaker: - """Create a mock session factory for testing.""" - mock_session = MagicMock() - mock_session_factory = MagicMock(spec=sessionmaker) - mock_session_factory.return_value.__enter__.return_value = mock_session - mock_session_factory.return_value.__exit__.return_value = None - return mock_session_factory - - def create_repository(self, mock_file_service=None) -> SQLAlchemyWorkflowNodeExecutionRepository: - """Create a repository instance for testing.""" - mock_account = self.create_mock_account() - mock_session_factory = self.create_mock_session_factory() - - repository = SQLAlchemyWorkflowNodeExecutionRepository( - session_factory=mock_session_factory, - user=mock_account, - app_id="test-app-id", - triggered_from=WorkflowNodeExecutionTriggeredFrom.WORKFLOW_RUN, - ) - - if mock_file_service: - repository._file_service = mock_file_service - - return repository - - def create_workflow_node_execution( - self, - process_data: dict[str, Any] | None = None, - execution_id: str = "test-execution-id", - ) -> WorkflowNodeExecution: - """Create a WorkflowNodeExecution instance for testing.""" - return WorkflowNodeExecution( - id=execution_id, - workflow_id="test-workflow-id", - index=1, - node_id="test-node-id", - node_type=BuiltinNodeTypes.LLM, - title="Test Node", - process_data=process_data, - created_at=datetime.now(), - ) - - def test_to_domain_model_without_offload_data(self): - """Test _to_domain_model without offload data.""" - repository = self.create_repository() - - # Create mock database model without offload data - db_model = Mock(spec=WorkflowNodeExecutionModel) - db_model.id = "test-execution-id" - db_model.node_execution_id = "test-node-execution-id" - db_model.workflow_id = "test-workflow-id" - db_model.workflow_run_id = None - db_model.index = 1 - db_model.predecessor_node_id = None - db_model.node_id = "test-node-id" - db_model.node_type = "llm" - db_model.title = "Test Node" - db_model.status = "succeeded" - db_model.error = None - db_model.elapsed_time = 1.5 - db_model.created_at = datetime.now() - db_model.finished_at = None - - process_data = {"normal": "data"} - db_model.process_data_dict = process_data - db_model.inputs_dict = None - db_model.outputs_dict = None - db_model.execution_metadata_dict = {} - db_model.offload_data = None - - domain_model = repository._to_domain_model(db_model) - - # Domain model should have the data from database - assert domain_model.process_data == process_data - - # Should not be truncated - assert domain_model.process_data_truncated is False - assert domain_model.get_truncated_process_data() is None diff --git a/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py b/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py index 4b5a97bf3f..b31af996ae 100644 --- a/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py +++ b/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py @@ -4,6 +4,7 @@ import importlib.util import sys from pathlib import Path from types import ModuleType +from typing import Any from unittest.mock import MagicMock import httpx @@ -30,8 +31,8 @@ def jina_module() -> ModuleType: return module -def _credentials(api_key: str | None = "test_api_key_123", auth_type: str = "bearer") -> dict: - config: dict = {} if api_key is None else {"api_key": api_key} +def _credentials(api_key: str | None = "test_api_key_123", auth_type: str = "bearer") -> dict[str, Any]: + config: dict[str, Any] = {} if api_key is None else {"api_key": api_key} return {"auth_type": auth_type, "config": config} @@ -47,7 +48,7 @@ def test_init_rejects_invalid_auth_type(jina_module: ModuleType) -> None: @pytest.mark.parametrize("credentials", [{"auth_type": "bearer", "config": {}}, {"auth_type": "bearer"}]) -def test_init_requires_api_key(jina_module: ModuleType, credentials: dict) -> None: +def test_init_requires_api_key(jina_module: ModuleType, credentials: dict[str, Any]) -> None: with pytest.raises(ValueError, match="No API key provided"): jina_module.JinaAuth(credentials) diff --git a/api/tests/unit_tests/services/dataset_metadata.py b/api/tests/unit_tests/services/dataset_metadata.py deleted file mode 100644 index b825a8686a..0000000000 --- a/api/tests/unit_tests/services/dataset_metadata.py +++ /dev/null @@ -1,1014 +0,0 @@ -""" -Comprehensive unit tests for MetadataService. - -This module contains extensive unit tests for the MetadataService class, -which handles dataset metadata CRUD operations and filtering/querying functionality. - -The MetadataService provides methods for: -- Creating, reading, updating, and deleting metadata fields -- Managing built-in metadata fields -- Updating document metadata values -- Metadata filtering and querying operations -- Lock management for concurrent metadata operations - -Metadata in Dify allows users to add custom fields to datasets and documents, -enabling rich filtering and search capabilities. Metadata can be of various -types (string, number, date, boolean, etc.) and can be used to categorize -and filter documents within a dataset. - -This test suite ensures: -- Correct creation of metadata fields with validation -- Proper updating of metadata names and values -- Accurate deletion of metadata fields -- Built-in field management (enable/disable) -- Document metadata updates (partial and full) -- Lock management for concurrent operations -- Metadata querying and filtering functionality - -================================================================================ -ARCHITECTURE OVERVIEW -================================================================================ - -The MetadataService is a critical component in the Dify platform's metadata -management system. It serves as the primary interface for all metadata-related -operations, including field definitions and document-level metadata values. - -Key Concepts: -1. DatasetMetadata: Defines a metadata field for a dataset. Each metadata - field has a name, type, and is associated with a specific dataset. - -2. DatasetMetadataBinding: Links metadata fields to documents. This allows - tracking which documents have which metadata fields assigned. - -3. Document Metadata: The actual metadata values stored on documents. This - is stored as a JSON object in the document's doc_metadata field. - -4. Built-in Fields: System-defined metadata fields that are automatically - available when enabled (document_name, uploader, upload_date, etc.). - -5. Lock Management: Redis-based locking to prevent concurrent metadata - operations that could cause data corruption. - -================================================================================ -TESTING STRATEGY -================================================================================ - -This test suite follows a comprehensive testing strategy that covers: - -1. CRUD Operations: - - Creating metadata fields with validation - - Reading/retrieving metadata fields - - Updating metadata field names - - Deleting metadata fields - -2. Built-in Field Management: - - Enabling built-in fields - - Disabling built-in fields - - Getting built-in field definitions - -3. Document Metadata Operations: - - Updating document metadata (partial and full) - - Managing metadata bindings - - Handling built-in field updates - -4. Lock Management: - - Acquiring locks for dataset operations - - Acquiring locks for document operations - - Handling lock conflicts - -5. Error Handling: - - Validation errors (name length, duplicates) - - Not found errors - - Lock conflict errors - -================================================================================ -""" - -from unittest.mock import Mock, patch - -import pytest - -from core.rag.index_processor.constant.built_in_field import BuiltInField -from models.dataset import Dataset, DatasetMetadata, DatasetMetadataBinding -from services.entities.knowledge_entities.knowledge_entities import ( - MetadataArgs, - MetadataValue, -) -from services.metadata_service import MetadataService - -# ============================================================================ -# Test Data Factory -# ============================================================================ -# The Test Data Factory pattern is used here to centralize the creation of -# test objects and mock instances. This approach provides several benefits: -# -# 1. Consistency: All test objects are created using the same factory methods, -# ensuring consistent structure across all tests. -# -# 2. Maintainability: If the structure of models changes, we only need to -# update the factory methods rather than every individual test. -# -# 3. Reusability: Factory methods can be reused across multiple test classes, -# reducing code duplication. -# -# 4. Readability: Tests become more readable when they use descriptive factory -# method calls instead of complex object construction logic. -# -# ============================================================================ - - -class MetadataTestDataFactory: - """ - Factory class for creating test data and mock objects for metadata service tests. - - This factory provides static methods to create mock objects for: - - DatasetMetadata instances - - DatasetMetadataBinding instances - - Dataset instances - - Document instances - - MetadataArgs and MetadataOperationData entities - - User and tenant context - - The factory methods help maintain consistency across tests and reduce - code duplication when setting up test scenarios. - """ - - @staticmethod - def create_metadata_mock( - metadata_id: str = "metadata-123", - dataset_id: str = "dataset-123", - tenant_id: str = "tenant-123", - name: str = "category", - metadata_type: str = "string", - created_by: str = "user-123", - **kwargs, - ) -> Mock: - """ - Create a mock DatasetMetadata with specified attributes. - - Args: - metadata_id: Unique identifier for the metadata field - dataset_id: ID of the dataset this metadata belongs to - tenant_id: Tenant identifier - name: Name of the metadata field - metadata_type: Type of metadata (string, number, date, etc.) - created_by: ID of the user who created the metadata - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a DatasetMetadata instance - """ - metadata = Mock(spec=DatasetMetadata) - metadata.id = metadata_id - metadata.dataset_id = dataset_id - metadata.tenant_id = tenant_id - metadata.name = name - metadata.type = metadata_type - metadata.created_by = created_by - metadata.updated_by = None - metadata.updated_at = None - for key, value in kwargs.items(): - setattr(metadata, key, value) - return metadata - - @staticmethod - def create_metadata_binding_mock( - binding_id: str = "binding-123", - dataset_id: str = "dataset-123", - tenant_id: str = "tenant-123", - metadata_id: str = "metadata-123", - document_id: str = "document-123", - created_by: str = "user-123", - **kwargs, - ) -> Mock: - """ - Create a mock DatasetMetadataBinding with specified attributes. - - Args: - binding_id: Unique identifier for the binding - dataset_id: ID of the dataset - tenant_id: Tenant identifier - metadata_id: ID of the metadata field - document_id: ID of the document - created_by: ID of the user who created the binding - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a DatasetMetadataBinding instance - """ - binding = Mock(spec=DatasetMetadataBinding) - binding.id = binding_id - binding.dataset_id = dataset_id - binding.tenant_id = tenant_id - binding.metadata_id = metadata_id - binding.document_id = document_id - binding.created_by = created_by - for key, value in kwargs.items(): - setattr(binding, key, value) - return binding - - @staticmethod - def create_dataset_mock( - dataset_id: str = "dataset-123", - tenant_id: str = "tenant-123", - built_in_field_enabled: bool = False, - doc_metadata: list | None = None, - **kwargs, - ) -> Mock: - """ - Create a mock Dataset with specified attributes. - - Args: - dataset_id: Unique identifier for the dataset - tenant_id: Tenant identifier - built_in_field_enabled: Whether built-in fields are enabled - doc_metadata: List of metadata field definitions - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a Dataset instance - """ - dataset = Mock(spec=Dataset) - dataset.id = dataset_id - dataset.tenant_id = tenant_id - dataset.built_in_field_enabled = built_in_field_enabled - dataset.doc_metadata = doc_metadata or [] - for key, value in kwargs.items(): - setattr(dataset, key, value) - return dataset - - @staticmethod - def create_document_mock( - document_id: str = "document-123", - dataset_id: str = "dataset-123", - name: str = "Test Document", - doc_metadata: dict | None = None, - uploader: str = "user-123", - data_source_type: str = "upload_file", - **kwargs, - ) -> Mock: - """ - Create a mock Document with specified attributes. - - Args: - document_id: Unique identifier for the document - dataset_id: ID of the dataset this document belongs to - name: Name of the document - doc_metadata: Dictionary of metadata values - uploader: ID of the user who uploaded the document - data_source_type: Type of data source - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a Document instance - """ - document = Mock() - document.id = document_id - document.dataset_id = dataset_id - document.name = name - document.doc_metadata = doc_metadata or {} - document.uploader = uploader - document.data_source_type = data_source_type - - # Mock datetime objects for upload_date and last_update_date - - document.upload_date = Mock() - document.upload_date.timestamp.return_value = 1234567890.0 - document.last_update_date = Mock() - document.last_update_date.timestamp.return_value = 1234567890.0 - - for key, value in kwargs.items(): - setattr(document, key, value) - return document - - @staticmethod - def create_metadata_args_mock( - name: str = "category", - metadata_type: str = "string", - ) -> Mock: - """ - Create a mock MetadataArgs entity. - - Args: - name: Name of the metadata field - metadata_type: Type of metadata - - Returns: - Mock object configured as a MetadataArgs instance - """ - metadata_args = Mock(spec=MetadataArgs) - metadata_args.name = name - metadata_args.type = metadata_type - return metadata_args - - @staticmethod - def create_metadata_value_mock( - metadata_id: str = "metadata-123", - name: str = "category", - value: str = "test", - ) -> Mock: - """ - Create a mock MetadataValue entity. - - Args: - metadata_id: ID of the metadata field - name: Name of the metadata field - value: Value of the metadata - - Returns: - Mock object configured as a MetadataValue instance - """ - metadata_value = Mock(spec=MetadataValue) - metadata_value.id = metadata_id - metadata_value.name = name - metadata_value.value = value - return metadata_value - - -# ============================================================================ -# Tests for create_metadata -# ============================================================================ - - -class TestMetadataServiceCreateMetadata: - """ - Comprehensive unit tests for MetadataService.create_metadata method. - - This test class covers the metadata field creation functionality, - including validation, duplicate checking, and database operations. - - The create_metadata method: - 1. Validates metadata name length (max 255 characters) - 2. Checks for duplicate metadata names within the dataset - 3. Checks for conflicts with built-in field names - 4. Creates a new DatasetMetadata instance - 5. Adds it to the database session and commits - 6. Returns the created metadata - - Test scenarios include: - - Successful creation with valid data - - Name length validation - - Duplicate name detection - - Built-in field name conflicts - - Database transaction handling - """ - - @pytest.fixture - def mock_db_session(self): - """ - Mock database session for testing database operations. - - Provides a mocked database session that can be used to verify: - - Query construction and execution - - Add operations for new metadata - - Commit operations for transaction completion - """ - with patch("services.metadata_service.db.session") as mock_db: - yield mock_db - - @pytest.fixture - def mock_current_user(self): - """ - Mock current user and tenant context. - - Provides mocked current_account_with_tenant function that returns - a user and tenant ID for testing authentication and authorization. - """ - with patch("services.metadata_service.current_account_with_tenant") as mock_get_user: - mock_user = Mock() - mock_user.id = "user-123" - mock_tenant_id = "tenant-123" - mock_get_user.return_value = (mock_user, mock_tenant_id) - yield mock_get_user - - def test_create_metadata_success(self, mock_db_session, mock_current_user): - """ - Test successful creation of a metadata field. - - Verifies that when all validation passes, a new metadata field - is created and persisted to the database. - - This test ensures: - - Metadata name validation passes - - No duplicate name exists - - No built-in field conflict - - New metadata is added to database - - Transaction is committed - - Created metadata is returned - """ - # Arrange - dataset_id = "dataset-123" - metadata_args = MetadataTestDataFactory.create_metadata_args_mock(name="category", metadata_type="string") - - # Mock query to return None (no existing metadata with same name) - mock_db_session.scalar.return_value = None - - # Mock BuiltInField enum iteration - with patch("services.metadata_service.BuiltInField") as mock_builtin: - mock_builtin.__iter__ = Mock(return_value=iter([])) - - # Act - result = MetadataService.create_metadata(dataset_id, metadata_args) - - # Assert - assert result is not None - assert isinstance(result, DatasetMetadata) - - # Verify metadata was added and committed - mock_db_session.add.assert_called_once() - mock_db_session.commit.assert_called_once() - - def test_create_metadata_name_too_long_error(self, mock_db_session, mock_current_user): - """ - Test error handling when metadata name exceeds 255 characters. - - Verifies that when a metadata name is longer than 255 characters, - a ValueError is raised with an appropriate message. - - This test ensures: - - Name length validation is enforced - - Error message is clear and descriptive - - No database operations are performed - """ - # Arrange - dataset_id = "dataset-123" - long_name = "a" * 256 # 256 characters (exceeds limit) - metadata_args = MetadataTestDataFactory.create_metadata_args_mock(name=long_name, metadata_type="string") - - # Act & Assert - with pytest.raises(ValueError, match="Metadata name cannot exceed 255 characters"): - MetadataService.create_metadata(dataset_id, metadata_args) - - # Verify no database operations were performed - mock_db_session.add.assert_not_called() - mock_db_session.commit.assert_not_called() - - def test_create_metadata_duplicate_name_error(self, mock_db_session, mock_current_user): - """ - Test error handling when metadata name already exists. - - Verifies that when a metadata field with the same name already exists - in the dataset, a ValueError is raised. - - This test ensures: - - Duplicate name detection works correctly - - Error message is clear - - No new metadata is created - """ - # Arrange - dataset_id = "dataset-123" - metadata_args = MetadataTestDataFactory.create_metadata_args_mock(name="category", metadata_type="string") - - # Mock existing metadata with same name - existing_metadata = MetadataTestDataFactory.create_metadata_mock(name="category") - mock_db_session.scalar.return_value = existing_metadata - - # Act & Assert - with pytest.raises(ValueError, match="Metadata name already exists"): - MetadataService.create_metadata(dataset_id, metadata_args) - - # Verify no new metadata was added - mock_db_session.add.assert_not_called() - mock_db_session.commit.assert_not_called() - - def test_create_metadata_builtin_field_conflict_error(self, mock_db_session, mock_current_user): - """ - Test error handling when metadata name conflicts with built-in field. - - Verifies that when a metadata name matches a built-in field name, - a ValueError is raised. - - This test ensures: - - Built-in field name conflicts are detected - - Error message is clear - - No new metadata is created - """ - # Arrange - dataset_id = "dataset-123" - metadata_args = MetadataTestDataFactory.create_metadata_args_mock( - name=BuiltInField.document_name, metadata_type="string" - ) - - # Mock query to return None (no duplicate in database) - mock_db_session.scalar.return_value = None - - # Mock BuiltInField to include the conflicting name - with patch("services.metadata_service.BuiltInField") as mock_builtin: - mock_field = Mock() - mock_field.value = BuiltInField.document_name - mock_builtin.__iter__ = Mock(return_value=iter([mock_field])) - - # Act & Assert - with pytest.raises(ValueError, match="Metadata name already exists in Built-in fields"): - MetadataService.create_metadata(dataset_id, metadata_args) - - # Verify no new metadata was added - mock_db_session.add.assert_not_called() - mock_db_session.commit.assert_not_called() - - -# ============================================================================ -# Tests for update_metadata_name -# ============================================================================ - - -class TestMetadataServiceUpdateMetadataName: - """ - Comprehensive unit tests for MetadataService.update_metadata_name method. - - This test class covers the metadata field name update functionality, - including validation, duplicate checking, and document metadata updates. - - The update_metadata_name method: - 1. Validates new name length (max 255 characters) - 2. Checks for duplicate names - 3. Checks for built-in field conflicts - 4. Acquires a lock for the dataset - 5. Updates the metadata name - 6. Updates all related document metadata - 7. Releases the lock - 8. Returns the updated metadata - - Test scenarios include: - - Successful name update - - Name length validation - - Duplicate name detection - - Built-in field conflicts - - Lock management - - Document metadata updates - """ - - @pytest.fixture - def mock_db_session(self): - """Mock database session for testing.""" - with patch("services.metadata_service.db.session") as mock_db: - yield mock_db - - @pytest.fixture - def mock_current_user(self): - """Mock current user and tenant context.""" - with patch("services.metadata_service.current_account_with_tenant") as mock_get_user: - mock_user = Mock() - mock_user.id = "user-123" - mock_tenant_id = "tenant-123" - mock_get_user.return_value = (mock_user, mock_tenant_id) - yield mock_get_user - - @pytest.fixture - def mock_redis_client(self): - """Mock Redis client for lock management.""" - with patch("services.metadata_service.redis_client") as mock_redis: - mock_redis.get.return_value = None # No existing lock - mock_redis.set.return_value = True - mock_redis.delete.return_value = True - yield mock_redis - - def test_update_metadata_name_success(self, mock_db_session, mock_current_user, mock_redis_client): - """ - Test successful update of metadata field name. - - Verifies that when all validation passes, the metadata name is - updated and all related document metadata is updated accordingly. - - This test ensures: - - Name validation passes - - Lock is acquired and released - - Metadata name is updated - - Related document metadata is updated - - Transaction is committed - """ - # Arrange - dataset_id = "dataset-123" - metadata_id = "metadata-123" - new_name = "updated_category" - - existing_metadata = MetadataTestDataFactory.create_metadata_mock(metadata_id=metadata_id, name="category") - - # Mock scalar calls: first for duplicate check (None), second for metadata retrieval - mock_db_session.scalar.side_effect = [None, existing_metadata] - - # Mock no metadata bindings (no documents to update) - mock_db_session.scalars.return_value.all.return_value = [] - - # Mock BuiltInField enum - with patch("services.metadata_service.BuiltInField") as mock_builtin: - mock_builtin.__iter__ = Mock(return_value=iter([])) - - # Act - result = MetadataService.update_metadata_name(dataset_id, metadata_id, new_name) - - # Assert - assert result is not None - assert result.name == new_name - - # Verify lock was acquired and released - mock_redis_client.get.assert_called() - mock_redis_client.set.assert_called() - mock_redis_client.delete.assert_called() - - # Verify metadata was updated and committed - mock_db_session.commit.assert_called() - - def test_update_metadata_name_not_found_error(self, mock_db_session, mock_current_user, mock_redis_client): - """ - Test error handling when metadata is not found. - - Verifies that when the metadata ID doesn't exist, a ValueError - is raised with an appropriate message. - - This test ensures: - - Not found error is handled correctly - - Lock is properly released even on error - - No updates are committed - """ - # Arrange - dataset_id = "dataset-123" - metadata_id = "non-existent-metadata" - new_name = "updated_category" - - # Mock scalar calls: first for duplicate check (None), second for metadata retrieval (None = not found) - mock_db_session.scalar.side_effect = [None, None] - - # Mock BuiltInField enum - with patch("services.metadata_service.BuiltInField") as mock_builtin: - mock_builtin.__iter__ = Mock(return_value=iter([])) - - # Act & Assert - with pytest.raises(ValueError, match="Metadata not found"): - MetadataService.update_metadata_name(dataset_id, metadata_id, new_name) - - # Verify lock was released - mock_redis_client.delete.assert_called() - - -# ============================================================================ -# Tests for delete_metadata -# ============================================================================ - - -class TestMetadataServiceDeleteMetadata: - """ - Comprehensive unit tests for MetadataService.delete_metadata method. - - This test class covers the metadata field deletion functionality, - including document metadata cleanup and lock management. - - The delete_metadata method: - 1. Acquires a lock for the dataset - 2. Retrieves the metadata to delete - 3. Deletes the metadata from the database - 4. Removes metadata from all related documents - 5. Releases the lock - 6. Returns the deleted metadata - - Test scenarios include: - - Successful deletion - - Not found error handling - - Document metadata cleanup - - Lock management - """ - - @pytest.fixture - def mock_db_session(self): - """Mock database session for testing.""" - with patch("services.metadata_service.db.session") as mock_db: - yield mock_db - - @pytest.fixture - def mock_redis_client(self): - """Mock Redis client for lock management.""" - with patch("services.metadata_service.redis_client") as mock_redis: - mock_redis.get.return_value = None - mock_redis.set.return_value = True - mock_redis.delete.return_value = True - yield mock_redis - - def test_delete_metadata_success(self, mock_db_session, mock_redis_client): - """ - Test successful deletion of a metadata field. - - Verifies that when the metadata exists, it is deleted and all - related document metadata is cleaned up. - - This test ensures: - - Lock is acquired and released - - Metadata is deleted from database - - Related document metadata is removed - - Transaction is committed - """ - # Arrange - dataset_id = "dataset-123" - metadata_id = "metadata-123" - - existing_metadata = MetadataTestDataFactory.create_metadata_mock(metadata_id=metadata_id, name="category") - - # Mock metadata retrieval - mock_db_session.scalar.return_value = existing_metadata - - # Mock no metadata bindings (no documents to update) - mock_db_session.scalars.return_value.all.return_value = [] - - # Act - result = MetadataService.delete_metadata(dataset_id, metadata_id) - - # Assert - assert result == existing_metadata - - # Verify lock was acquired and released - mock_redis_client.get.assert_called() - mock_redis_client.set.assert_called() - mock_redis_client.delete.assert_called() - - # Verify metadata was deleted and committed - mock_db_session.delete.assert_called_once_with(existing_metadata) - mock_db_session.commit.assert_called() - - def test_delete_metadata_not_found_error(self, mock_db_session, mock_redis_client): - """ - Test error handling when metadata is not found. - - Verifies that when the metadata ID doesn't exist, a ValueError - is raised and the lock is properly released. - - This test ensures: - - Not found error is handled correctly - - Lock is released even on error - - No deletion is performed - """ - # Arrange - dataset_id = "dataset-123" - metadata_id = "non-existent-metadata" - - # Mock metadata retrieval to return None - mock_db_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ValueError, match="Metadata not found"): - MetadataService.delete_metadata(dataset_id, metadata_id) - - # Verify lock was released - mock_redis_client.delete.assert_called() - - # Verify no deletion was performed - mock_db_session.delete.assert_not_called() - - -# ============================================================================ -# Tests for get_built_in_fields -# ============================================================================ - - -class TestMetadataServiceGetBuiltInFields: - """ - Comprehensive unit tests for MetadataService.get_built_in_fields method. - - This test class covers the built-in field retrieval functionality. - - The get_built_in_fields method: - 1. Returns a list of built-in field definitions - 2. Each definition includes name and type - - Test scenarios include: - - Successful retrieval of built-in fields - - Correct field definitions - """ - - def test_get_built_in_fields_success(self): - """ - Test successful retrieval of built-in fields. - - Verifies that the method returns the correct list of built-in - field definitions with proper structure. - - This test ensures: - - All built-in fields are returned - - Each field has name and type - - Field definitions are correct - """ - # Act - result = MetadataService.get_built_in_fields() - - # Assert - assert isinstance(result, list) - assert len(result) > 0 - - # Verify each field has required properties - for field in result: - assert "name" in field - assert "type" in field - assert isinstance(field["name"], str) - assert isinstance(field["type"], str) - - # Verify specific built-in fields are present - field_names = [field["name"] for field in result] - assert BuiltInField.document_name in field_names - assert BuiltInField.uploader in field_names - - -# ============================================================================ -# Tests for knowledge_base_metadata_lock_check -# ============================================================================ - - -class TestMetadataServiceLockCheck: - """ - Comprehensive unit tests for MetadataService.knowledge_base_metadata_lock_check method. - - This test class covers the lock management functionality for preventing - concurrent metadata operations. - - The knowledge_base_metadata_lock_check method: - 1. Checks if a lock exists for the dataset or document - 2. Raises ValueError if lock exists (operation in progress) - 3. Sets a lock with expiration time (3600 seconds) - 4. Supports both dataset-level and document-level locks - - Test scenarios include: - - Successful lock acquisition - - Lock conflict detection - - Dataset-level locks - - Document-level locks - """ - - @pytest.fixture - def mock_redis_client(self): - """Mock Redis client for lock management.""" - with patch("services.metadata_service.redis_client") as mock_redis: - yield mock_redis - - def test_lock_check_dataset_success(self, mock_redis_client): - """ - Test successful lock acquisition for dataset operations. - - Verifies that when no lock exists, a new lock is acquired - for the dataset. - - This test ensures: - - Lock check passes when no lock exists - - Lock is set with correct key and expiration - - No error is raised - """ - # Arrange - dataset_id = "dataset-123" - mock_redis_client.get.return_value = None # No existing lock - - # Act (should not raise) - MetadataService.knowledge_base_metadata_lock_check(dataset_id, None) - - # Assert - mock_redis_client.get.assert_called_once_with(f"dataset_metadata_lock_{dataset_id}") - mock_redis_client.set.assert_called_once_with(f"dataset_metadata_lock_{dataset_id}", 1, ex=3600) - - def test_lock_check_dataset_conflict_error(self, mock_redis_client): - """ - Test error handling when dataset lock already exists. - - Verifies that when a lock exists for the dataset, a ValueError - is raised with an appropriate message. - - This test ensures: - - Lock conflict is detected - - Error message is clear - - No new lock is set - """ - # Arrange - dataset_id = "dataset-123" - mock_redis_client.get.return_value = "1" # Lock exists - - # Act & Assert - with pytest.raises(ValueError, match="Another knowledge base metadata operation is running"): - MetadataService.knowledge_base_metadata_lock_check(dataset_id, None) - - # Verify lock was checked but not set - mock_redis_client.get.assert_called_once() - mock_redis_client.set.assert_not_called() - - def test_lock_check_document_success(self, mock_redis_client): - """ - Test successful lock acquisition for document operations. - - Verifies that when no lock exists, a new lock is acquired - for the document. - - This test ensures: - - Lock check passes when no lock exists - - Lock is set with correct key and expiration - - No error is raised - """ - # Arrange - document_id = "document-123" - mock_redis_client.get.return_value = None # No existing lock - - # Act (should not raise) - MetadataService.knowledge_base_metadata_lock_check(None, document_id) - - # Assert - mock_redis_client.get.assert_called_once_with(f"document_metadata_lock_{document_id}") - mock_redis_client.set.assert_called_once_with(f"document_metadata_lock_{document_id}", 1, ex=3600) - - -# ============================================================================ -# Tests for get_dataset_metadatas -# ============================================================================ - - -class TestMetadataServiceGetDatasetMetadatas: - """ - Comprehensive unit tests for MetadataService.get_dataset_metadatas method. - - This test class covers the metadata retrieval functionality for datasets. - - The get_dataset_metadatas method: - 1. Retrieves all metadata fields for a dataset - 2. Excludes built-in fields from the list - 3. Includes usage count for each metadata field - 4. Returns built-in field enabled status - - Test scenarios include: - - Successful retrieval with metadata fields - - Empty metadata list - - Built-in field filtering - - Usage count calculation - """ - - @pytest.fixture - def mock_db_session(self): - """Mock database session for testing.""" - with patch("services.metadata_service.db.session") as mock_db: - yield mock_db - - def test_get_dataset_metadatas_success(self, mock_db_session): - """ - Test successful retrieval of dataset metadata fields. - - Verifies that all metadata fields are returned with correct - structure and usage counts. - - This test ensures: - - All metadata fields are included - - Built-in fields are excluded - - Usage counts are calculated correctly - - Built-in field status is included - """ - # Arrange - dataset = MetadataTestDataFactory.create_dataset_mock( - dataset_id="dataset-123", - built_in_field_enabled=True, - doc_metadata=[ - {"id": "metadata-1", "name": "category", "type": "string"}, - {"id": "metadata-2", "name": "priority", "type": "number"}, - {"id": "built-in", "name": "document_name", "type": "string"}, - ], - ) - - # Mock usage count queries - mock_db_session.scalar.return_value = 5 # 5 documents use this metadata - - # Act - result = MetadataService.get_dataset_metadatas(dataset) - - # Assert - assert "doc_metadata" in result - assert "built_in_field_enabled" in result - assert result["built_in_field_enabled"] is True - - # Verify built-in fields are excluded - metadata_ids = [meta["id"] for meta in result["doc_metadata"]] - assert "built-in" not in metadata_ids - - # Verify all custom metadata fields are included - assert len(result["doc_metadata"]) == 2 - - # Verify usage counts are included - for meta in result["doc_metadata"]: - assert "count" in meta - assert meta["count"] == 5 - - -# ============================================================================ -# Additional Documentation and Notes -# ============================================================================ -# -# This test suite covers the core metadata CRUD operations and basic -# filtering functionality. Additional test scenarios that could be added: -# -# 1. enable_built_in_field / disable_built_in_field: -# - Testing built-in field enablement -# - Testing built-in field disablement -# - Testing document metadata updates when enabling/disabling -# -# 2. update_documents_metadata: -# - Testing partial updates -# - Testing full updates -# - Testing metadata binding creation -# - Testing built-in field updates -# -# 3. Metadata Filtering and Querying: -# - Testing metadata-based document filtering -# - Testing complex metadata queries -# - Testing metadata value retrieval -# -# These scenarios are not currently implemented but could be added if needed -# based on real-world usage patterns or discovered edge cases. -# -# ============================================================================ diff --git a/api/tests/unit_tests/services/dataset_permission_service.py b/api/tests/unit_tests/services/dataset_permission_service.py deleted file mode 100644 index e098e90455..0000000000 --- a/api/tests/unit_tests/services/dataset_permission_service.py +++ /dev/null @@ -1,825 +0,0 @@ -""" -Comprehensive unit tests for DatasetPermissionService and DatasetService permission methods. - -This module contains extensive unit tests for dataset permission management, -including partial member list operations, permission validation, and permission -enum handling. - -The DatasetPermissionService provides methods for: -- Retrieving partial member permissions (get_dataset_partial_member_list) -- Updating partial member lists (update_partial_member_list) -- Validating permissions before operations (check_permission) -- Clearing partial member lists (clear_partial_member_list) - -The DatasetService provides permission checking methods: -- check_dataset_permission - validates user access to dataset -- check_dataset_operator_permission - validates operator permissions - -These operations are critical for dataset access control and security, ensuring -that users can only access datasets they have permission to view or modify. - -This test suite ensures: -- Correct retrieval of partial member lists -- Proper update of partial member permissions -- Accurate permission validation logic -- Proper handling of permission enums (only_me, all_team_members, partial_members) -- Security boundaries are maintained -- Error conditions are handled correctly - -================================================================================ -ARCHITECTURE OVERVIEW -================================================================================ - -The Dataset permission system is a multi-layered access control mechanism -that provides fine-grained control over who can access and modify datasets. - -1. Permission Levels: - - only_me: Only the dataset creator can access - - all_team_members: All members of the tenant can access - - partial_members: Only specific users listed in DatasetPermission can access - -2. Permission Storage: - - Dataset.permission: Stores the permission level enum - - DatasetPermission: Stores individual user permissions for partial_members - - Each DatasetPermission record links a dataset to a user account - -3. Permission Validation: - - Tenant-level checks: Users must be in the same tenant - - Role-based checks: OWNER role bypasses some restrictions - - Explicit permission checks: For partial_members, explicit DatasetPermission - records are required - -4. Permission Operations: - - Partial member list management: Add/remove users from partial access - - Permission validation: Check before allowing operations - - Permission clearing: Remove all partial members when changing permission level - -================================================================================ -TESTING STRATEGY -================================================================================ - -This test suite follows a comprehensive testing strategy that covers: - -1. Partial Member List Operations: - - Retrieving member lists - - Adding new members - - Updating existing members - - Removing members - - Empty list handling - -2. Permission Validation: - - Dataset editor permissions - - Dataset operator restrictions - - Permission enum validation - - Partial member list validation - - Tenant isolation - -3. Permission Enum Handling: - - only_me permission behavior - - all_team_members permission behavior - - partial_members permission behavior - - Permission transitions - - Edge cases for each enum value - -4. Security and Access Control: - - Tenant boundary enforcement - - Role-based access control - - Creator privilege validation - - Explicit permission requirement - -5. Error Handling: - - Invalid permission changes - - Missing required data - - Database transaction failures - - Permission denial scenarios - -================================================================================ -""" - -from unittest.mock import Mock, create_autospec, patch - -import pytest - -from models import Account, TenantAccountRole -from models.dataset import ( - Dataset, - DatasetPermission, - DatasetPermissionEnum, -) -from services.dataset_service import DatasetPermissionService, DatasetService -from services.errors.account import NoPermissionError - -# ============================================================================ -# Test Data Factory -# ============================================================================ -# The Test Data Factory pattern is used here to centralize the creation of -# test objects and mock instances. This approach provides several benefits: -# -# 1. Consistency: All test objects are created using the same factory methods, -# ensuring consistent structure across all tests. -# -# 2. Maintainability: If the structure of models or services changes, we only -# need to update the factory methods rather than every individual test. -# -# 3. Reusability: Factory methods can be reused across multiple test classes, -# reducing code duplication. -# -# 4. Readability: Tests become more readable when they use descriptive factory -# method calls instead of complex object construction logic. -# -# ============================================================================ - - -class DatasetPermissionTestDataFactory: - """ - Factory class for creating test data and mock objects for dataset permission tests. - - This factory provides static methods to create mock objects for: - - Dataset instances with various permission configurations - - User/Account instances with different roles and permissions - - DatasetPermission instances - - Permission enum values - - Database query results - - The factory methods help maintain consistency across tests and reduce - code duplication when setting up test scenarios. - """ - - @staticmethod - def create_dataset_mock( - dataset_id: str = "dataset-123", - tenant_id: str = "tenant-123", - permission: DatasetPermissionEnum = DatasetPermissionEnum.ONLY_ME, - created_by: str = "user-123", - name: str = "Test Dataset", - **kwargs, - ) -> Mock: - """ - Create a mock Dataset with specified attributes. - - Args: - dataset_id: Unique identifier for the dataset - tenant_id: Tenant identifier - permission: Permission level enum - created_by: ID of user who created the dataset - name: Dataset name - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a Dataset instance - """ - dataset = Mock(spec=Dataset) - dataset.id = dataset_id - dataset.tenant_id = tenant_id - dataset.permission = permission - dataset.created_by = created_by - dataset.name = name - for key, value in kwargs.items(): - setattr(dataset, key, value) - return dataset - - @staticmethod - def create_user_mock( - user_id: str = "user-123", - tenant_id: str = "tenant-123", - role: TenantAccountRole = TenantAccountRole.NORMAL, - is_dataset_editor: bool = True, - is_dataset_operator: bool = False, - **kwargs, - ) -> Mock: - """ - Create a mock user (Account) with specified attributes. - - Args: - user_id: Unique identifier for the user - tenant_id: Tenant identifier - role: User role (OWNER, ADMIN, NORMAL, DATASET_OPERATOR, etc.) - is_dataset_editor: Whether user has dataset editor permissions - is_dataset_operator: Whether user is a dataset operator - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as an Account instance - """ - user = create_autospec(Account, instance=True) - user.id = user_id - user.current_tenant_id = tenant_id - user.current_role = role - user.is_dataset_editor = is_dataset_editor - user.is_dataset_operator = is_dataset_operator - for key, value in kwargs.items(): - setattr(user, key, value) - return user - - @staticmethod - def create_dataset_permission_mock( - permission_id: str = "permission-123", - dataset_id: str = "dataset-123", - account_id: str = "user-456", - tenant_id: str = "tenant-123", - has_permission: bool = True, - **kwargs, - ) -> Mock: - """ - Create a mock DatasetPermission instance. - - Args: - permission_id: Unique identifier for the permission - dataset_id: Dataset ID - account_id: User account ID - tenant_id: Tenant identifier - has_permission: Whether permission is granted - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a DatasetPermission instance - """ - permission = Mock(spec=DatasetPermission) - permission.id = permission_id - permission.dataset_id = dataset_id - permission.account_id = account_id - permission.tenant_id = tenant_id - permission.has_permission = has_permission - for key, value in kwargs.items(): - setattr(permission, key, value) - return permission - - @staticmethod - def create_user_list_mock(user_ids: list[str]) -> list[dict[str, str]]: - """ - Create a list of user dictionaries for partial member list operations. - - Args: - user_ids: List of user IDs to include - - Returns: - List of user dictionaries with "user_id" keys - """ - return [{"user_id": user_id} for user_id in user_ids] - - -# ============================================================================ -# Tests for check_permission -# ============================================================================ - - -class TestDatasetPermissionServiceCheckPermission: - """ - Comprehensive unit tests for DatasetPermissionService.check_permission method. - - This test class covers the permission validation logic that ensures - users have the appropriate permissions to modify dataset permissions. - - The check_permission method: - 1. Validates user is a dataset editor - 2. Checks if dataset operator is trying to change permissions - 3. Validates partial member list when setting to partial_members - 4. Ensures dataset operators cannot change permission levels - 5. Ensures dataset operators cannot modify partial member lists - - Test scenarios include: - - Valid permission changes by dataset editors - - Dataset operator restrictions - - Partial member list validation - - Missing dataset editor permissions - - Invalid permission changes - """ - - @pytest.fixture - def mock_get_partial_member_list(self): - """ - Mock get_dataset_partial_member_list method. - - Provides a mocked version of the get_dataset_partial_member_list - method for testing permission validation logic. - """ - with patch.object(DatasetPermissionService, "get_dataset_partial_member_list") as mock_get_list: - yield mock_get_list - - def test_check_permission_dataset_editor_success(self, mock_get_partial_member_list): - """ - Test successful permission check for dataset editor. - - Verifies that when a dataset editor (not operator) tries to - change permissions, the check passes. - - This test ensures: - - Dataset editors can change permissions - - No errors are raised for valid changes - - Partial member list validation is skipped for non-operators - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(is_dataset_editor=True, is_dataset_operator=False) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.ONLY_ME) - requested_permission = DatasetPermissionEnum.ALL_TEAM - requested_partial_member_list = None - - # Act (should not raise) - DatasetPermissionService.check_permission(user, dataset, requested_permission, requested_partial_member_list) - - # Assert - # Verify get_partial_member_list was not called (not needed for non-operators) - mock_get_partial_member_list.assert_not_called() - - def test_check_permission_not_dataset_editor_error(self): - """ - Test error when user is not a dataset editor. - - Verifies that when a user without dataset editor permissions - tries to change permissions, a NoPermissionError is raised. - - This test ensures: - - Non-editors cannot change permissions - - Error message is clear - - Error type is correct - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(is_dataset_editor=False) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock() - requested_permission = DatasetPermissionEnum.ALL_TEAM - requested_partial_member_list = None - - # Act & Assert - with pytest.raises(NoPermissionError, match="User does not have permission to edit this dataset"): - DatasetPermissionService.check_permission( - user, dataset, requested_permission, requested_partial_member_list - ) - - def test_check_permission_operator_cannot_change_permission_error(self): - """ - Test error when dataset operator tries to change permission level. - - Verifies that when a dataset operator tries to change the permission - level, a NoPermissionError is raised. - - This test ensures: - - Dataset operators cannot change permission levels - - Error message is clear - - Current permission is preserved - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(is_dataset_editor=True, is_dataset_operator=True) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.ONLY_ME) - requested_permission = DatasetPermissionEnum.ALL_TEAM # Trying to change - requested_partial_member_list = None - - # Act & Assert - with pytest.raises(NoPermissionError, match="Dataset operators cannot change the dataset permissions"): - DatasetPermissionService.check_permission( - user, dataset, requested_permission, requested_partial_member_list - ) - - def test_check_permission_operator_partial_members_missing_list_error(self, mock_get_partial_member_list): - """ - Test error when operator sets partial_members without providing list. - - Verifies that when a dataset operator tries to set permission to - partial_members without providing a member list, a ValueError is raised. - - This test ensures: - - Partial member list is required for partial_members permission - - Error message is clear - - Error type is correct - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(is_dataset_editor=True, is_dataset_operator=True) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.PARTIAL_TEAM) - requested_permission = "partial_members" - requested_partial_member_list = None # Missing list - - # Act & Assert - with pytest.raises(ValueError, match="Partial member list is required when setting to partial members"): - DatasetPermissionService.check_permission( - user, dataset, requested_permission, requested_partial_member_list - ) - - def test_check_permission_operator_cannot_modify_partial_list_error(self, mock_get_partial_member_list): - """ - Test error when operator tries to modify partial member list. - - Verifies that when a dataset operator tries to change the partial - member list, a ValueError is raised. - - This test ensures: - - Dataset operators cannot modify partial member lists - - Error message is clear - - Current member list is preserved - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(is_dataset_editor=True, is_dataset_operator=True) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.PARTIAL_TEAM) - requested_permission = "partial_members" - - # Current member list - current_member_list = ["user-456", "user-789"] - mock_get_partial_member_list.return_value = current_member_list - - # Requested member list (different from current) - requested_partial_member_list = DatasetPermissionTestDataFactory.create_user_list_mock( - ["user-456", "user-999"] # Different list - ) - - # Act & Assert - with pytest.raises(ValueError, match="Dataset operators cannot change the dataset permissions"): - DatasetPermissionService.check_permission( - user, dataset, requested_permission, requested_partial_member_list - ) - - def test_check_permission_operator_can_keep_same_partial_list(self, mock_get_partial_member_list): - """ - Test that operator can keep the same partial member list. - - Verifies that when a dataset operator keeps the same partial member - list, the check passes. - - This test ensures: - - Operators can keep existing partial member lists - - No errors are raised for unchanged lists - - Permission validation works correctly - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(is_dataset_editor=True, is_dataset_operator=True) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.PARTIAL_TEAM) - requested_permission = "partial_members" - - # Current member list - current_member_list = ["user-456", "user-789"] - mock_get_partial_member_list.return_value = current_member_list - - # Requested member list (same as current) - requested_partial_member_list = DatasetPermissionTestDataFactory.create_user_list_mock( - ["user-456", "user-789"] # Same list - ) - - # Act (should not raise) - DatasetPermissionService.check_permission(user, dataset, requested_permission, requested_partial_member_list) - - # Assert - # Verify get_partial_member_list was called to compare lists - mock_get_partial_member_list.assert_called_once_with(dataset.id) - - -# ============================================================================ -# Tests for DatasetService.check_dataset_permission -# ============================================================================ - - -class TestDatasetServiceCheckDatasetPermission: - """ - Comprehensive unit tests for DatasetService.check_dataset_permission method. - - This test class covers the dataset permission checking logic that validates - whether a user has access to a dataset based on permission enums. - - The check_dataset_permission method: - 1. Validates tenant match - 2. Checks OWNER role (bypasses some restrictions) - 3. Validates only_me permission (creator only) - 4. Validates partial_members permission (explicit permission required) - 5. Validates all_team_members permission (all tenant members) - - Test scenarios include: - - Tenant boundary enforcement - - OWNER role bypass - - only_me permission validation - - partial_members permission validation - - all_team_members permission validation - - Permission denial scenarios - """ - - @pytest.fixture - def mock_db_session(self): - """ - Mock database session for testing. - - Provides a mocked database session that can be used to verify - database queries for permission checks. - """ - with patch("services.dataset_service.db.session") as mock_db: - yield mock_db - - def test_check_dataset_permission_owner_bypass(self, mock_db_session): - """ - Test that OWNER role bypasses permission checks. - - Verifies that when a user has OWNER role, they can access any - dataset in their tenant regardless of permission level. - - This test ensures: - - OWNER role bypasses permission restrictions - - No database queries are needed for OWNER - - Access is granted automatically - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(role=TenantAccountRole.OWNER, tenant_id="tenant-123") - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ONLY_ME, - created_by="other-user-123", # Not the current user - ) - - # Act (should not raise) - DatasetService.check_dataset_permission(dataset, user) - - # Assert - # Verify no permission queries were made (OWNER bypasses) - mock_db_session.query.assert_not_called() - - def test_check_dataset_permission_tenant_mismatch_error(self): - """ - Test error when user and dataset are in different tenants. - - Verifies that when a user tries to access a dataset from a different - tenant, a NoPermissionError is raised. - - This test ensures: - - Tenant boundary is enforced - - Error message is clear - - Error type is correct - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(tenant_id="tenant-123") - dataset = DatasetPermissionTestDataFactory.create_dataset_mock(tenant_id="tenant-456") # Different tenant - - # Act & Assert - with pytest.raises(NoPermissionError, match="You do not have permission to access this dataset"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_only_me_creator_success(self): - """ - Test that creator can access only_me dataset. - - Verifies that when a user is the creator of an only_me dataset, - they can access it successfully. - - This test ensures: - - Creators can access their own only_me datasets - - No explicit permission record is needed - - Access is granted correctly - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(user_id="user-123", role=TenantAccountRole.NORMAL) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ONLY_ME, - created_by="user-123", # User is the creator - ) - - # Act (should not raise) - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_only_me_non_creator_error(self): - """ - Test error when non-creator tries to access only_me dataset. - - Verifies that when a user who is not the creator tries to access - an only_me dataset, a NoPermissionError is raised. - - This test ensures: - - Non-creators cannot access only_me datasets - - Error message is clear - - Error type is correct - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(user_id="user-123", role=TenantAccountRole.NORMAL) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ONLY_ME, - created_by="other-user-456", # Different creator - ) - - # Act & Assert - with pytest.raises(NoPermissionError, match="You do not have permission to access this dataset"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_partial_members_creator_success(self, mock_db_session): - """ - Test that creator can access partial_members dataset without explicit permission. - - Verifies that when a user is the creator of a partial_members dataset, - they can access it even without an explicit DatasetPermission record. - - This test ensures: - - Creators can access their own datasets - - No explicit permission record is needed for creators - - Access is granted correctly - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(user_id="user-123", role=TenantAccountRole.NORMAL) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="user-123", # User is the creator - ) - - # Act (should not raise) - DatasetService.check_dataset_permission(dataset, user) - - # Assert - # Verify permission query was not executed (creator bypasses) - mock_db_session.query.assert_not_called() - - def test_check_dataset_permission_all_team_members_success(self): - """ - Test that any tenant member can access all_team_members dataset. - - Verifies that when a dataset has all_team_members permission, any - user in the same tenant can access it. - - This test ensures: - - All team members can access - - No explicit permission record is needed - - Access is granted correctly - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(user_id="user-123", role=TenantAccountRole.NORMAL) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ALL_TEAM, - created_by="other-user-456", # Not the creator - ) - - # Act (should not raise) - DatasetService.check_dataset_permission(dataset, user) - - -# ============================================================================ -# Tests for DatasetService.check_dataset_operator_permission -# ============================================================================ - - -class TestDatasetServiceCheckDatasetOperatorPermission: - """ - Comprehensive unit tests for DatasetService.check_dataset_operator_permission method. - - This test class covers the dataset operator permission checking logic, - which validates whether a dataset operator has access to a dataset. - - The check_dataset_operator_permission method: - 1. Validates dataset exists - 2. Validates user exists - 3. Checks OWNER role (bypasses restrictions) - 4. Validates only_me permission (creator only) - 5. Validates partial_members permission (explicit permission required) - - Test scenarios include: - - Dataset not found error - - User not found error - - OWNER role bypass - - only_me permission validation - - partial_members permission validation - - Permission denial scenarios - """ - - @pytest.fixture - def mock_db_session(self): - """ - Mock database session for testing. - - Provides a mocked database session that can be used to verify - database queries for permission checks. - """ - with patch("services.dataset_service.db.session") as mock_db: - yield mock_db - - def test_check_dataset_operator_permission_dataset_not_found_error(self): - """ - Test error when dataset is None. - - Verifies that when dataset is None, a ValueError is raised. - - This test ensures: - - Dataset existence is validated - - Error message is clear - - Error type is correct - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock() - dataset = None - - # Act & Assert - with pytest.raises(ValueError, match="Dataset not found"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_check_dataset_operator_permission_user_not_found_error(self): - """ - Test error when user is None. - - Verifies that when user is None, a ValueError is raised. - - This test ensures: - - User existence is validated - - Error message is clear - - Error type is correct - """ - # Arrange - user = None - dataset = DatasetPermissionTestDataFactory.create_dataset_mock() - - # Act & Assert - with pytest.raises(ValueError, match="User not found"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_check_dataset_operator_permission_owner_bypass(self): - """ - Test that OWNER role bypasses permission checks. - - Verifies that when a user has OWNER role, they can access any - dataset in their tenant regardless of permission level. - - This test ensures: - - OWNER role bypasses permission restrictions - - No database queries are needed for OWNER - - Access is granted automatically - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(role=TenantAccountRole.OWNER, tenant_id="tenant-123") - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ONLY_ME, - created_by="other-user-123", # Not the current user - ) - - # Act (should not raise) - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_check_dataset_operator_permission_only_me_creator_success(self): - """ - Test that creator can access only_me dataset. - - Verifies that when a user is the creator of an only_me dataset, - they can access it successfully. - - This test ensures: - - Creators can access their own only_me datasets - - No explicit permission record is needed - - Access is granted correctly - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(user_id="user-123", role=TenantAccountRole.NORMAL) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ONLY_ME, - created_by="user-123", # User is the creator - ) - - # Act (should not raise) - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_check_dataset_operator_permission_only_me_non_creator_error(self): - """ - Test error when non-creator tries to access only_me dataset. - - Verifies that when a user who is not the creator tries to access - an only_me dataset, a NoPermissionError is raised. - - This test ensures: - - Non-creators cannot access only_me datasets - - Error message is clear - - Error type is correct - """ - # Arrange - user = DatasetPermissionTestDataFactory.create_user_mock(user_id="user-123", role=TenantAccountRole.NORMAL) - dataset = DatasetPermissionTestDataFactory.create_dataset_mock( - tenant_id="tenant-123", - permission=DatasetPermissionEnum.ONLY_ME, - created_by="other-user-456", # Different creator - ) - - # Act & Assert - with pytest.raises(NoPermissionError, match="You do not have permission to access this dataset"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - -# ============================================================================ -# Additional Documentation and Notes -# ============================================================================ -# -# This test suite covers the core permission management operations for datasets. -# Additional test scenarios that could be added: -# -# 1. Permission Enum Transitions: -# - Testing transitions between permission levels -# - Testing validation during transitions -# - Testing partial member list updates during transitions -# -# 2. Bulk Operations: -# - Testing bulk permission updates -# - Testing bulk partial member list updates -# - Testing performance with large member lists -# -# 3. Edge Cases: -# - Testing with very large partial member lists -# - Testing with special characters in user IDs -# - Testing with deleted users -# - Testing with inactive permissions -# -# 4. Integration Scenarios: -# - Testing permission changes followed by access attempts -# - Testing concurrent permission updates -# - Testing permission inheritance -# -# These scenarios are not currently implemented but could be added if needed -# based on real-world usage patterns or discovered edge cases. -# -# ============================================================================ diff --git a/api/tests/unit_tests/services/dataset_service_test_helpers.py b/api/tests/unit_tests/services/dataset_service_test_helpers.py index da557de8a4..3349c1fd8c 100644 --- a/api/tests/unit_tests/services/dataset_service_test_helpers.py +++ b/api/tests/unit_tests/services/dataset_service_test_helpers.py @@ -7,10 +7,10 @@ document, and segment service test modules that exercise import json from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, create_autospec, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from werkzeug.exceptions import Forbidden, NotFound from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError @@ -19,6 +19,7 @@ from core.rag.index_processor.constant.built_in_field import BuiltInField from core.rag.index_processor.constant.index_type import IndexStructureType from core.rag.retrieval.retrieval_methods import RetrievalMethod from enums.cloud_plan import CloudPlan +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from models import Account, TenantAccountRole from models.dataset import ( ChildChunk, @@ -166,7 +167,7 @@ class DatasetServiceUnitDataFactory: built_in_field_enabled: bool = False, doc_form: str | None = "text_model", enable_api: bool = False, - summary_index_setting: dict | None = None, + summary_index_setting: dict[str, Any] | None = None, **kwargs, ) -> Mock: dataset = Mock(spec=Dataset) @@ -214,12 +215,12 @@ class DatasetServiceUnitDataFactory: archived: bool = False, enabled: bool = True, data_source_type: str = "upload_file", - data_source_info_dict: dict | None = None, + data_source_info_dict: dict[str, Any] | None = None, data_source_info: str | None = None, doc_form: str = "text_model", need_summary: bool = True, position: int = 0, - doc_metadata: dict | None = None, + doc_metadata: dict[str, Any] | None = None, name: str = "Document", **kwargs, ) -> Mock: diff --git a/api/tests/unit_tests/services/dataset_service_update_delete.py b/api/tests/unit_tests/services/dataset_service_update_delete.py deleted file mode 100644 index 62c39f96d3..0000000000 --- a/api/tests/unit_tests/services/dataset_service_update_delete.py +++ /dev/null @@ -1,818 +0,0 @@ -""" -Comprehensive unit tests for DatasetService update and delete operations. - -This module contains extensive unit tests for the DatasetService class, -specifically focusing on update and delete operations for datasets. - -The DatasetService provides methods for: -- Updating dataset configuration and settings (update_dataset) -- Deleting datasets with proper cleanup (delete_dataset) -- Updating RAG pipeline dataset settings (update_rag_pipeline_dataset_settings) -- Checking if dataset is in use (dataset_use_check) -- Updating dataset API access status (update_dataset_api_status) - -These operations are critical for dataset lifecycle management and require -careful handling of permissions, dependencies, and data integrity. - -This test suite ensures: -- Correct update of dataset properties -- Proper permission validation before updates/deletes -- Cascade deletion handling -- Event signaling for cleanup operations -- RAG pipeline dataset configuration updates -- API status management -- Use check validation - -================================================================================ -ARCHITECTURE OVERVIEW -================================================================================ - -The DatasetService update and delete operations are part of the dataset -lifecycle management system. These operations interact with multiple -components: - -1. Permission System: All update/delete operations require proper - permission validation to ensure users can only modify datasets they - have access to. - -2. Event System: Dataset deletion triggers the dataset_was_deleted event, - which notifies other components to clean up related data (documents, - segments, vector indices, etc.). - -3. Dependency Checking: Before deletion, the system checks if the dataset - is in use by any applications (via AppDatasetJoin). - -4. RAG Pipeline Integration: RAG pipeline datasets have special update - logic that handles chunk structure, indexing techniques, and embedding - model configuration. - -5. API Status Management: Datasets can have their API access enabled or - disabled, which affects whether they can be accessed via the API. - -================================================================================ -TESTING STRATEGY -================================================================================ - -This test suite follows a comprehensive testing strategy that covers: - -1. Update Operations: - - Internal dataset updates - - External dataset updates - - RAG pipeline dataset updates - - Permission validation - - Name duplicate checking - - Configuration validation - -2. Delete Operations: - - Successful deletion - - Permission validation - - Event signaling - - Database cleanup - - Not found handling - -3. Use Check Operations: - - Dataset in use detection - - Dataset not in use detection - - AppDatasetJoin query validation - -4. API Status Operations: - - Enable API access - - Disable API access - - Permission validation - - Current user validation - -5. RAG Pipeline Operations: - - Unpublished dataset updates - - Published dataset updates - - Chunk structure validation - - Indexing technique changes - - Embedding model configuration - -================================================================================ -""" - -import datetime -from unittest.mock import Mock, create_autospec, patch - -import pytest -from sqlalchemy.orm import Session - -from core.rag.index_processor.constant.index_type import IndexTechniqueType -from models import Account, TenantAccountRole -from models.dataset import ( - AppDatasetJoin, - Dataset, - DatasetPermissionEnum, -) -from services.dataset_service import DatasetService -from services.errors.account import NoPermissionError - -# ============================================================================ -# Test Data Factory -# ============================================================================ -# The Test Data Factory pattern is used here to centralize the creation of -# test objects and mock instances. This approach provides several benefits: -# -# 1. Consistency: All test objects are created using the same factory methods, -# ensuring consistent structure across all tests. -# -# 2. Maintainability: If the structure of models or services changes, we only -# need to update the factory methods rather than every individual test. -# -# 3. Reusability: Factory methods can be reused across multiple test classes, -# reducing code duplication. -# -# 4. Readability: Tests become more readable when they use descriptive factory -# method calls instead of complex object construction logic. -# -# ============================================================================ - - -class DatasetUpdateDeleteTestDataFactory: - """ - Factory class for creating test data and mock objects for dataset update/delete tests. - - This factory provides static methods to create mock objects for: - - Dataset instances with various configurations - - User/Account instances with different roles - - Knowledge configuration objects - - Database session mocks - - Event signal mocks - - The factory methods help maintain consistency across tests and reduce - code duplication when setting up test scenarios. - """ - - @staticmethod - def create_dataset_mock( - dataset_id: str = "dataset-123", - provider: str = "vendor", - name: str = "Test Dataset", - description: str = "Test description", - tenant_id: str = "tenant-123", - indexing_technique: str = IndexTechniqueType.HIGH_QUALITY, - embedding_model_provider: str | None = "openai", - embedding_model: str | None = "text-embedding-ada-002", - collection_binding_id: str | None = "binding-123", - enable_api: bool = True, - permission: DatasetPermissionEnum = DatasetPermissionEnum.ONLY_ME, - created_by: str = "user-123", - chunk_structure: str | None = None, - runtime_mode: str = "general", - **kwargs, - ) -> Mock: - """ - Create a mock Dataset with specified attributes. - - Args: - dataset_id: Unique identifier for the dataset - provider: Dataset provider (vendor, external) - name: Dataset name - description: Dataset description - tenant_id: Tenant identifier - indexing_technique: Indexing technique (high_quality, economy) - embedding_model_provider: Embedding model provider - embedding_model: Embedding model name - collection_binding_id: Collection binding ID - enable_api: Whether API access is enabled - permission: Dataset permission level - created_by: ID of user who created the dataset - chunk_structure: Chunk structure for RAG pipeline datasets - runtime_mode: Runtime mode (general, rag_pipeline) - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a Dataset instance - """ - dataset = Mock(spec=Dataset) - dataset.id = dataset_id - dataset.provider = provider - dataset.name = name - dataset.description = description - dataset.tenant_id = tenant_id - dataset.indexing_technique = indexing_technique - dataset.embedding_model_provider = embedding_model_provider - dataset.embedding_model = embedding_model - dataset.collection_binding_id = collection_binding_id - dataset.enable_api = enable_api - dataset.permission = permission - dataset.created_by = created_by - dataset.chunk_structure = chunk_structure - dataset.runtime_mode = runtime_mode - dataset.retrieval_model = {} - dataset.keyword_number = 10 - for key, value in kwargs.items(): - setattr(dataset, key, value) - return dataset - - @staticmethod - def create_user_mock( - user_id: str = "user-123", - tenant_id: str = "tenant-123", - role: TenantAccountRole = TenantAccountRole.NORMAL, - is_dataset_editor: bool = True, - **kwargs, - ) -> Mock: - """ - Create a mock user (Account) with specified attributes. - - Args: - user_id: Unique identifier for the user - tenant_id: Tenant identifier - role: User role (OWNER, ADMIN, NORMAL, etc.) - is_dataset_editor: Whether user has dataset editor permissions - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as an Account instance - """ - user = create_autospec(Account, instance=True) - user.id = user_id - user.current_tenant_id = tenant_id - user.current_role = role - user.is_dataset_editor = is_dataset_editor - for key, value in kwargs.items(): - setattr(user, key, value) - return user - - @staticmethod - def create_knowledge_configuration_mock( - chunk_structure: str = "tree", - indexing_technique: str = IndexTechniqueType.HIGH_QUALITY, - embedding_model_provider: str = "openai", - embedding_model: str = "text-embedding-ada-002", - keyword_number: int = 10, - retrieval_model: dict | None = None, - **kwargs, - ) -> Mock: - """ - Create a mock KnowledgeConfiguration entity. - - Args: - chunk_structure: Chunk structure type - indexing_technique: Indexing technique - embedding_model_provider: Embedding model provider - embedding_model: Embedding model name - keyword_number: Keyword number for economy indexing - retrieval_model: Retrieval model configuration - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as a KnowledgeConfiguration instance - """ - config = Mock() - config.chunk_structure = chunk_structure - config.indexing_technique = indexing_technique - config.embedding_model_provider = embedding_model_provider - config.embedding_model = embedding_model - config.keyword_number = keyword_number - config.retrieval_model = Mock() - config.retrieval_model.model_dump.return_value = retrieval_model or { - "search_method": "semantic_search", - "top_k": 2, - } - for key, value in kwargs.items(): - setattr(config, key, value) - return config - - @staticmethod - def create_app_dataset_join_mock( - app_id: str = "app-123", - dataset_id: str = "dataset-123", - **kwargs, - ) -> Mock: - """ - Create a mock AppDatasetJoin instance. - - Args: - app_id: Application ID - dataset_id: Dataset ID - **kwargs: Additional attributes to set on the mock - - Returns: - Mock object configured as an AppDatasetJoin instance - """ - join = Mock(spec=AppDatasetJoin) - join.app_id = app_id - join.dataset_id = dataset_id - for key, value in kwargs.items(): - setattr(join, key, value) - return join - - -# ============================================================================ -# Tests for update_dataset -# ============================================================================ - - -class TestDatasetServiceUpdateDataset: - """ - Comprehensive unit tests for DatasetService.update_dataset method. - - This test class covers the dataset update functionality, including - internal and external dataset updates, permission validation, and - name duplicate checking. - - The update_dataset method: - 1. Retrieves the dataset by ID - 2. Validates dataset exists - 3. Checks for duplicate names - 4. Validates user permissions - 5. Routes to appropriate update handler (internal or external) - 6. Returns the updated dataset - - Test scenarios include: - - Successful internal dataset updates - - Successful external dataset updates - - Permission validation - - Duplicate name detection - - Dataset not found errors - """ - - @pytest.fixture - def mock_dataset_service_dependencies(self): - """ - Mock dataset service dependencies for testing. - - Provides mocked dependencies including: - - get_dataset method - - check_dataset_permission method - - _has_dataset_same_name method - - Database session - - Current time utilities - """ - with ( - patch("services.dataset_service.DatasetService.get_dataset") as mock_get_dataset, - patch("services.dataset_service.DatasetService.check_dataset_permission") as mock_check_perm, - patch("services.dataset_service.DatasetService._has_dataset_same_name") as mock_has_same_name, - patch("extensions.ext_database.db.session") as mock_db, - patch("services.dataset_service.naive_utc_now") as mock_naive_utc_now, - ): - current_time = datetime.datetime(2023, 1, 1, 12, 0, 0) - mock_naive_utc_now.return_value = current_time - - yield { - "get_dataset": mock_get_dataset, - "check_permission": mock_check_perm, - "has_same_name": mock_has_same_name, - "db_session": mock_db, - "naive_utc_now": mock_naive_utc_now, - "current_time": current_time, - } - - def test_update_dataset_internal_success(self, mock_dataset_service_dependencies): - """ - Test successful update of an internal dataset. - - Verifies that when all validation passes, an internal dataset - is updated correctly through the _update_internal_dataset method. - - This test ensures: - - Dataset is retrieved correctly - - Permission is checked - - Name duplicate check is performed - - Internal update handler is called - - Updated dataset is returned - """ - # Arrange - dataset_id = "dataset-123" - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock( - dataset_id=dataset_id, provider="vendor", name="Old Name" - ) - user = DatasetUpdateDeleteTestDataFactory.create_user_mock() - - update_data = { - "name": "New Name", - "description": "New Description", - } - - mock_dataset_service_dependencies["get_dataset"].return_value = dataset - mock_dataset_service_dependencies["has_same_name"].return_value = False - - with patch("services.dataset_service.DatasetService._update_internal_dataset") as mock_update_internal: - mock_update_internal.return_value = dataset - - # Act - result = DatasetService.update_dataset(dataset_id, update_data, user) - - # Assert - assert result == dataset - - # Verify dataset was retrieved - mock_dataset_service_dependencies["get_dataset"].assert_called_once_with(dataset_id) - - # Verify permission was checked - mock_dataset_service_dependencies["check_permission"].assert_called_once_with(dataset, user) - - # Verify name duplicate check was performed - mock_dataset_service_dependencies["has_same_name"].assert_called_once() - - # Verify internal update handler was called - mock_update_internal.assert_called_once() - - def test_update_dataset_external_success(self, mock_dataset_service_dependencies): - """ - Test successful update of an external dataset. - - Verifies that when all validation passes, an external dataset - is updated correctly through the _update_external_dataset method. - - This test ensures: - - Dataset is retrieved correctly - - Permission is checked - - Name duplicate check is performed - - External update handler is called - - Updated dataset is returned - """ - # Arrange - dataset_id = "dataset-123" - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock( - dataset_id=dataset_id, provider="external", name="Old Name" - ) - user = DatasetUpdateDeleteTestDataFactory.create_user_mock() - - update_data = { - "name": "New Name", - "external_knowledge_id": "new-knowledge-id", - } - - mock_dataset_service_dependencies["get_dataset"].return_value = dataset - mock_dataset_service_dependencies["has_same_name"].return_value = False - - with patch("services.dataset_service.DatasetService._update_external_dataset") as mock_update_external: - mock_update_external.return_value = dataset - - # Act - result = DatasetService.update_dataset(dataset_id, update_data, user) - - # Assert - assert result == dataset - - # Verify external update handler was called - mock_update_external.assert_called_once() - - def test_update_dataset_not_found_error(self, mock_dataset_service_dependencies): - """ - Test error handling when dataset is not found. - - Verifies that when the dataset ID doesn't exist, a ValueError - is raised with an appropriate message. - - This test ensures: - - Dataset not found error is handled correctly - - No update operations are performed - - Error message is clear - """ - # Arrange - dataset_id = "non-existent-dataset" - user = DatasetUpdateDeleteTestDataFactory.create_user_mock() - - update_data = {"name": "New Name"} - - mock_dataset_service_dependencies["get_dataset"].return_value = None - - # Act & Assert - with pytest.raises(ValueError, match="Dataset not found"): - DatasetService.update_dataset(dataset_id, update_data, user) - - # Verify no update operations were attempted - mock_dataset_service_dependencies["check_permission"].assert_not_called() - mock_dataset_service_dependencies["has_same_name"].assert_not_called() - - def test_update_dataset_duplicate_name_error(self, mock_dataset_service_dependencies): - """ - Test error handling when dataset name already exists. - - Verifies that when a dataset with the same name already exists - in the tenant, a ValueError is raised. - - This test ensures: - - Duplicate name detection works correctly - - Error message is clear - - No update operations are performed - """ - # Arrange - dataset_id = "dataset-123" - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock(dataset_id=dataset_id) - user = DatasetUpdateDeleteTestDataFactory.create_user_mock() - - update_data = {"name": "Existing Name"} - - mock_dataset_service_dependencies["get_dataset"].return_value = dataset - mock_dataset_service_dependencies["has_same_name"].return_value = True # Duplicate exists - - # Act & Assert - with pytest.raises(ValueError, match="Dataset name already exists"): - DatasetService.update_dataset(dataset_id, update_data, user) - - # Verify permission check was not called (fails before that) - mock_dataset_service_dependencies["check_permission"].assert_not_called() - - def test_update_dataset_permission_denied_error(self, mock_dataset_service_dependencies): - """ - Test error handling when user lacks permission. - - Verifies that when the user doesn't have permission to update - the dataset, a NoPermissionError is raised. - - This test ensures: - - Permission validation works correctly - - Error is raised before any updates - - Error type is correct - """ - # Arrange - dataset_id = "dataset-123" - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock(dataset_id=dataset_id) - user = DatasetUpdateDeleteTestDataFactory.create_user_mock() - - update_data = {"name": "New Name"} - - mock_dataset_service_dependencies["get_dataset"].return_value = dataset - mock_dataset_service_dependencies["has_same_name"].return_value = False - mock_dataset_service_dependencies["check_permission"].side_effect = NoPermissionError("No permission") - - # Act & Assert - with pytest.raises(NoPermissionError): - DatasetService.update_dataset(dataset_id, update_data, user) - - -# ============================================================================ -# Tests for update_rag_pipeline_dataset_settings -# ============================================================================ - - -class TestDatasetServiceUpdateRagPipelineDatasetSettings: - """ - Comprehensive unit tests for DatasetService.update_rag_pipeline_dataset_settings method. - - This test class covers the RAG pipeline dataset settings update functionality, - including chunk structure, indexing technique, and embedding model configuration. - - The update_rag_pipeline_dataset_settings method: - 1. Validates current_user and tenant - 2. Merges dataset into session - 3. Handles unpublished vs published datasets differently - 4. Updates chunk structure, indexing technique, and retrieval model - 5. Configures embedding model for high_quality indexing - 6. Updates keyword_number for economy indexing - 7. Commits transaction - 8. Triggers index update tasks if needed - - Test scenarios include: - - Unpublished dataset updates - - Published dataset updates - - Chunk structure validation - - Indexing technique changes - - Embedding model configuration - - Error handling - """ - - @pytest.fixture - def mock_session(self): - """ - Mock database session for testing. - - Provides a mocked SQLAlchemy session for testing session operations. - """ - return Mock(spec=Session) - - @pytest.fixture - def mock_dataset_service_dependencies(self): - """ - Mock dataset service dependencies for testing. - - Provides mocked dependencies including: - - current_user context - - ModelManager - - DatasetCollectionBindingService - - Database session operations - - Task scheduling - """ - with ( - patch( - "services.dataset_service.current_user", create_autospec(Account, instance=True) - ) as mock_current_user, - patch("services.dataset_service.ModelManager.for_tenant") as mock_model_manager, - patch( - "services.dataset_service.DatasetCollectionBindingService.get_dataset_collection_binding" - ) as mock_get_binding, - patch("services.dataset_service.deal_dataset_index_update_task") as mock_task, - ): - mock_current_user.current_tenant_id = "tenant-123" - mock_current_user.id = "user-123" - - yield { - "current_user": mock_current_user, - "model_manager": mock_model_manager, - "get_binding": mock_get_binding, - "task": mock_task, - } - - def test_update_rag_pipeline_dataset_settings_unpublished_success( - self, mock_session, mock_dataset_service_dependencies - ): - """ - Test successful update of unpublished RAG pipeline dataset. - - Verifies that when a dataset is not published, all settings can - be updated including chunk structure and indexing technique. - - This test ensures: - - Current user validation passes - - Dataset is merged into session - - Chunk structure is updated - - Indexing technique is updated - - Embedding model is configured for high_quality - - Retrieval model is updated - - Dataset is added to session - """ - # Arrange - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock( - dataset_id="dataset-123", - runtime_mode="rag_pipeline", - chunk_structure="tree", - indexing_technique=IndexTechniqueType.HIGH_QUALITY, - ) - - knowledge_config = DatasetUpdateDeleteTestDataFactory.create_knowledge_configuration_mock( - chunk_structure="list", - indexing_technique=IndexTechniqueType.HIGH_QUALITY, - embedding_model_provider="openai", - embedding_model="text-embedding-ada-002", - ) - - # Mock embedding model - mock_embedding_model = Mock() - mock_embedding_model.model_name = "text-embedding-ada-002" - mock_embedding_model.provider = "openai" - mock_embedding_model.credentials = {} - - mock_model_schema = Mock() - mock_model_schema.features = [] - - mock_text_embedding_model = Mock() - mock_text_embedding_model.get_model_schema.return_value = mock_model_schema - mock_embedding_model.model_type_instance = mock_text_embedding_model - - mock_model_instance = Mock() - mock_model_instance.get_model_instance.return_value = mock_embedding_model - mock_dataset_service_dependencies["model_manager"].return_value = mock_model_instance - - # Mock collection binding - mock_binding = Mock() - mock_binding.id = "binding-123" - mock_dataset_service_dependencies["get_binding"].return_value = mock_binding - - mock_session.merge.return_value = dataset - - # Act - DatasetService.update_rag_pipeline_dataset_settings( - mock_session, dataset, knowledge_config, has_published=False - ) - - # Assert - assert dataset.chunk_structure == "list" - assert dataset.indexing_technique == IndexTechniqueType.HIGH_QUALITY - assert dataset.embedding_model == "text-embedding-ada-002" - assert dataset.embedding_model_provider == "openai" - assert dataset.collection_binding_id == "binding-123" - - # Verify dataset was added to session - mock_session.add.assert_called_once_with(dataset) - - def test_update_rag_pipeline_dataset_settings_published_chunk_structure_error( - self, mock_session, mock_dataset_service_dependencies - ): - """ - Test error handling when trying to update chunk structure of published dataset. - - Verifies that when a dataset is published and has an existing chunk structure, - attempting to change it raises a ValueError. - - This test ensures: - - Chunk structure change is detected - - ValueError is raised with appropriate message - - No updates are committed - """ - # Arrange - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock( - dataset_id="dataset-123", - runtime_mode="rag_pipeline", - chunk_structure="tree", # Existing structure - indexing_technique=IndexTechniqueType.HIGH_QUALITY, - ) - - knowledge_config = DatasetUpdateDeleteTestDataFactory.create_knowledge_configuration_mock( - chunk_structure="list", # Different structure - indexing_technique=IndexTechniqueType.HIGH_QUALITY, - ) - - mock_session.merge.return_value = dataset - - # Act & Assert - with pytest.raises(ValueError, match="Chunk structure is not allowed to be updated"): - DatasetService.update_rag_pipeline_dataset_settings( - mock_session, dataset, knowledge_config, has_published=True - ) - - # Verify no commit was attempted - mock_session.commit.assert_not_called() - - def test_update_rag_pipeline_dataset_settings_published_economy_error( - self, mock_session, mock_dataset_service_dependencies - ): - """ - Test error handling when trying to change to economy indexing on published dataset. - - Verifies that when a dataset is published, changing indexing technique to - economy is not allowed and raises a ValueError. - - This test ensures: - - Economy indexing change is detected - - ValueError is raised with appropriate message - - No updates are committed - """ - # Arrange - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock( - dataset_id="dataset-123", - runtime_mode="rag_pipeline", - indexing_technique=IndexTechniqueType.HIGH_QUALITY, # Current technique - ) - - knowledge_config = DatasetUpdateDeleteTestDataFactory.create_knowledge_configuration_mock( - indexing_technique=IndexTechniqueType.ECONOMY, # Trying to change to economy - ) - - mock_session.merge.return_value = dataset - - # Act & Assert - with pytest.raises( - ValueError, match="Knowledge base indexing technique is not allowed to be updated to economy" - ): - DatasetService.update_rag_pipeline_dataset_settings( - mock_session, dataset, knowledge_config, has_published=True - ) - - def test_update_rag_pipeline_dataset_settings_missing_current_user_error( - self, mock_session, mock_dataset_service_dependencies - ): - """ - Test error handling when current_user is missing. - - Verifies that when current_user is None or has no tenant ID, a ValueError - is raised. - - This test ensures: - - Current user validation works correctly - - Error message is clear - - No updates are performed - """ - # Arrange - dataset = DatasetUpdateDeleteTestDataFactory.create_dataset_mock() - knowledge_config = DatasetUpdateDeleteTestDataFactory.create_knowledge_configuration_mock() - - mock_dataset_service_dependencies["current_user"].current_tenant_id = None # Missing tenant - - # Act & Assert - with pytest.raises(ValueError, match="Current user or current tenant not found"): - DatasetService.update_rag_pipeline_dataset_settings( - mock_session, dataset, knowledge_config, has_published=False - ) - - -# ============================================================================ -# Additional Documentation and Notes -# ============================================================================ -# -# This test suite covers the core update and delete operations for datasets. -# Additional test scenarios that could be added: -# -# 1. Update Operations: -# - Testing with different indexing techniques -# - Testing embedding model provider changes -# - Testing retrieval model updates -# - Testing icon_info updates -# - Testing partial_member_list updates -# -# 2. Delete Operations: -# - Testing cascade deletion of related data -# - Testing event handler execution -# - Testing with datasets that have documents -# - Testing with datasets that have segments -# -# 3. RAG Pipeline Operations: -# - Testing economy indexing technique updates -# - Testing embedding model provider errors -# - Testing keyword_number updates -# - Testing index update task triggering -# -# 4. Integration Scenarios: -# - Testing update followed by delete -# - Testing multiple updates in sequence -# - Testing concurrent update attempts -# - Testing with different user roles -# -# These scenarios are not currently implemented but could be added if needed -# based on real-world usage patterns or discovered edge cases. -# -# ============================================================================ diff --git a/api/tests/unit_tests/services/document_service_status.py b/api/tests/unit_tests/services/document_service_status.py deleted file mode 100644 index 1b682d5762..0000000000 --- a/api/tests/unit_tests/services/document_service_status.py +++ /dev/null @@ -1,70 +0,0 @@ -"""Unit tests for non-SQL validation in DocumentService status management methods.""" - -from unittest.mock import Mock, create_autospec - -import pytest - -from models import Account -from models.dataset import Dataset -from services.dataset_service import DocumentService - - -class DocumentStatusTestDataFactory: - """Factory class for creating test data and mock objects for document status tests.""" - - @staticmethod - def create_dataset_mock( - dataset_id: str = "dataset-123", - tenant_id: str = "tenant-123", - name: str = "Test Dataset", - built_in_field_enabled: bool = False, - **kwargs, - ) -> Mock: - """Create a mock Dataset with specified attributes.""" - dataset = Mock(spec=Dataset) - dataset.id = dataset_id - dataset.tenant_id = tenant_id - dataset.name = name - dataset.built_in_field_enabled = built_in_field_enabled - for key, value in kwargs.items(): - setattr(dataset, key, value) - return dataset - - @staticmethod - def create_user_mock( - user_id: str = "user-123", - tenant_id: str = "tenant-123", - **kwargs, - ) -> Mock: - """Create a mock user (Account) with specified attributes.""" - user = create_autospec(Account, instance=True) - user.id = user_id - user.current_tenant_id = tenant_id - for key, value in kwargs.items(): - setattr(user, key, value) - return user - - -class TestDocumentServiceBatchUpdateDocumentStatus: - """Unit tests for non-SQL path in DocumentService.batch_update_document_status.""" - - def test_batch_update_document_status_invalid_action_error(self): - """ - Test error handling for invalid action. - - Verifies that when an invalid action is provided, a ValueError - is raised. - - This test ensures: - - Invalid actions are rejected - - Error message is clear - - Error type is correct - """ - # Arrange - dataset = DocumentStatusTestDataFactory.create_dataset_mock() - user = DocumentStatusTestDataFactory.create_user_mock() - document_ids = ["document-123"] - - # Act & Assert - with pytest.raises(ValueError, match="Invalid action"): - DocumentService.batch_update_document_status(dataset, document_ids, "invalid_action", user) diff --git a/api/tests/unit_tests/services/document_service_validation.py b/api/tests/unit_tests/services/document_service_validation.py index 6903c47a24..71df8c4e20 100644 --- a/api/tests/unit_tests/services/document_service_validation.py +++ b/api/tests/unit_tests/services/document_service_validation.py @@ -109,11 +109,11 @@ This test suite follows a comprehensive testing strategy that covers: from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelType from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError from core.rag.entities import PreProcessingRule, Rule, Segmentation from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import Dataset, DatasetProcessRule, Document from services.dataset_service import DatasetService, DocumentService from services.entities.knowledge_entities.knowledge_entities import ( diff --git a/api/tests/unit_tests/services/external_dataset_service.py b/api/tests/unit_tests/services/external_dataset_service.py index 5848603ab8..83bae370eb 100644 --- a/api/tests/unit_tests/services/external_dataset_service.py +++ b/api/tests/unit_tests/services/external_dataset_service.py @@ -51,7 +51,7 @@ class ExternalDatasetTestDataFactory: tenant_id: str = "tenant-1", name: str = "Test API", description: str = "Description", - settings: dict | None = None, + settings: dict[str, Any] | None = None, ) -> ExternalKnowledgeApis: """ Create a concrete ``ExternalKnowledgeApis`` instance with minimal fields. @@ -220,7 +220,7 @@ class TestExternalDatasetServiceValidateApiList: ({"endpoint": "https://example.com"}, "api_key is required"), ], ) - def test_validate_api_list_failures(self, config: dict, expected_message: str): + def test_validate_api_list_failures(self, config: dict[str, Any], expected_message: str): """ Invalid configs should raise ``ValueError`` with a clear message. """ @@ -396,10 +396,11 @@ class TestExternalDatasetServiceUsageAndBindings: mock_db_session.scalar.return_value = 3 - in_use, count = ExternalDatasetService.external_knowledge_api_use_check("api-1") + in_use, count = ExternalDatasetService.external_knowledge_api_use_check("api-1", "tenant-1") assert in_use is True assert count == 3 + assert "tenant_id" in str(mock_db_session.scalar.call_args.args[0]) def test_external_knowledge_api_use_check_not_in_use(self, mock_db_session: MagicMock): """ @@ -408,7 +409,7 @@ class TestExternalDatasetServiceUsageAndBindings: mock_db_session.scalar.return_value = 0 - in_use, count = ExternalDatasetService.external_knowledge_api_use_check("api-1") + in_use, count = ExternalDatasetService.external_knowledge_api_use_check("api-1", "tenant-1") assert in_use is False assert count == 0 diff --git a/api/tests/unit_tests/services/hit_service.py b/api/tests/unit_tests/services/hit_service.py index 22ab8503df..ddbc7dc041 100644 --- a/api/tests/unit_tests/services/hit_service.py +++ b/api/tests/unit_tests/services/hit_service.py @@ -6,6 +6,7 @@ which handles retrieval testing operations for datasets, including internal dataset retrieval and external knowledge base retrieval. """ +from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest @@ -30,7 +31,7 @@ class HitTestingTestDataFactory: dataset_id: str = "dataset-123", tenant_id: str = "tenant-123", provider: str = "vendor", - retrieval_model: dict | None = None, + retrieval_model: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ @@ -83,7 +84,7 @@ class HitTestingTestDataFactory: @staticmethod def create_document_mock( content: str = "Test document content", - metadata: dict | None = None, + metadata: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ diff --git a/api/tests/unit_tests/services/plugin/test_plugin_auto_upgrade_service.py b/api/tests/unit_tests/services/plugin/test_plugin_auto_upgrade_service.py index edb50d09a6..021bebceff 100644 --- a/api/tests/unit_tests/services/plugin/test_plugin_auto_upgrade_service.py +++ b/api/tests/unit_tests/services/plugin/test_plugin_auto_upgrade_service.py @@ -6,23 +6,23 @@ MODULE = "services.plugin.plugin_auto_upgrade_service" def _patched_session(): - """Patch Session(db.engine) to return a mock session as context manager.""" + """Patch session_factory.create_session() to return a mock session as context manager.""" session = MagicMock() - session_cls = MagicMock() - session_cls.return_value.__enter__ = MagicMock(return_value=session) - session_cls.return_value.__exit__ = MagicMock(return_value=False) - patcher = patch(f"{MODULE}.Session", session_cls) - db_patcher = patch(f"{MODULE}.db") - return patcher, db_patcher, session + session.__enter__ = MagicMock(return_value=session) + session.__exit__ = MagicMock(return_value=False) + mock_factory = MagicMock() + mock_factory.create_session.return_value = session + patcher = patch(f"{MODULE}.session_factory", mock_factory) + return patcher, session class TestGetStrategy: def test_returns_strategy_when_found(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() strategy = MagicMock() - session.query.return_value.where.return_value.first.return_value = strategy + session.scalar.return_value = strategy - with p1, p2: + with p1: from services.plugin.plugin_auto_upgrade_service import PluginAutoUpgradeService result = PluginAutoUpgradeService.get_strategy("t1") @@ -30,10 +30,10 @@ class TestGetStrategy: assert result is strategy def test_returns_none_when_not_found(self): - p1, p2, session = _patched_session() - session.query.return_value.where.return_value.first.return_value = None + p1, session = _patched_session() + session.scalar.return_value = None - with p1, p2: + with p1: from services.plugin.plugin_auto_upgrade_service import PluginAutoUpgradeService result = PluginAutoUpgradeService.get_strategy("t1") @@ -43,10 +43,10 @@ class TestGetStrategy: class TestChangeStrategy: def test_creates_new_strategy(self): - p1, p2, session = _patched_session() - session.query.return_value.where.return_value.first.return_value = None + p1, session = _patched_session() + session.scalar.return_value = None - with p1, p2, patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: + with p1, patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: strat_cls.return_value = MagicMock() from services.plugin.plugin_auto_upgrade_service import PluginAutoUpgradeService @@ -61,14 +61,13 @@ class TestChangeStrategy: assert result is True session.add.assert_called_once() - session.commit.assert_called_once() def test_updates_existing_strategy(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() existing = MagicMock() - session.query.return_value.where.return_value.first.return_value = existing + session.scalar.return_value = existing - with p1, p2: + with p1: from services.plugin.plugin_auto_upgrade_service import PluginAutoUpgradeService result = PluginAutoUpgradeService.change_strategy( @@ -86,17 +85,16 @@ class TestChangeStrategy: assert existing.upgrade_mode == TenantPluginAutoUpgradeStrategy.UpgradeMode.PARTIAL assert existing.exclude_plugins == ["p1"] assert existing.include_plugins == ["p2"] - session.commit.assert_called_once() class TestExcludePlugin: def test_creates_default_strategy_when_none_exists(self): - p1, p2, session = _patched_session() - session.query.return_value.where.return_value.first.return_value = None + p1, session = _patched_session() + session.scalar.return_value = None with ( p1, - p2, + patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls, patch(f"{MODULE}.PluginAutoUpgradeService.change_strategy") as cs, ): @@ -111,13 +109,13 @@ class TestExcludePlugin: cs.assert_called_once() def test_appends_to_exclude_list_in_exclude_mode(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() existing = MagicMock() existing.upgrade_mode = "exclude" existing.exclude_plugins = ["p-existing"] - session.query.return_value.where.return_value.first.return_value = existing + session.scalar.return_value = existing - with p1, p2, patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: + with p1, patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: strat_cls.UpgradeMode.EXCLUDE = "exclude" strat_cls.UpgradeMode.PARTIAL = "partial" strat_cls.UpgradeMode.ALL = "all" @@ -127,16 +125,15 @@ class TestExcludePlugin: assert result is True assert existing.exclude_plugins == ["p-existing", "p-new"] - session.commit.assert_called_once() def test_removes_from_include_list_in_partial_mode(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() existing = MagicMock() existing.upgrade_mode = "partial" existing.include_plugins = ["p1", "p2"] - session.query.return_value.where.return_value.first.return_value = existing + session.scalar.return_value = existing - with p1, p2, patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: + with p1, patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: strat_cls.UpgradeMode.EXCLUDE = "exclude" strat_cls.UpgradeMode.PARTIAL = "partial" strat_cls.UpgradeMode.ALL = "all" @@ -148,12 +145,12 @@ class TestExcludePlugin: assert existing.include_plugins == ["p2"] def test_switches_to_exclude_mode_from_all(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() existing = MagicMock() existing.upgrade_mode = "all" - session.query.return_value.where.return_value.first.return_value = existing + session.scalar.return_value = existing - with p1, p2, patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: + with p1, patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: strat_cls.UpgradeMode.EXCLUDE = "exclude" strat_cls.UpgradeMode.PARTIAL = "partial" strat_cls.UpgradeMode.ALL = "all" @@ -166,13 +163,13 @@ class TestExcludePlugin: assert existing.exclude_plugins == ["p1"] def test_no_duplicate_in_exclude_list(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() existing = MagicMock() existing.upgrade_mode = "exclude" existing.exclude_plugins = ["p1"] - session.query.return_value.where.return_value.first.return_value = existing + session.scalar.return_value = existing - with p1, p2, patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: + with p1, patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginAutoUpgradeStrategy") as strat_cls: strat_cls.UpgradeMode.EXCLUDE = "exclude" strat_cls.UpgradeMode.PARTIAL = "partial" strat_cls.UpgradeMode.ALL = "all" diff --git a/api/tests/unit_tests/services/plugin/test_plugin_permission_service.py b/api/tests/unit_tests/services/plugin/test_plugin_permission_service.py index 69091110db..53a9e6210c 100644 --- a/api/tests/unit_tests/services/plugin/test_plugin_permission_service.py +++ b/api/tests/unit_tests/services/plugin/test_plugin_permission_service.py @@ -6,23 +6,25 @@ MODULE = "services.plugin.plugin_permission_service" def _patched_session(): - """Patch Session(db.engine) to return a mock session as context manager.""" + """Patch session_factory.create_session() to return a mock session as context manager.""" session = MagicMock() - session_cls = MagicMock() - session_cls.return_value.__enter__ = MagicMock(return_value=session) - session_cls.return_value.__exit__ = MagicMock(return_value=False) - patcher = patch(f"{MODULE}.Session", session_cls) - db_patcher = patch(f"{MODULE}.db") - return patcher, db_patcher, session + session.__enter__ = MagicMock(return_value=session) + session.__exit__ = MagicMock(return_value=False) + session.begin.return_value.__enter__ = MagicMock(return_value=session) + session.begin.return_value.__exit__ = MagicMock(return_value=False) + mock_factory = MagicMock() + mock_factory.create_session.return_value = session + patcher = patch(f"{MODULE}.session_factory", mock_factory) + return patcher, session class TestGetPermission: def test_returns_permission_when_found(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() permission = MagicMock() - session.query.return_value.where.return_value.first.return_value = permission + session.scalar.return_value = permission - with p1, p2: + with p1: from services.plugin.plugin_permission_service import PluginPermissionService result = PluginPermissionService.get_permission("t1") @@ -30,10 +32,10 @@ class TestGetPermission: assert result is permission def test_returns_none_when_not_found(self): - p1, p2, session = _patched_session() - session.query.return_value.where.return_value.first.return_value = None + p1, session = _patched_session() + session.scalar.return_value = None - with p1, p2: + with p1: from services.plugin.plugin_permission_service import PluginPermissionService result = PluginPermissionService.get_permission("t1") @@ -43,10 +45,10 @@ class TestGetPermission: class TestChangePermission: def test_creates_new_permission_when_not_exists(self): - p1, p2, session = _patched_session() - session.query.return_value.where.return_value.first.return_value = None + p1, session = _patched_session() + session.scalar.return_value = None - with p1, p2, patch(f"{MODULE}.TenantPluginPermission") as perm_cls: + with p1, patch(f"{MODULE}.select"), patch(f"{MODULE}.TenantPluginPermission") as perm_cls: perm_cls.return_value = MagicMock() from services.plugin.plugin_permission_service import PluginPermissionService @@ -54,22 +56,24 @@ class TestChangePermission: "t1", TenantPluginPermission.InstallPermission.EVERYONE, TenantPluginPermission.DebugPermission.EVERYONE ) + assert result is True + session.begin.assert_called_once() session.add.assert_called_once() - session.commit.assert_called_once() def test_updates_existing_permission(self): - p1, p2, session = _patched_session() + p1, session = _patched_session() existing = MagicMock() - session.query.return_value.where.return_value.first.return_value = existing + session.scalar.return_value = existing - with p1, p2: + with p1: from services.plugin.plugin_permission_service import PluginPermissionService result = PluginPermissionService.change_permission( "t1", TenantPluginPermission.InstallPermission.ADMINS, TenantPluginPermission.DebugPermission.ADMINS ) + assert result is True + session.begin.assert_called_once() assert existing.install_permission == TenantPluginPermission.InstallPermission.ADMINS assert existing.debug_permission == TenantPluginPermission.DebugPermission.ADMINS - session.commit.assert_called_once() session.add.assert_not_called() diff --git a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py index f4fdac5f9f..337659b15f 100644 --- a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py +++ b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py @@ -1,13 +1,13 @@ from types import SimpleNamespace -from typing import cast +from typing import Any, cast from unittest.mock import MagicMock, Mock import pytest import yaml -from graphon.enums import BuiltinNodeTypes from sqlalchemy.orm import Session from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE +from graphon.enums import BuiltinNodeTypes from services.entities.knowledge_entities.rag_pipeline_entities import IconInfo, RagPipelineDatasetCreateEntity from services.rag_pipeline.rag_pipeline_dsl_service import ( ImportStatus, @@ -247,10 +247,11 @@ workflow: dataset_mock = Mock() dataset_mock.id = "d1" mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Dataset", return_value=dataset_mock) + mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.select", return_value=MagicMock()) session = cast(MagicMock, Mock()) service = RagPipelineDslService(session=cast(Session, session)) - session.query.return_value.filter_by.return_value.all.return_value = [] + session.scalars.return_value.all.return_value = [] account = Mock(current_tenant_id="t1") result = service.import_rag_pipeline(account=account, import_mode="yaml-content", yaml_content=yaml_content) @@ -320,6 +321,7 @@ workflow: dataset_mock.id = "d1" mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Dataset", return_value=dataset_mock) mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.DatasetCollectionBinding", return_value=Mock(id="b1")) + mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.select", return_value=MagicMock()) service = RagPipelineDslService(session=Mock()) # Mocking self._session.scalar for the pipeline lookup @@ -406,12 +408,14 @@ def test_create_or_update_pipeline_create_new(mocker) -> None: mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.current_user", SimpleNamespace(id="u1")) mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Workflow", return_value=Mock()) + mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.select", return_value=MagicMock()) pipeline_cls = mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Pipeline") pipeline_instance = pipeline_cls.return_value pipeline_instance.tenant_id = "t1" pipeline_instance.id = "p1" pipeline_instance.name = "P" pipeline_instance.is_published = False + session.scalar.return_value = None result = service._create_or_update_pipeline(pipeline=None, data=data, account=account, dependencies=[]) @@ -447,8 +451,7 @@ def test_export_rag_pipeline_dsl_with_workflow(mocker) -> None: workflow.rag_pipeline_variables = [] workflow.to_dict.return_value = {"graph": {"nodes": []}} - # Mocking single .where() call - session.query.return_value.where.return_value.first.return_value = workflow + session.scalar.return_value = workflow mocker.patch( "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], @@ -550,12 +553,12 @@ def test_append_workflow_export_data_filters_credentials(mocker) -> None: ] } } - session.query.return_value.where.return_value.first.return_value = workflow + session.scalar.return_value = workflow mocker.patch( "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], ) - export_data: dict = {} + export_data: dict[str, Any] = {} pipeline = Mock(id="p1", tenant_id="t1") service._append_workflow_export_data(export_data=export_data, pipeline=pipeline, include_secret=False) @@ -568,7 +571,7 @@ def test_append_workflow_export_data_filters_credentials(mocker) -> None: def test_create_rag_pipeline_dataset_raises_when_name_conflicts(mocker) -> None: session = cast(MagicMock, Mock()) service = RagPipelineDslService(session=cast(Session, session)) - session.query.return_value.filter_by.return_value.first.return_value = Mock() + session.scalar.return_value = Mock() create_entity = RagPipelineDatasetCreateEntity( name="Existing Name", description="", @@ -584,8 +587,8 @@ def test_create_rag_pipeline_dataset_raises_when_name_conflicts(mocker) -> None: def test_create_rag_pipeline_dataset_generates_name_when_missing(mocker) -> None: session = cast(MagicMock, Mock()) service = RagPipelineDslService(session=cast(Session, session)) - session.query.return_value.filter_by.return_value.first.return_value = None - session.query.return_value.filter_by.return_value.all.return_value = [Mock(name="Untitled")] + session.scalar.return_value = None + session.scalars.return_value.all.return_value = [Mock(name="Untitled")] mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.generate_incremental_name", return_value="Untitled 2") mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.current_user", Mock(id="u1", current_tenant_id="t1")) mocker.patch.object( @@ -632,13 +635,13 @@ def test_append_workflow_export_data_encrypts_knowledge_retrieval_dataset_ids(mo ] } } - session.query.return_value.where.return_value.first.return_value = workflow + session.scalar.return_value = workflow mocker.patch.object(service, "encrypt_dataset_id", side_effect=lambda dataset_id, tenant_id: f"enc-{dataset_id}") mocker.patch( "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], ) - export_data: dict = {} + export_data: dict[str, Any] = {} pipeline = Mock(id="p1", tenant_id="t1") service._append_workflow_export_data(export_data=export_data, pipeline=pipeline, include_secret=False) @@ -727,7 +730,7 @@ def test_create_or_update_pipeline_decrypts_knowledge_retrieval_dataset_ids(mock }, } draft_workflow = Mock(id="wf1") - session.query.return_value.where.return_value.first.return_value = draft_workflow + session.scalar.return_value = draft_workflow mocker.patch.object(service, "decrypt_dataset_id", side_effect=["d1", None]) result = service._create_or_update_pipeline(pipeline=pipeline, data=data, account=account) @@ -743,7 +746,8 @@ def test_create_or_update_pipeline_creates_draft_when_missing(mocker) -> None: account = Mock(id="u1", current_tenant_id="t1") pipeline = Mock(id="p1", tenant_id="t1", name="N", description="D") data = {"rag_pipeline": {"name": "N2", "description": "D2"}, "workflow": {"graph": {"nodes": []}}} - session.query.return_value.where.return_value.first.return_value = None + session.scalar.return_value = None + mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.select", return_value=MagicMock()) workflow_cls = mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Workflow") workflow_cls.return_value.id = "wf-new" @@ -817,7 +821,7 @@ def test_import_rag_pipeline_fails_for_non_string_version_type() -> None: def test_append_workflow_export_data_raises_when_draft_workflow_missing() -> None: session = cast(MagicMock, Mock()) service = RagPipelineDslService(session=cast(Session, session)) - session.query.return_value.where.return_value.first.return_value = None + session.scalar.return_value = None with pytest.raises(ValueError, match="Missing draft workflow configuration"): service._append_workflow_export_data(export_data={}, pipeline=Mock(tenant_id="t1"), include_secret=False) @@ -841,7 +845,7 @@ def test_append_workflow_export_data_keeps_secret_fields_when_include_secret_tru ] } } - session.query.return_value.where.return_value.first.return_value = workflow + session.scalar.return_value = workflow mocker.patch( "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], @@ -1003,7 +1007,8 @@ def test_import_rag_pipeline_sets_default_version_and_kind(mocker) -> None: ) dataset = Mock(id="d1") mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Dataset", return_value=dataset) - session.query.return_value.filter_by.return_value.all.return_value = [] + mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.select", return_value=MagicMock()) + session.scalars.return_value.all.return_value = [] mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.generate_incremental_name", return_value="P") result = service.import_rag_pipeline( @@ -1061,7 +1066,7 @@ def test_append_workflow_export_data_skips_empty_node_data(mocker) -> None: workflow = Mock() workflow.graph_dict = {"nodes": []} workflow.to_dict.return_value = {"graph": {"nodes": [{"data": {}}, {}]}} - session.query.return_value.where.return_value.first.return_value = workflow + session.scalar.return_value = workflow mocker.patch( "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], @@ -1246,11 +1251,12 @@ def test_create_or_update_pipeline_saves_dependencies_to_redis(mocker) -> None: account = Mock(id="u1", current_tenant_id="t1") mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.current_user", SimpleNamespace(id="u1")) mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Workflow", return_value=Mock(id="wf-1")) + mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.select", return_value=MagicMock()) pipeline_cls = mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.Pipeline") pipeline = pipeline_cls.return_value pipeline.tenant_id = "t1" pipeline.id = "p1" - session.query.return_value.where.return_value.first.return_value = None + session.scalar.return_value = None setex = mocker.patch("services.rag_pipeline.rag_pipeline_dsl_service.redis_client.setex") dependency = PluginDependency( type=PluginDependency.Type.Marketplace, diff --git a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py index f270ee0fde..327281d07f 100644 --- a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py +++ b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py @@ -116,81 +116,6 @@ def test_get_all_published_workflow_applies_limit_and_has_more(rag_pipeline_serv assert has_more is True -def test_get_pipeline_raises_when_dataset_not_found(mocker, rag_pipeline_service) -> None: - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.scalar", return_value=None) - - with pytest.raises(ValueError, match="Dataset not found"): - rag_pipeline_service.get_pipeline("tenant-1", "dataset-1") - - -# --- update_customized_pipeline_template --- - - -def test_update_customized_pipeline_template_success(mocker) -> None: - template = SimpleNamespace(name="old", description="old", icon={}, updated_by=None) - - # First scalar finds the template, second scalar (duplicate check) returns None - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.scalar", side_effect=[template, None]) - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.commit") - mocker.patch("services.rag_pipeline.rag_pipeline.current_user", SimpleNamespace(id="u1", current_tenant_id="t1")) - - info = PipelineTemplateInfoEntity( - name="new", - description="new desc", - icon_info=IconInfo(icon="🔥"), - ) - result = RagPipelineService.update_customized_pipeline_template("tpl-1", info) - - assert result.name == "new" - assert result.description == "new desc" - - -def test_update_customized_pipeline_template_not_found(mocker) -> None: - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.scalar", return_value=None) - mocker.patch("services.rag_pipeline.rag_pipeline.current_user", SimpleNamespace(id="u1", current_tenant_id="t1")) - - info = PipelineTemplateInfoEntity(name="x", description="d", icon_info=IconInfo(icon="i")) - with pytest.raises(ValueError, match="Customized pipeline template not found"): - RagPipelineService.update_customized_pipeline_template("tpl-missing", info) - - -def test_update_customized_pipeline_template_duplicate_name(mocker) -> None: - template = SimpleNamespace(name="old", description="old", icon={}, updated_by=None) - duplicate = SimpleNamespace(name="dup") - - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.scalar", side_effect=[template, duplicate]) - mocker.patch("services.rag_pipeline.rag_pipeline.current_user", SimpleNamespace(id="u1", current_tenant_id="t1")) - - info = PipelineTemplateInfoEntity(name="dup", description="d", icon_info=IconInfo(icon="i")) - with pytest.raises(ValueError, match="Template name is already exists"): - RagPipelineService.update_customized_pipeline_template("tpl-1", info) - - -# --- delete_customized_pipeline_template --- - - -def test_delete_customized_pipeline_template_success(mocker) -> None: - template = SimpleNamespace(id="tpl-1") - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.scalar", return_value=template) - delete_mock = mocker.patch("services.rag_pipeline.rag_pipeline.db.session.delete") - commit_mock = mocker.patch("services.rag_pipeline.rag_pipeline.db.session.commit") - - mocker.patch("services.rag_pipeline.rag_pipeline.current_user", SimpleNamespace(id="u1", current_tenant_id="t1")) - - RagPipelineService.delete_customized_pipeline_template("tpl-1") - - delete_mock.assert_called_once_with(template) - commit_mock.assert_called_once() - - -def test_delete_customized_pipeline_template_not_found(mocker) -> None: - mocker.patch("services.rag_pipeline.rag_pipeline.db.session.scalar", return_value=None) - mocker.patch("services.rag_pipeline.rag_pipeline.current_user", SimpleNamespace(id="u1", current_tenant_id="t1")) - - with pytest.raises(ValueError, match="Customized pipeline template not found"): - RagPipelineService.delete_customized_pipeline_template("tpl-missing") - - # --- sync_draft_workflow --- @@ -862,7 +787,6 @@ def test_retry_error_document_success(mocker, rag_pipeline_service) -> None: def test_set_datasource_variables_success(mocker, rag_pipeline_service) -> None: from graphon.entities.workflow_node_execution import WorkflowNodeExecution - from models.dataset import Pipeline # 1. Setup mocks @@ -1558,12 +1482,11 @@ def test_handle_node_run_result_raises_when_no_terminal_event(mocker, rag_pipeli def test_handle_node_run_result_marks_document_error_for_published_invoke(mocker, rag_pipeline_service) -> None: + from core.app.entities.app_invoke_entities import InvokeFrom from graphon.enums import WorkflowNodeExecutionStatus from graphon.graph_events import NodeRunFailedEvent from graphon.node_events.base import NodeRunResult - from core.app.entities.app_invoke_entities import InvokeFrom - class FakeVariablePool: def __init__(self): self._values = { diff --git a/api/tests/unit_tests/services/test_account_service.py b/api/tests/unit_tests/services/test_account_service.py index d15074e7a6..c4f5f57153 100644 --- a/api/tests/unit_tests/services/test_account_service.py +++ b/api/tests/unit_tests/services/test_account_service.py @@ -5,7 +5,7 @@ from unittest.mock import MagicMock, patch import pytest from configs import dify_config -from models.account import Account, AccountStatus +from models.account import Account, AccountStatus, TenantStatus from services.account_service import AccountService, RegisterService, TenantService from services.errors.account import ( AccountAlreadyInTenantError, @@ -1427,16 +1427,7 @@ class TestRegisterService: mock_tenant.name = "Test Workspace" mock_inviter = TestAccountAssociatedDataFactory.create_account_mock(account_id="inviter-123", name="Inviter") - # Mock database queries - need to mock the sessionmaker query - mock_session = MagicMock() - mock_session.query.return_value.filter_by.return_value.first.return_value = None # No existing account - - mock_sessionmaker = MagicMock() - mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session - mock_sessionmaker.return_value.begin.return_value.__exit__.return_value = None - with ( - patch("services.account_service.sessionmaker", mock_sessionmaker), patch("services.account_service.AccountService.get_account_by_email_with_case_fallback") as mock_lookup, ): mock_lookup.return_value = None @@ -1475,7 +1466,7 @@ class TestRegisterService: status=AccountStatus.PENDING, is_setup=True, ) - mock_lookup.assert_called_once_with("newuser@example.com", session=mock_session) + mock_lookup.assert_called_once_with("newuser@example.com") def test_invite_new_member_normalizes_new_account_email( self, mock_db_dependencies, mock_redis_dependencies, mock_task_dependencies @@ -1486,13 +1477,7 @@ class TestRegisterService: mock_inviter = TestAccountAssociatedDataFactory.create_account_mock(account_id="inviter-123", name="Inviter") mixed_email = "Invitee@Example.com" - mock_session = MagicMock() - mock_sessionmaker = MagicMock() - mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session - mock_sessionmaker.return_value.begin.return_value.__exit__.return_value = None - with ( - patch("services.account_service.sessionmaker", mock_sessionmaker), patch("services.account_service.AccountService.get_account_by_email_with_case_fallback") as mock_lookup, ): mock_lookup.return_value = None @@ -1525,7 +1510,7 @@ class TestRegisterService: status=AccountStatus.PENDING, is_setup=True, ) - mock_lookup.assert_called_once_with(mixed_email, session=mock_session) + mock_lookup.assert_called_once_with(mixed_email) mock_check_permission.assert_called_once_with(mock_tenant, mock_inviter, None, "add") mock_create_member.assert_called_once_with(mock_tenant, mock_new_account, "normal") mock_switch_tenant.assert_called_once_with(mock_new_account, mock_tenant.id) @@ -1545,16 +1530,7 @@ class TestRegisterService: account_id="existing-user-456", email="existing@example.com", status="pending" ) - # Mock database queries - need to mock the sessionmaker query - mock_session = MagicMock() - mock_session.query.return_value.filter_by.return_value.first.return_value = mock_existing_account - - mock_sessionmaker = MagicMock() - mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session - mock_sessionmaker.return_value.begin.return_value.__exit__.return_value = None - with ( - patch("services.account_service.sessionmaker", mock_sessionmaker), patch("services.account_service.AccountService.get_account_by_email_with_case_fallback") as mock_lookup, ): mock_lookup.return_value = mock_existing_account @@ -1584,7 +1560,7 @@ class TestRegisterService: mock_create_member.assert_called_once_with(mock_tenant, mock_existing_account, "normal") mock_generate_token.assert_called_once_with(mock_tenant, mock_existing_account) mock_task_dependencies.delay.assert_called_once() - mock_lookup.assert_called_once_with("existing@example.com", session=mock_session) + mock_lookup.assert_called_once_with("existing@example.com") def test_invite_new_member_already_in_tenant(self, mock_db_dependencies, mock_redis_dependencies): """Test inviting a member who is already in the tenant.""" @@ -1721,7 +1697,7 @@ class TestRegisterService: # Setup test data mock_tenant = MagicMock() mock_tenant.id = "tenant-456" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_account = TestAccountAssociatedDataFactory.create_account_mock( account_id="user-123", email="test@example.com" ) @@ -1783,7 +1759,7 @@ class TestRegisterService: # Setup test data mock_tenant = MagicMock() mock_tenant.id = "tenant-456" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL # Mock Redis data invitation_data = { @@ -1808,7 +1784,7 @@ class TestRegisterService: # Setup test data mock_tenant = MagicMock() mock_tenant.id = "tenant-456" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_account = TestAccountAssociatedDataFactory.create_account_mock( account_id="different-user-456", email="test@example.com" ) diff --git a/api/tests/unit_tests/services/test_app_dsl_service.py b/api/tests/unit_tests/services/test_app_dsl_service.py deleted file mode 100644 index b2a2a1f685..0000000000 --- a/api/tests/unit_tests/services/test_app_dsl_service.py +++ /dev/null @@ -1,970 +0,0 @@ -import base64 -from types import SimpleNamespace -from unittest.mock import MagicMock - -import pytest -import yaml -from graphon.enums import BuiltinNodeTypes - -from core.trigger.constants import ( - TRIGGER_PLUGIN_NODE_TYPE, - TRIGGER_SCHEDULE_NODE_TYPE, - TRIGGER_WEBHOOK_NODE_TYPE, -) -from models import Account, App, AppMode -from models.model import IconType -from services import app_dsl_service -from services.app_dsl_service import ( - AppDslService, - CheckDependenciesPendingData, - ImportMode, - ImportStatus, - PendingData, - _check_version_compatibility, -) - - -class _FakeHttpResponse: - def __init__(self, content: bytes, *, raises: Exception | None = None): - self.content = content - self._raises = raises - - def raise_for_status(self) -> None: - if self._raises is not None: - raise self._raises - - -def _account_mock(*, tenant_id: str = "tenant-1", account_id: str = "account-1") -> MagicMock: - account = MagicMock(spec=Account) - account.current_tenant_id = tenant_id - account.id = account_id - return account - - -def _app_mock(**kwargs: object) -> MagicMock: - """Create a MagicMock with spec=App for type-safe test doubles.""" - app = MagicMock(spec=App) - for key, value in kwargs.items(): - setattr(app, key, value) - return app - - -def _yaml_dump(data: dict) -> str: - return yaml.safe_dump(data, allow_unicode=True) - - -def _workflow_yaml(*, version: str = app_dsl_service.CURRENT_DSL_VERSION) -> str: - return _yaml_dump( - { - "version": version, - "kind": "app", - "app": {"name": "My App", "mode": AppMode.WORKFLOW.value}, - "workflow": {"graph": {"nodes": []}, "features": {}}, - } - ) - - -def test_check_version_compatibility_invalid_version_returns_failed(): - assert _check_version_compatibility("not-a-version") == ImportStatus.FAILED - - -def test_check_version_compatibility_newer_version_returns_pending(): - assert _check_version_compatibility("99.0.0") == ImportStatus.PENDING - - -def test_check_version_compatibility_major_older_returns_pending(monkeypatch): - monkeypatch.setattr(app_dsl_service, "CURRENT_DSL_VERSION", "1.0.0") - assert _check_version_compatibility("0.9.9") == ImportStatus.PENDING - - -def test_check_version_compatibility_minor_older_returns_completed_with_warnings(): - assert _check_version_compatibility("0.5.0") == ImportStatus.COMPLETED_WITH_WARNINGS - - -def test_check_version_compatibility_equal_returns_completed(): - assert _check_version_compatibility(app_dsl_service.CURRENT_DSL_VERSION) == ImportStatus.COMPLETED - - -def test_import_app_invalid_import_mode_raises_value_error(): - service = AppDslService(MagicMock()) - with pytest.raises(ValueError, match="Invalid import_mode"): - service.import_app(account=_account_mock(), import_mode="invalid-mode", yaml_content="version: '0.1.0'") - - -def test_import_app_yaml_url_requires_url(): - service = AppDslService(MagicMock()) - result = service.import_app(account=_account_mock(), import_mode=ImportMode.YAML_URL, yaml_url=None) - assert result.status == ImportStatus.FAILED - assert "yaml_url is required" in result.error - - -def test_import_app_yaml_content_requires_content(): - service = AppDslService(MagicMock()) - result = service.import_app(account=_account_mock(), import_mode=ImportMode.YAML_CONTENT, yaml_content=None) - assert result.status == ImportStatus.FAILED - assert "yaml_content is required" in result.error - - -def test_import_app_yaml_url_fetch_error_returns_failed(monkeypatch): - def fake_get(_url: str, **_kwargs): - raise RuntimeError("boom") - - monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), import_mode=ImportMode.YAML_URL, yaml_url="https://example.com/a.yml" - ) - assert result.status == ImportStatus.FAILED - assert "Error fetching YAML from URL: boom" in result.error - - -def test_import_app_yaml_url_empty_content_returns_failed(monkeypatch): - def fake_get(_url: str, **_kwargs): - return _FakeHttpResponse(b"") - - monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), import_mode=ImportMode.YAML_URL, yaml_url="https://example.com/a.yml" - ) - assert result.status == ImportStatus.FAILED - assert "Empty content" in result.error - - -def test_import_app_yaml_url_file_too_large_returns_failed(monkeypatch): - def fake_get(_url: str, **_kwargs): - return _FakeHttpResponse(b"x" * (app_dsl_service.DSL_MAX_SIZE + 1)) - - monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), import_mode=ImportMode.YAML_URL, yaml_url="https://example.com/a.yml" - ) - assert result.status == ImportStatus.FAILED - assert "File size exceeds" in result.error - - -def test_import_app_yaml_not_mapping_returns_failed(): - service = AppDslService(MagicMock()) - result = service.import_app(account=_account_mock(), import_mode=ImportMode.YAML_CONTENT, yaml_content="[]") - assert result.status == ImportStatus.FAILED - assert "content must be a mapping" in result.error - - -def test_import_app_version_not_str_returns_failed(): - service = AppDslService(MagicMock()) - yaml_content = _yaml_dump({"version": 1, "kind": "app", "app": {"name": "x", "mode": "workflow"}}) - result = service.import_app(account=_account_mock(), import_mode=ImportMode.YAML_CONTENT, yaml_content=yaml_content) - assert result.status == ImportStatus.FAILED - assert "Invalid version type" in result.error - - -def test_import_app_missing_app_data_returns_failed(): - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_CONTENT, - yaml_content=_yaml_dump({"version": "0.6.0", "kind": "app"}), - ) - assert result.status == ImportStatus.FAILED - assert "Missing app data" in result.error - - -def test_import_app_app_id_not_found_returns_failed(monkeypatch): - def fake_select(_model): - stmt = MagicMock() - stmt.where.return_value = stmt - return stmt - - monkeypatch.setattr(app_dsl_service, "select", fake_select) - - session = MagicMock() - session.scalar.return_value = None - service = AppDslService(session) - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_CONTENT, - yaml_content=_workflow_yaml(), - app_id="missing-app", - ) - assert result.status == ImportStatus.FAILED - assert result.error == "App not found" - - -def test_import_app_overwrite_only_allows_workflow_and_advanced_chat(monkeypatch): - def fake_select(_model): - stmt = MagicMock() - stmt.where.return_value = stmt - return stmt - - monkeypatch.setattr(app_dsl_service, "select", fake_select) - - existing_app = _app_mock(id="app-1", tenant_id="tenant-1", mode=AppMode.CHAT.value) - - session = MagicMock() - session.scalar.return_value = existing_app - service = AppDslService(session) - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_CONTENT, - yaml_content=_workflow_yaml(), - app_id="app-1", - ) - assert result.status == ImportStatus.FAILED - assert "Only workflow or advanced chat apps" in result.error - - -def test_import_app_pending_stores_import_info_in_redis(): - service = AppDslService(MagicMock()) - app_dsl_service.redis_client.setex.reset_mock() - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_CONTENT, - yaml_content=_workflow_yaml(version="99.0.0"), - name="n", - description="d", - icon_type="emoji", - icon="i", - icon_background="#000000", - ) - assert result.status == ImportStatus.PENDING - assert result.imported_dsl_version == "99.0.0" - - app_dsl_service.redis_client.setex.assert_called_once() - call = app_dsl_service.redis_client.setex.call_args - redis_key = call.args[0] - assert redis_key.startswith(app_dsl_service.IMPORT_INFO_REDIS_KEY_PREFIX) - - -def test_import_app_completed_uses_declared_dependencies(monkeypatch): - dependencies_payload = [{"id": "langgenius/google", "version": "1.0.0"}] - - plugin_deps = [SimpleNamespace(model_dump=lambda: dependencies_payload[0])] - monkeypatch.setattr( - app_dsl_service.PluginDependency, - "model_validate", - lambda d: plugin_deps[0], - ) - - created_app = _app_mock(id="app-new", mode=AppMode.WORKFLOW.value, tenant_id="tenant-1") - monkeypatch.setattr(AppDslService, "_create_or_update_app", lambda *_args, **_kwargs: created_app) - - draft_var_service = MagicMock() - monkeypatch.setattr(app_dsl_service, "WorkflowDraftVariableService", lambda *args, **kwargs: draft_var_service) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_CONTENT, - yaml_content=_yaml_dump( - { - "version": app_dsl_service.CURRENT_DSL_VERSION, - "kind": "app", - "app": {"name": "My App", "mode": AppMode.WORKFLOW.value}, - "workflow": {"graph": {"nodes": []}, "features": {}}, - "dependencies": dependencies_payload, - } - ), - ) - - assert result.status == ImportStatus.COMPLETED - assert result.app_id == "app-new" - draft_var_service.delete_app_workflow_variables.assert_called_once_with(app_id="app-new") - - -@pytest.mark.parametrize("has_workflow", [True, False]) -def test_import_app_legacy_versions_extract_dependencies(monkeypatch, has_workflow: bool): - monkeypatch.setattr( - AppDslService, - "_extract_dependencies_from_workflow_graph", - lambda *_args, **_kwargs: ["from-workflow"], - ) - monkeypatch.setattr( - AppDslService, - "_extract_dependencies_from_model_config", - lambda *_args, **_kwargs: ["from-model-config"], - ) - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "generate_latest_dependencies", - lambda deps: [SimpleNamespace(model_dump=lambda: {"dep": deps[0]})], - ) - - created_app = _app_mock(id="app-legacy", mode=AppMode.WORKFLOW.value, tenant_id="tenant-1") - monkeypatch.setattr(AppDslService, "_create_or_update_app", lambda *_args, **_kwargs: created_app) - - draft_var_service = MagicMock() - monkeypatch.setattr(app_dsl_service, "WorkflowDraftVariableService", lambda *args, **kwargs: draft_var_service) - - data: dict = { - "version": "0.1.5", - "kind": "app", - "app": {"name": "Legacy", "mode": AppMode.WORKFLOW.value}, - } - if has_workflow: - data["workflow"] = {"graph": {"nodes": []}, "features": {}} - else: - data["model_config"] = {"model": {"provider": "openai"}} - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), import_mode=ImportMode.YAML_CONTENT, yaml_content=_yaml_dump(data) - ) - assert result.status == ImportStatus.COMPLETED_WITH_WARNINGS - draft_var_service.delete_app_workflow_variables.assert_called_once_with(app_id="app-legacy") - - -def test_import_app_yaml_error_returns_failed(monkeypatch): - def bad_safe_load(_content: str): - raise yaml.YAMLError("bad") - - monkeypatch.setattr(app_dsl_service.yaml, "safe_load", bad_safe_load) - - service = AppDslService(MagicMock()) - result = service.import_app(account=_account_mock(), import_mode=ImportMode.YAML_CONTENT, yaml_content="x: y") - assert result.status == ImportStatus.FAILED - assert result.error.startswith("Invalid YAML format:") - - -def test_import_app_unexpected_error_returns_failed(monkeypatch): - monkeypatch.setattr( - AppDslService, "_create_or_update_app", lambda *_args, **_kwargs: (_ for _ in ()).throw(ValueError("oops")) - ) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), import_mode=ImportMode.YAML_CONTENT, yaml_content=_workflow_yaml() - ) - assert result.status == ImportStatus.FAILED - assert result.error == "oops" - - -def test_confirm_import_expired_returns_failed(): - service = AppDslService(MagicMock()) - result = service.confirm_import(import_id="import-1", account=_account_mock()) - assert result.status == ImportStatus.FAILED - assert "expired" in result.error - - -def test_confirm_import_invalid_pending_data_type_returns_failed(): - app_dsl_service.redis_client.get.return_value = 123 - service = AppDslService(MagicMock()) - result = service.confirm_import(import_id="import-1", account=_account_mock()) - assert result.status == ImportStatus.FAILED - assert "Invalid import information" in result.error - - -def test_confirm_import_success_deletes_redis_key(monkeypatch): - def fake_select(_model): - stmt = MagicMock() - stmt.where.return_value = stmt - return stmt - - monkeypatch.setattr(app_dsl_service, "select", fake_select) - - session = MagicMock() - session.scalar.return_value = None - service = AppDslService(session) - - pending = PendingData( - import_mode=ImportMode.YAML_CONTENT, - yaml_content=_workflow_yaml(), - name="name", - description="desc", - icon_type="emoji", - icon="🤖", - icon_background="#fff", - app_id=None, - ) - app_dsl_service.redis_client.get.return_value = pending.model_dump_json() - - created_app = _app_mock(id="confirmed-app", mode=AppMode.WORKFLOW.value, tenant_id="tenant-1") - monkeypatch.setattr(AppDslService, "_create_or_update_app", lambda *_args, **_kwargs: created_app) - - app_dsl_service.redis_client.delete.reset_mock() - result = service.confirm_import(import_id="import-1", account=_account_mock()) - assert result.status == ImportStatus.COMPLETED - assert result.app_id == "confirmed-app" - app_dsl_service.redis_client.delete.assert_called_once_with( - f"{app_dsl_service.IMPORT_INFO_REDIS_KEY_PREFIX}import-1" - ) - - -def test_confirm_import_exception_returns_failed(monkeypatch): - app_dsl_service.redis_client.get.return_value = "not-json" - monkeypatch.setattr( - PendingData, "model_validate_json", lambda *_args, **_kwargs: (_ for _ in ()).throw(ValueError("bad")) - ) - - service = AppDslService(MagicMock()) - result = service.confirm_import(import_id="import-1", account=_account_mock()) - assert result.status == ImportStatus.FAILED - assert result.error == "bad" - - -def test_check_dependencies_returns_empty_when_no_redis_data(): - service = AppDslService(MagicMock()) - result = service.check_dependencies(app_model=_app_mock(id="app-1", tenant_id="tenant-1")) - assert result.leaked_dependencies == [] - - -def test_check_dependencies_calls_analysis_service(monkeypatch): - pending = CheckDependenciesPendingData(dependencies=[], app_id="app-1").model_dump_json() - app_dsl_service.redis_client.get.return_value = pending - dep = app_dsl_service.PluginDependency.model_validate( - {"type": "package", "value": {"plugin_unique_identifier": "acme/foo", "version": "1.0.0"}} - ) - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "get_leaked_dependencies", - lambda *, tenant_id, dependencies: [dep], - ) - - service = AppDslService(MagicMock()) - result = service.check_dependencies(app_model=_app_mock(id="app-1", tenant_id="tenant-1")) - assert len(result.leaked_dependencies) == 1 - - -def test_create_or_update_app_missing_mode_raises(): - service = AppDslService(MagicMock()) - with pytest.raises(ValueError, match="loss app mode"): - service._create_or_update_app(app=None, data={"app": {}}, account=_account_mock()) - - -def test_create_or_update_app_existing_app_updates_fields(monkeypatch): - fixed_now = object() - monkeypatch.setattr(app_dsl_service, "naive_utc_now", lambda: fixed_now) - - workflow_service = MagicMock() - workflow_service.get_draft_workflow.return_value = None - monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) - monkeypatch.setattr( - app_dsl_service.variable_factory, - "build_environment_variable_from_mapping", - lambda _m: SimpleNamespace(kind="env"), - ) - monkeypatch.setattr( - app_dsl_service.variable_factory, - "build_conversation_variable_from_mapping", - lambda _m: SimpleNamespace(kind="conv"), - ) - - app = _app_mock( - id="app-1", - tenant_id="tenant-1", - mode=AppMode.WORKFLOW.value, - name="old", - description="old-desc", - icon_type=IconType.EMOJI, - icon="old-icon", - icon_background="#111111", - updated_by=None, - updated_at=None, - app_model_config=None, - ) - service = AppDslService(MagicMock()) - updated = service._create_or_update_app( - app=app, - data={ - "app": {"mode": AppMode.WORKFLOW.value, "name": "yaml-name", "icon_type": IconType.IMAGE, "icon": "X"}, - "workflow": {"graph": {"nodes": []}, "features": {}}, - }, - account=_account_mock(), - name="override-name", - description=None, - icon_background="#222222", - ) - assert updated is app - assert app.name == "override-name" - assert app.icon_type == IconType.IMAGE - assert app.icon == "X" - assert app.icon_background == "#222222" - assert app.updated_at is fixed_now - - -def test_create_or_update_app_new_app_requires_tenant(): - account = _account_mock() - account.current_tenant_id = None - service = AppDslService(MagicMock()) - with pytest.raises(ValueError, match="Current tenant is not set"): - service._create_or_update_app( - app=None, - data={"app": {"mode": AppMode.WORKFLOW.value, "name": "n"}}, - account=account, - ) - - -def test_create_or_update_app_creates_workflow_app_and_saves_dependencies(monkeypatch): - class DummyApp(SimpleNamespace): - pass - - monkeypatch.setattr(app_dsl_service, "App", DummyApp) - - sent: list[tuple[str, object]] = [] - monkeypatch.setattr(app_dsl_service.app_was_created, "send", lambda app, account: sent.append((app.id, account.id))) - - workflow_service = MagicMock() - workflow_service.get_draft_workflow.return_value = SimpleNamespace(unique_hash="uh") - monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) - - monkeypatch.setattr( - app_dsl_service.variable_factory, - "build_environment_variable_from_mapping", - lambda _m: SimpleNamespace(kind="env"), - ) - monkeypatch.setattr( - app_dsl_service.variable_factory, - "build_conversation_variable_from_mapping", - lambda _m: SimpleNamespace(kind="conv"), - ) - - monkeypatch.setattr( - AppDslService, "decrypt_dataset_id", lambda *_args, **_kwargs: "00000000-0000-0000-0000-000000000000" - ) - - session = MagicMock() - service = AppDslService(session) - deps = [ - app_dsl_service.PluginDependency.model_validate( - {"type": "package", "value": {"plugin_unique_identifier": "acme/foo", "version": "1.0.0"}} - ) - ] - data = { - "app": {"mode": AppMode.WORKFLOW.value, "name": "n"}, - "workflow": { - "environment_variables": [{"x": 1}], - "conversation_variables": [{"y": 2}], - "graph": { - "nodes": [ - {"data": {"type": BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL, "dataset_ids": ["enc-1", "enc-2"]}}, - ] - }, - "features": {}, - }, - } - - app = service._create_or_update_app(app=None, data=data, account=_account_mock(), dependencies=deps) - - assert app.tenant_id == "tenant-1" - assert sent == [(app.id, "account-1")] - app_dsl_service.redis_client.setex.assert_called() - workflow_service.sync_draft_workflow.assert_called_once() - - passed_graph = workflow_service.sync_draft_workflow.call_args.kwargs["graph"] - dataset_ids = passed_graph["nodes"][0]["data"]["dataset_ids"] - assert dataset_ids == ["00000000-0000-0000-0000-000000000000", "00000000-0000-0000-0000-000000000000"] - - -def test_create_or_update_app_workflow_missing_workflow_data_raises(): - service = AppDslService(MagicMock()) - with pytest.raises(ValueError, match="Missing workflow data"): - service._create_or_update_app( - app=_app_mock( - id="a", - tenant_id="t", - mode=AppMode.WORKFLOW.value, - name="n", - description="d", - icon_background="#fff", - app_model_config=None, - ), - data={"app": {"mode": AppMode.WORKFLOW.value}}, - account=_account_mock(), - ) - - -def test_create_or_update_app_chat_requires_model_config(): - service = AppDslService(MagicMock()) - with pytest.raises(ValueError, match="Missing model_config"): - service._create_or_update_app( - app=_app_mock( - id="a", - tenant_id="t", - mode=AppMode.CHAT.value, - name="n", - description="d", - icon_background="#fff", - app_model_config=None, - ), - data={"app": {"mode": AppMode.CHAT.value}}, - account=_account_mock(), - ) - - -def test_create_or_update_app_chat_creates_model_config_and_sends_event(monkeypatch): - class DummyModelConfig(SimpleNamespace): - def from_model_config_dict(self, _cfg: dict): - return self - - monkeypatch.setattr(app_dsl_service, "AppModelConfig", DummyModelConfig) - - sent: list[str] = [] - monkeypatch.setattr( - app_dsl_service.app_model_config_was_updated, "send", lambda app, app_model_config: sent.append(app.id) - ) - - session = MagicMock() - service = AppDslService(session) - - app = _app_mock( - id="app-1", - tenant_id="tenant-1", - mode=AppMode.CHAT.value, - name="n", - description="d", - icon_background="#fff", - app_model_config=None, - ) - service._create_or_update_app( - app=app, - data={"app": {"mode": AppMode.CHAT.value}, "model_config": {"model": {"provider": "openai"}}}, - account=_account_mock(), - ) - - assert app.app_model_config_id is not None - assert sent == ["app-1"] - session.add.assert_called() - - -def test_create_or_update_app_invalid_mode_raises(): - service = AppDslService(MagicMock()) - with pytest.raises(ValueError, match="Invalid app mode"): - service._create_or_update_app( - app=_app_mock( - id="a", - tenant_id="t", - mode=AppMode.RAG_PIPELINE.value, - name="n", - description="d", - icon_background="#fff", - app_model_config=None, - ), - data={"app": {"mode": AppMode.RAG_PIPELINE.value}}, - account=_account_mock(), - ) - - -def test_export_dsl_delegates_by_mode(monkeypatch): - workflow_calls: list[bool] = [] - model_calls: list[bool] = [] - monkeypatch.setattr(AppDslService, "_append_workflow_export_data", lambda **_kwargs: workflow_calls.append(True)) - monkeypatch.setattr( - AppDslService, "_append_model_config_export_data", lambda *_args, **_kwargs: model_calls.append(True) - ) - - workflow_app = _app_mock( - mode=AppMode.WORKFLOW.value, - tenant_id="tenant-1", - name="n", - icon="i", - icon_type="emoji", - icon_background="#fff", - description="d", - use_icon_as_answer_icon=False, - app_model_config=None, - ) - AppDslService.export_dsl(workflow_app) - assert workflow_calls == [True] - - chat_app = _app_mock( - mode=AppMode.CHAT.value, - tenant_id="tenant-1", - name="n", - icon="i", - icon_type="emoji", - icon_background="#fff", - description="d", - use_icon_as_answer_icon=False, - app_model_config=SimpleNamespace(to_dict=lambda: {"agent_mode": {"tools": []}}), - ) - AppDslService.export_dsl(chat_app) - assert model_calls == [True] - - -def test_export_dsl_preserves_icon_and_icon_type(monkeypatch): - monkeypatch.setattr(AppDslService, "_append_workflow_export_data", lambda **_kwargs: None) - - emoji_app = _app_mock( - mode=AppMode.WORKFLOW.value, - tenant_id="tenant-1", - name="Emoji App", - icon="🎨", - icon_type=IconType.EMOJI, - icon_background="#FF5733", - description="App with emoji icon", - use_icon_as_answer_icon=True, - app_model_config=None, - ) - yaml_output = AppDslService.export_dsl(emoji_app) - data = yaml.safe_load(yaml_output) - assert data["app"]["icon"] == "🎨" - assert data["app"]["icon_type"] == "emoji" - assert data["app"]["icon_background"] == "#FF5733" - - image_app = _app_mock( - mode=AppMode.WORKFLOW.value, - tenant_id="tenant-1", - name="Image App", - icon="https://example.com/icon.png", - icon_type=IconType.IMAGE, - icon_background="#FFEAD5", - description="App with image icon", - use_icon_as_answer_icon=False, - app_model_config=None, - ) - yaml_output = AppDslService.export_dsl(image_app) - data = yaml.safe_load(yaml_output) - assert data["app"]["icon"] == "https://example.com/icon.png" - assert data["app"]["icon_type"] == "image" - assert data["app"]["icon_background"] == "#FFEAD5" - - -def test_append_workflow_export_data_filters_and_overrides(monkeypatch): - workflow_dict = { - "graph": { - "nodes": [ - {"data": {"type": BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL, "dataset_ids": ["d1", "d2"]}}, - {"data": {"type": BuiltinNodeTypes.TOOL, "credential_id": "secret"}}, - { - "data": { - "type": BuiltinNodeTypes.AGENT, - "agent_parameters": {"tools": {"value": [{"credential_id": "secret"}]}}, - } - }, - {"data": {"type": TRIGGER_SCHEDULE_NODE_TYPE, "config": {"x": 1}}}, - {"data": {"type": TRIGGER_WEBHOOK_NODE_TYPE, "webhook_url": "x", "webhook_debug_url": "y"}}, - {"data": {"type": TRIGGER_PLUGIN_NODE_TYPE, "subscription_id": "s"}}, - ] - } - } - - workflow = SimpleNamespace(to_dict=lambda *, include_secret: workflow_dict) - workflow_service = MagicMock() - workflow_service.get_draft_workflow.return_value = workflow - monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) - - monkeypatch.setattr( - AppDslService, "encrypt_dataset_id", lambda *, dataset_id, tenant_id: f"enc:{tenant_id}:{dataset_id}" - ) - monkeypatch.setattr( - TriggerScheduleNode := app_dsl_service.TriggerScheduleNode, - "get_default_config", - lambda: {"config": {"default": True}}, - ) - monkeypatch.setattr(AppDslService, "_extract_dependencies_from_workflow", lambda *_args, **_kwargs: ["dep-1"]) - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "generate_dependencies", - lambda *, tenant_id, dependencies: [ - SimpleNamespace(model_dump=lambda: {"tenant": tenant_id, "dep": dependencies[0]}) - ], - ) - monkeypatch.setattr(app_dsl_service, "jsonable_encoder", lambda x: x) - - export_data: dict = {} - AppDslService._append_workflow_export_data( - export_data=export_data, - app_model=_app_mock(tenant_id="tenant-1"), - include_secret=False, - workflow_id=None, - ) - - nodes = export_data["workflow"]["graph"]["nodes"] - assert nodes[0]["data"]["dataset_ids"] == ["enc:tenant-1:d1", "enc:tenant-1:d2"] - assert "credential_id" not in nodes[1]["data"] - assert "credential_id" not in nodes[2]["data"]["agent_parameters"]["tools"]["value"][0] - assert nodes[3]["data"]["config"] == {"default": True} - assert nodes[4]["data"]["webhook_url"] == "" - assert nodes[4]["data"]["webhook_debug_url"] == "" - assert nodes[5]["data"]["subscription_id"] == "" - assert export_data["dependencies"] == [{"tenant": "tenant-1", "dep": "dep-1"}] - - -def test_append_workflow_export_data_missing_workflow_raises(monkeypatch): - workflow_service = MagicMock() - workflow_service.get_draft_workflow.return_value = None - monkeypatch.setattr(app_dsl_service, "WorkflowService", lambda: workflow_service) - - with pytest.raises(ValueError, match="Missing draft workflow configuration"): - AppDslService._append_workflow_export_data( - export_data={}, - app_model=_app_mock(tenant_id="tenant-1"), - include_secret=False, - workflow_id=None, - ) - - -def test_append_model_config_export_data_filters_credential_id(monkeypatch): - monkeypatch.setattr(AppDslService, "_extract_dependencies_from_model_config", lambda *_args, **_kwargs: ["dep-1"]) - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "generate_dependencies", - lambda *, tenant_id, dependencies: [ - SimpleNamespace(model_dump=lambda: {"tenant": tenant_id, "dep": dependencies[0]}) - ], - ) - monkeypatch.setattr(app_dsl_service, "jsonable_encoder", lambda x: x) - - app_model_config = SimpleNamespace(to_dict=lambda: {"agent_mode": {"tools": [{"credential_id": "secret"}]}}) - app_model = _app_mock(tenant_id="tenant-1", app_model_config=app_model_config) - export_data: dict = {} - - AppDslService._append_model_config_export_data(export_data, app_model) - assert export_data["model_config"]["agent_mode"]["tools"] == [{}] - assert export_data["dependencies"] == [{"tenant": "tenant-1", "dep": "dep-1"}] - - -def test_append_model_config_export_data_requires_app_config(): - with pytest.raises(ValueError, match="Missing app configuration"): - AppDslService._append_model_config_export_data({}, _app_mock(app_model_config=None)) - - -def test_extract_dependencies_from_workflow_graph_covers_all_node_types(monkeypatch): - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "analyze_tool_dependency", - lambda provider_id: f"tool:{provider_id}", - ) - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "analyze_model_provider_dependency", - lambda provider: f"model:{provider}", - ) - - monkeypatch.setattr(app_dsl_service.ToolNodeData, "model_validate", lambda _d: SimpleNamespace(provider_id="p1")) - monkeypatch.setattr( - app_dsl_service.LLMNodeData, "model_validate", lambda _d: SimpleNamespace(model=SimpleNamespace(provider="m1")) - ) - monkeypatch.setattr( - app_dsl_service.QuestionClassifierNodeData, - "model_validate", - lambda _d: SimpleNamespace(model=SimpleNamespace(provider="m2")), - ) - monkeypatch.setattr( - app_dsl_service.ParameterExtractorNodeData, - "model_validate", - lambda _d: SimpleNamespace(model=SimpleNamespace(provider="m3")), - ) - - def kr_validate(_d): - return SimpleNamespace( - retrieval_mode="multiple", - multiple_retrieval_config=SimpleNamespace( - reranking_mode="weighted_score", - weights=SimpleNamespace(vector_setting=SimpleNamespace(embedding_provider_name="m4")), - reranking_model=None, - ), - single_retrieval_config=None, - ) - - monkeypatch.setattr(app_dsl_service.KnowledgeRetrievalNodeData, "model_validate", kr_validate) - - graph = { - "nodes": [ - {"data": {"type": BuiltinNodeTypes.TOOL}}, - {"data": {"type": BuiltinNodeTypes.LLM}}, - {"data": {"type": BuiltinNodeTypes.QUESTION_CLASSIFIER}}, - {"data": {"type": BuiltinNodeTypes.PARAMETER_EXTRACTOR}}, - {"data": {"type": BuiltinNodeTypes.KNOWLEDGE_RETRIEVAL}}, - {"data": {"type": "unknown"}}, - ] - } - - deps = AppDslService._extract_dependencies_from_workflow_graph(graph) - assert deps == ["tool:p1", "model:m1", "model:m2", "model:m3", "model:m4"] - - -def test_extract_dependencies_from_workflow_graph_handles_exceptions(monkeypatch): - monkeypatch.setattr( - app_dsl_service.ToolNodeData, "model_validate", lambda _d: (_ for _ in ()).throw(ValueError("bad")) - ) - deps = AppDslService._extract_dependencies_from_workflow_graph( - {"nodes": [{"data": {"type": BuiltinNodeTypes.TOOL}}]} - ) - assert deps == [] - - -def test_extract_dependencies_from_model_config_parses_providers(monkeypatch): - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "analyze_model_provider_dependency", - lambda provider: f"model:{provider}", - ) - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "analyze_tool_dependency", - lambda provider_id: f"tool:{provider_id}", - ) - - deps = AppDslService._extract_dependencies_from_model_config( - { - "model": {"provider": "p1"}, - "dataset_configs": { - "datasets": {"datasets": [{"reranking_model": {"reranking_provider_name": {"provider": "p2"}}}]} - }, - "agent_mode": {"tools": [{"provider_id": "t1"}]}, - } - ) - assert deps == ["model:p1", "model:p2", "tool:t1"] - - -def test_extract_dependencies_from_model_config_handles_exceptions(monkeypatch): - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "analyze_model_provider_dependency", - lambda _p: (_ for _ in ()).throw(ValueError("bad")), - ) - deps = AppDslService._extract_dependencies_from_model_config({"model": {"provider": "p1"}}) - assert deps == [] - - -def test_get_leaked_dependencies_empty_returns_empty(): - assert AppDslService.get_leaked_dependencies("tenant-1", []) == [] - - -def test_get_leaked_dependencies_delegates(monkeypatch): - monkeypatch.setattr( - app_dsl_service.DependenciesAnalysisService, - "get_leaked_dependencies", - lambda *, tenant_id, dependencies: [SimpleNamespace(tenant_id=tenant_id, deps=dependencies)], - ) - res = AppDslService.get_leaked_dependencies("tenant-1", [SimpleNamespace(id="x")]) - assert len(res) == 1 - - -def test_encrypt_decrypt_dataset_id_respects_config(monkeypatch): - tenant_id = "tenant-1" - dataset_uuid = "00000000-0000-0000-0000-000000000000" - - monkeypatch.setattr(app_dsl_service.dify_config, "DSL_EXPORT_ENCRYPT_DATASET_ID", False) - assert AppDslService.encrypt_dataset_id(dataset_id=dataset_uuid, tenant_id=tenant_id) == dataset_uuid - - monkeypatch.setattr(app_dsl_service.dify_config, "DSL_EXPORT_ENCRYPT_DATASET_ID", True) - encrypted = AppDslService.encrypt_dataset_id(dataset_id=dataset_uuid, tenant_id=tenant_id) - assert encrypted != dataset_uuid - assert base64.b64decode(encrypted.encode()) - assert AppDslService.decrypt_dataset_id(encrypted_data=encrypted, tenant_id=tenant_id) == dataset_uuid - - -def test_decrypt_dataset_id_returns_plain_uuid_unchanged(): - value = "00000000-0000-0000-0000-000000000000" - assert AppDslService.decrypt_dataset_id(encrypted_data=value, tenant_id="tenant-1") == value - - -def test_decrypt_dataset_id_returns_none_on_invalid_data(monkeypatch): - monkeypatch.setattr(app_dsl_service.dify_config, "DSL_EXPORT_ENCRYPT_DATASET_ID", True) - assert AppDslService.decrypt_dataset_id(encrypted_data="not-base64", tenant_id="tenant-1") is None - - -def test_decrypt_dataset_id_returns_none_when_decrypted_is_not_uuid(monkeypatch): - monkeypatch.setattr(app_dsl_service.dify_config, "DSL_EXPORT_ENCRYPT_DATASET_ID", True) - encrypted = AppDslService.encrypt_dataset_id(dataset_id="not-a-uuid", tenant_id="tenant-1") - assert AppDslService.decrypt_dataset_id(encrypted_data=encrypted, tenant_id="tenant-1") is None - - -def test_is_valid_uuid_handles_bad_inputs(): - assert AppDslService._is_valid_uuid("00000000-0000-0000-0000-000000000000") is True - assert AppDslService._is_valid_uuid("nope") is False diff --git a/api/tests/unit_tests/services/test_app_dsl_service_import_yaml_url.py b/api/tests/unit_tests/services/test_app_dsl_service_import_yaml_url.py deleted file mode 100644 index 41c1d0ea2a..0000000000 --- a/api/tests/unit_tests/services/test_app_dsl_service_import_yaml_url.py +++ /dev/null @@ -1,71 +0,0 @@ -from unittest.mock import MagicMock - -import httpx - -from models import Account -from services import app_dsl_service -from services.app_dsl_service import AppDslService, ImportMode, ImportStatus - - -def _build_response(url: str, status_code: int, content: bytes = b"") -> httpx.Response: - request = httpx.Request("GET", url) - return httpx.Response(status_code=status_code, request=request, content=content) - - -def _pending_yaml_content(version: str = "99.0.0") -> bytes: - return (f'version: "{version}"\nkind: app\napp:\n name: Loop Test\n mode: workflow\n').encode() - - -def _account_mock() -> MagicMock: - account = MagicMock(spec=Account) - account.current_tenant_id = "tenant-1" - return account - - -def test_import_app_yaml_url_user_attachments_keeps_original_url(monkeypatch): - yaml_url = "https://github.com/user-attachments/files/24290802/loop-test.yml" - raw_url = "https://raw.githubusercontent.com/user-attachments/files/24290802/loop-test.yml" - yaml_bytes = _pending_yaml_content() - - def fake_get(url: str, **kwargs): - if url == raw_url: - return _build_response(url, status_code=404) - assert url == yaml_url - return _build_response(url, status_code=200, content=yaml_bytes) - - monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_URL, - yaml_url=yaml_url, - ) - - assert result.status == ImportStatus.PENDING - assert result.imported_dsl_version == "99.0.0" - - -def test_import_app_yaml_url_github_blob_rewrites_to_raw(monkeypatch): - yaml_url = "https://github.com/acme/repo/blob/main/app.yml" - raw_url = "https://raw.githubusercontent.com/acme/repo/main/app.yml" - yaml_bytes = _pending_yaml_content() - - requested_urls: list[str] = [] - - def fake_get(url: str, **kwargs): - requested_urls.append(url) - assert url == raw_url - return _build_response(url, status_code=200, content=yaml_bytes) - - monkeypatch.setattr(app_dsl_service.ssrf_proxy, "get", fake_get) - - service = AppDslService(MagicMock()) - result = service.import_app( - account=_account_mock(), - import_mode=ImportMode.YAML_URL, - yaml_url=yaml_url, - ) - - assert result.status == ImportStatus.PENDING - assert requested_urls == [raw_url] diff --git a/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py b/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py index e66d52f66b..30aa359b45 100644 --- a/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py +++ b/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py @@ -1,6 +1,7 @@ import json import uuid from collections import defaultdict, deque +from typing import Any import pytest @@ -60,7 +61,7 @@ class _FakeStreams: self._data: dict[str, list[tuple[str, dict]]] = defaultdict(list) self._seq: dict[str, int] = defaultdict(int) - def xadd(self, key: str, fields: dict, *, maxlen: int | None = None) -> str: + def xadd(self, key: str, fields: dict[str, Any], *, maxlen: int | None = None) -> str: # maxlen is accepted for API compatibility with redis-py; ignored in this test double self._seq[key] += 1 eid = f"{self._seq[key]}-0" @@ -71,7 +72,7 @@ class _FakeStreams: # no-op for tests return None - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): assert len(streams) == 1 key, last_id = next(iter(streams.items())) entries = self._data.get(key, []) diff --git a/api/tests/unit_tests/services/test_audio_service.py b/api/tests/unit_tests/services/test_audio_service.py index cede6671ce..83258fd1b7 100644 --- a/api/tests/unit_tests/services/test_audio_service.py +++ b/api/tests/unit_tests/services/test_audio_service.py @@ -53,6 +53,7 @@ Tests available voice retrieval: - text_to_speech: Enables TTS functionality """ +from typing import Any from unittest.mock import MagicMock, Mock, create_autospec, patch import pytest @@ -109,7 +110,7 @@ class AudioServiceTestDataFactory: return app @staticmethod - def create_workflow_mock(features_dict: dict | None = None, **kwargs) -> Mock: + def create_workflow_mock(features_dict: dict[str, Any] | None = None, **kwargs) -> Mock: """ Create a mock Workflow object. @@ -128,8 +129,8 @@ class AudioServiceTestDataFactory: @staticmethod def create_app_model_config_mock( - speech_to_text_dict: dict | None = None, - text_to_speech_dict: dict | None = None, + speech_to_text_dict: dict[str, Any] | None = None, + text_to_speech_dict: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ @@ -403,43 +404,6 @@ class TestAudioServiceTTS: voice="en-US-Neural", ) - @patch("services.audio_service.db.session", autospec=True) - @patch("services.audio_service.ModelManager.for_tenant", autospec=True) - def test_transcript_tts_with_message_id_success(self, mock_model_manager_class, mock_db_session, factory): - """Test successful TTS with message ID.""" - # Arrange - app_model_config = factory.create_app_model_config_mock( - text_to_speech_dict={"enabled": True, "voice": "en-US-Neural"} - ) - app = factory.create_app_mock( - mode=AppMode.CHAT, - app_model_config=app_model_config, - ) - - message = factory.create_message_mock( - message_id="550e8400-e29b-41d4-a716-446655440000", - answer="Message answer text", - ) - - # Mock database lookup - mock_db_session.get.return_value = message - - # Mock ModelManager - mock_model_manager = mock_model_manager_class.return_value - mock_model_instance = MagicMock() - mock_model_instance.invoke_tts.return_value = b"audio from message" - mock_model_manager.get_default_model_instance.return_value = mock_model_instance - - # Act - result = AudioService.transcript_tts( - app_model=app, - message_id="550e8400-e29b-41d4-a716-446655440000", - ) - - # Assert - assert result == b"audio from message" - mock_model_instance.invoke_tts.assert_called_once() - @patch("services.audio_service.ModelManager.for_tenant", autospec=True) def test_transcript_tts_with_default_voice(self, mock_model_manager_class, factory): """Test TTS uses default voice when none specified.""" @@ -544,62 +508,6 @@ class TestAudioServiceTTS: with pytest.raises(ValueError, match="Text is required"): AudioService.transcript_tts(app_model=app, text=None) - @patch("services.audio_service.db.session") - def test_transcript_tts_returns_none_for_invalid_message_id(self, mock_db_session, factory): - """Test that TTS returns None for invalid message ID format.""" - # Arrange - app = factory.create_app_mock() - - # Act - result = AudioService.transcript_tts( - app_model=app, - message_id="invalid-uuid", - ) - - # Assert - assert result is None - - @patch("services.audio_service.db.session") - def test_transcript_tts_returns_none_for_nonexistent_message(self, mock_db_session, factory): - """Test that TTS returns None when message doesn't exist.""" - # Arrange - app = factory.create_app_mock() - - # Mock database lookup returning None - mock_db_session.get.return_value = None - - # Act - result = AudioService.transcript_tts( - app_model=app, - message_id="550e8400-e29b-41d4-a716-446655440000", - ) - - # Assert - assert result is None - - @patch("services.audio_service.db.session") - def test_transcript_tts_returns_none_for_empty_message_answer(self, mock_db_session, factory): - """Test that TTS returns None when message answer is empty.""" - # Arrange - app = factory.create_app_mock() - - message = factory.create_message_mock( - answer="", - status=MessageStatus.NORMAL, - ) - - # Mock database lookup - mock_db_session.get.return_value = message - - # Act - result = AudioService.transcript_tts( - app_model=app, - message_id="550e8400-e29b-41d4-a716-446655440000", - ) - - # Assert - assert result is None - @patch("services.audio_service.ModelManager.for_tenant", autospec=True) def test_transcript_tts_raises_error_when_no_voices_available(self, mock_model_manager_class, factory): """Test that TTS raises error when no voices are available.""" diff --git a/api/tests/unit_tests/services/test_billing_service.py b/api/tests/unit_tests/services/test_billing_service.py index 34f718ba02..36592196c6 100644 --- a/api/tests/unit_tests/services/test_billing_service.py +++ b/api/tests/unit_tests/services/test_billing_service.py @@ -1275,42 +1275,6 @@ class TestBillingServiceEdgeCases: # Assert assert result["history_id"] == history_id - def test_is_tenant_owner_or_admin_editor_role_raises_error(self): - """Test tenant owner/admin check raises error for editor role.""" - # Arrange - current_user = MagicMock(spec=Account) - current_user.id = "account-123" - current_user.current_tenant_id = "tenant-456" - - mock_join = MagicMock(spec=TenantAccountJoin) - mock_join.role = TenantAccountRole.EDITOR # Editor is not privileged - - with patch("services.billing_service.db.session") as mock_session: - mock_session.scalar.return_value = mock_join - - # Act & Assert - with pytest.raises(ValueError) as exc_info: - BillingService.is_tenant_owner_or_admin(current_user) - assert "Only team owner or team admin can perform this action" in str(exc_info.value) - - def test_is_tenant_owner_or_admin_dataset_operator_raises_error(self): - """Test tenant owner/admin check raises error for dataset operator role.""" - # Arrange - current_user = MagicMock(spec=Account) - current_user.id = "account-123" - current_user.current_tenant_id = "tenant-456" - - mock_join = MagicMock(spec=TenantAccountJoin) - mock_join.role = TenantAccountRole.DATASET_OPERATOR # Dataset operator is not privileged - - with patch("services.billing_service.db.session") as mock_session: - mock_session.scalar.return_value = mock_join - - # Act & Assert - with pytest.raises(ValueError) as exc_info: - BillingService.is_tenant_owner_or_admin(current_user) - assert "Only team owner or team admin can perform this action" in str(exc_info.value) - class TestBillingServiceSubscriptionOperations: """Unit tests for subscription operations in BillingService. diff --git a/api/tests/unit_tests/services/test_clear_free_plan_tenant_expired_logs.py b/api/tests/unit_tests/services/test_clear_free_plan_tenant_expired_logs.py index f393a4b10b..1bbd214110 100644 --- a/api/tests/unit_tests/services/test_clear_free_plan_tenant_expired_logs.py +++ b/api/tests/unit_tests/services/test_clear_free_plan_tenant_expired_logs.py @@ -17,8 +17,7 @@ class TestClearFreePlanTenantExpiredLogs: def mock_session(self): """Create a mock database session.""" session = Mock(spec=Session) - session.query.return_value.filter.return_value.all.return_value = [] - session.query.return_value.filter.return_value.delete.return_value = 0 + session.scalars.return_value.all.return_value = [] return session @pytest.fixture @@ -54,18 +53,18 @@ class TestClearFreePlanTenantExpiredLogs: ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", []) # Should not call any database operations - mock_session.query.assert_not_called() + mock_session.scalars.assert_not_called() mock_storage.save.assert_not_called() def test_clear_message_related_tables_no_records_found(self, mock_session, sample_message_ids): """Test when no related records are found.""" with patch("services.clear_free_plan_tenant_expired_logs.storage") as mock_storage: - mock_session.query.return_value.where.return_value.all.return_value = [] + mock_session.scalars.return_value.all.return_value = [] ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", sample_message_ids) - # Should call query for each related table but find no records - assert mock_session.query.call_count > 0 + # Should call scalars for each related table but find no records + assert mock_session.scalars.call_count > 0 mock_storage.save.assert_not_called() def test_clear_message_related_tables_with_records_and_to_dict( @@ -73,7 +72,7 @@ class TestClearFreePlanTenantExpiredLogs: ): """Test when records are found and have to_dict method.""" with patch("services.clear_free_plan_tenant_expired_logs.storage") as mock_storage: - mock_session.query.return_value.where.return_value.all.return_value = sample_records + mock_session.scalars.return_value.all.return_value = sample_records ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", sample_message_ids) @@ -104,7 +103,7 @@ class TestClearFreePlanTenantExpiredLogs: records.append(record) # Mock records for first table only, empty for others - mock_session.query.return_value.where.return_value.all.side_effect = [ + mock_session.scalars.return_value.all.side_effect = [ records, [], [], @@ -126,13 +125,13 @@ class TestClearFreePlanTenantExpiredLogs: with patch("services.clear_free_plan_tenant_expired_logs.storage") as mock_storage: mock_storage.save.side_effect = Exception("Storage error") - mock_session.query.return_value.where.return_value.all.return_value = sample_records + mock_session.scalars.return_value.all.return_value = sample_records # Should not raise exception ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", sample_message_ids) # Should still delete records even if backup fails - assert mock_session.query.return_value.where.return_value.delete.called + assert mock_session.execute.called def test_clear_message_related_tables_serialization_error_continues(self, mock_session, sample_message_ids): """Test that method continues even when record serialization fails.""" @@ -141,23 +140,23 @@ class TestClearFreePlanTenantExpiredLogs: record.id = "record-1" record.to_dict.side_effect = Exception("Serialization error") - mock_session.query.return_value.where.return_value.all.return_value = [record] + mock_session.scalars.return_value.all.return_value = [record] # Should not raise exception ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", sample_message_ids) # Should still delete records even if serialization fails - assert mock_session.query.return_value.where.return_value.delete.called + assert mock_session.execute.called def test_clear_message_related_tables_deletion_called(self, mock_session, sample_message_ids, sample_records): """Test that deletion is called for found records.""" with patch("services.clear_free_plan_tenant_expired_logs.storage") as mock_storage: - mock_session.query.return_value.where.return_value.all.return_value = sample_records + mock_session.scalars.return_value.all.return_value = sample_records ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", sample_message_ids) - # Should call delete for each table that has records - assert mock_session.query.return_value.where.return_value.delete.called + # Should call execute(delete(...)) for each table that has records + assert mock_session.execute.called def test_clear_message_related_tables_all_serialization_fails_skips_backup_but_deletes( self, mock_session, sample_message_ids @@ -167,12 +166,12 @@ class TestClearFreePlanTenantExpiredLogs: record.to_dict.side_effect = Exception("Serialization error") with patch("services.clear_free_plan_tenant_expired_logs.storage") as mock_storage: - mock_session.query.return_value.where.return_value.all.return_value = [record] + mock_session.scalars.return_value.all.return_value = [record] ClearFreePlanTenantExpiredLogs._clear_message_related_tables(mock_session, "tenant-123", sample_message_ids) mock_storage.save.assert_not_called() - assert mock_session.query.return_value.where.return_value.delete.called + assert mock_session.execute.called class _ImmediateFuture: @@ -263,60 +262,39 @@ def test_process_tenant_processes_all_batches(monkeypatch: pytest.MonkeyPatch) - conv1 = SimpleNamespace(id="c1", to_dict=lambda: {"id": "c1"}) log1 = SimpleNamespace(id="l1", to_dict=lambda: {"id": "l1"}) - def make_query_with_batches(batches: list[list[object]]): - q = MagicMock() - q.where.return_value = q - q.limit.return_value = q - q.all.side_effect = batches - q.delete.return_value = 1 - return q - msg_session_1 = MagicMock() - msg_session_1.query.side_effect = lambda model: ( - make_query_with_batches([[msg1], []]) if model == service_module.Message else MagicMock() - ) - msg_session_1.commit.return_value = None + msg_session_1.scalars.return_value.all.return_value = [msg1] msg_session_2 = MagicMock() - msg_session_2.query.side_effect = lambda model: ( - make_query_with_batches([[]]) if model == service_module.Message else MagicMock() - ) - msg_session_2.commit.return_value = None + msg_session_2.scalars.return_value.all.return_value = [] conv_session_1 = MagicMock() - conv_session_1.query.side_effect = lambda model: ( - make_query_with_batches([[conv1], []]) if model == service_module.Conversation else MagicMock() - ) - conv_session_1.commit.return_value = None + conv_session_1.scalars.return_value.all.return_value = [conv1] conv_session_2 = MagicMock() - conv_session_2.query.side_effect = lambda model: ( - make_query_with_batches([[]]) if model == service_module.Conversation else MagicMock() - ) - conv_session_2.commit.return_value = None + conv_session_2.scalars.return_value.all.return_value = [] wal_session_1 = MagicMock() - wal_session_1.query.side_effect = lambda model: ( - make_query_with_batches([[log1], []]) if model == service_module.WorkflowAppLog else MagicMock() - ) - wal_session_1.commit.return_value = None + wal_session_1.scalars.return_value.all.return_value = [log1] wal_session_2 = MagicMock() - wal_session_2.query.side_effect = lambda model: ( - make_query_with_batches([[]]) if model == service_module.WorkflowAppLog else MagicMock() - ) - wal_session_2.commit.return_value = None + wal_session_2.scalars.return_value.all.return_value = [] session_wrappers = [ - _session_wrapper_for_no_autoflush(msg_session_1), - _session_wrapper_for_no_autoflush(msg_session_2), - _session_wrapper_for_no_autoflush(conv_session_1), - _session_wrapper_for_no_autoflush(conv_session_2), - _session_wrapper_for_no_autoflush(wal_session_1), - _session_wrapper_for_no_autoflush(wal_session_2), + _sessionmaker_wrapper_for_begin(msg_session_1), + _sessionmaker_wrapper_for_begin(msg_session_2), + _sessionmaker_wrapper_for_begin(conv_session_1), + _sessionmaker_wrapper_for_begin(conv_session_2), + _sessionmaker_wrapper_for_begin(wal_session_1), + _sessionmaker_wrapper_for_begin(wal_session_2), ] - monkeypatch.setattr(service_module, "Session", lambda _engine: session_wrappers.pop(0)) + def fake_sessionmaker(*args, **kwargs): + if kwargs.get("autoflush") is False: + return session_wrappers.pop(0) + return object() + + monkeypatch.setattr(service_module, "sessionmaker", fake_sessionmaker) def fake_select(*_args, **_kwargs): stmt = MagicMock() @@ -333,8 +311,6 @@ def test_process_tenant_processes_all_batches(monkeypatch: pytest.MonkeyPatch) - run_repo = MagicMock() run_repo.get_expired_runs_batch.side_effect = [[SimpleNamespace(id="wr-1", to_dict=lambda: {"id": "wr-1"})], []] run_repo.delete_runs_by_ids.return_value = 1 - - monkeypatch.setattr(service_module, "sessionmaker", lambda **_kwargs: object()) monkeypatch.setattr( service_module.DifyAPIRepositoryFactory, "create_api_workflow_node_execution_repository", @@ -358,9 +334,7 @@ def test_process_with_tenant_ids_filters_by_plan_and_logs_errors(monkeypatch: py # Total tenant count query count_session = MagicMock() - count_query = MagicMock() - count_query.count.return_value = 2 - count_session.query.return_value = count_query + count_session.scalar.return_value = 2 monkeypatch.setattr(service_module, "sessionmaker", lambda _engine: _sessionmaker_wrapper_for_begin(count_session)) @@ -425,32 +399,15 @@ def test_process_without_tenant_ids_batches_and_scales_interval(monkeypatch: pyt # Sessions used: # 1) total tenant count - # 2) per-batch tenant scan (count + tenant list) + # 2) per-batch tenant scan (interval counts + tenant list) total_session = MagicMock() - total_query = MagicMock() - total_query.count.return_value = 250 - total_session.query.return_value = total_query - - batch_session = MagicMock() - q1 = MagicMock() - q1.where.return_value = q1 - q1.count.return_value = 200 - q2 = MagicMock() - q2.where.return_value = q2 - q2.count.return_value = 200 - q3 = MagicMock() - q3.where.return_value = q3 - q3.count.return_value = 200 - q4 = MagicMock() - q4.where.return_value = q4 - q4.count.return_value = 50 # choose this interval, then scale it + total_session.scalar.return_value = 250 rows = [SimpleNamespace(id="tenant-a"), SimpleNamespace(id="tenant-b")] - q_rs = MagicMock() - q_rs.where.return_value = q_rs - q_rs.order_by.return_value = rows - - batch_session.query.side_effect = [q1, q2, q3, q4, q_rs] + batch_session = MagicMock() + # 4 test intervals queried: 200, 200, 200, 50 — breaks on 50 <= 100 (4th interval = 3h) + batch_session.scalar.side_effect = [200, 200, 200, 50] + batch_session.execute.return_value = rows sessions = [_sessionmaker_wrapper_for_begin(total_session), _sessionmaker_wrapper_for_begin(batch_session)] monkeypatch.setattr(service_module, "sessionmaker", lambda _engine: sessions.pop(0)) @@ -468,9 +425,7 @@ def test_process_with_tenant_ids_emits_progress_every_100(monkeypatch: pytest.Mo monkeypatch.setattr(service_module, "db", SimpleNamespace(engine=object())) count_session = MagicMock() - count_query = MagicMock() - count_query.count.return_value = 100 - count_session.query.return_value = count_query + count_session.scalar.return_value = 100 monkeypatch.setattr(service_module, "sessionmaker", lambda _engine: _sessionmaker_wrapper_for_begin(count_session)) flask_app = service_module.Flask("test-app") @@ -517,25 +472,13 @@ def test_process_without_tenant_ids_all_intervals_too_many_uses_min_interval(mon monkeypatch.setattr(service_module.click, "echo", lambda *_args, **_kwargs: None) total_session = MagicMock() - total_query = MagicMock() - total_query.count.return_value = 250 - total_session.query.return_value = total_query - - batch_session = MagicMock() - # Count results for all 5 intervals, all > 100 => take the for-else path. - count_queries = [] - for _ in range(5): - q = MagicMock() - q.where.return_value = q - q.count.return_value = 200 - count_queries.append(q) + total_session.scalar.return_value = 250 rows = [SimpleNamespace(id="tenant-a")] - q_rs = MagicMock() - q_rs.where.return_value = q_rs - q_rs.order_by.return_value = rows - - batch_session.query.side_effect = [*count_queries, q_rs] + batch_session = MagicMock() + # All 5 intervals have > 100 tenants => for-else falls through to min interval (1h) + batch_session.scalar.side_effect = [200, 200, 200, 200, 200] + batch_session.execute.return_value = rows sessions = [_sessionmaker_wrapper_for_begin(total_session), _sessionmaker_wrapper_for_begin(batch_session)] monkeypatch.setattr(service_module, "sessionmaker", lambda _engine: sessions.pop(0)) @@ -546,8 +489,7 @@ def test_process_without_tenant_ids_all_intervals_too_many_uses_min_interval(mon ClearFreePlanTenantExpiredLogs.process(days=7, batch=10, tenant_ids=[]) assert process_tenant_mock.call_count == 1 - assert len(count_queries) == 5 - assert batch_session.query.call_count >= 6 + assert batch_session.scalar.call_count == 5 def test_process_tenant_repo_loops_break_on_empty_second_batch(monkeypatch: pytest.MonkeyPatch) -> None: @@ -569,18 +511,19 @@ def test_process_tenant_repo_loops_break_on_empty_second_batch(monkeypatch: pyte # Make message/conversation/workflow_app_log loops no-op (empty immediately) empty_session = MagicMock() - q_empty = MagicMock() - q_empty.where.return_value = q_empty - q_empty.limit.return_value = q_empty - q_empty.all.return_value = [] - empty_session.query.return_value = q_empty - empty_session.commit.return_value = None + empty_session.scalars.return_value.all.return_value = [] session_wrappers = [ - _session_wrapper_for_no_autoflush(empty_session), - _session_wrapper_for_no_autoflush(empty_session), - _session_wrapper_for_no_autoflush(empty_session), + _sessionmaker_wrapper_for_begin(empty_session), + _sessionmaker_wrapper_for_begin(empty_session), + _sessionmaker_wrapper_for_begin(empty_session), ] - monkeypatch.setattr(service_module, "Session", lambda _engine: session_wrappers.pop(0)) + + def fake_sessionmaker(*args, **kwargs): + if kwargs.get("autoflush") is False: + return session_wrappers.pop(0) + return object() + + monkeypatch.setattr(service_module, "sessionmaker", fake_sessionmaker) def fake_select(*_args, **_kwargs): stmt = MagicMock() @@ -606,8 +549,6 @@ def test_process_tenant_repo_loops_break_on_empty_second_batch(monkeypatch: pyte [], ] run_repo.delete_runs_by_ids.return_value = 2 - - monkeypatch.setattr(service_module, "sessionmaker", lambda **_kwargs: object()) monkeypatch.setattr( service_module.DifyAPIRepositoryFactory, "create_api_workflow_node_execution_repository", diff --git a/api/tests/unit_tests/services/test_conversation_service.py b/api/tests/unit_tests/services/test_conversation_service.py index a4359f00b8..2c7f13b79f 100644 --- a/api/tests/unit_tests/services/test_conversation_service.py +++ b/api/tests/unit_tests/services/test_conversation_service.py @@ -6,26 +6,15 @@ Tests are organized by functionality and include edge cases, error handling, and both positive and negative test scenarios. """ -from datetime import timedelta from unittest.mock import MagicMock, Mock, create_autospec, patch -import pytest from sqlalchemy import asc, desc from core.app.entities.app_invoke_entities import InvokeFrom from libs.datetime_utils import naive_utc_now -from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account, ConversationVariable -from models.enums import ConversationFromSource from models.model import App, Conversation, EndUser, Message from services.conversation_service import ConversationService -from services.errors.conversation import ( - ConversationNotExistsError, - ConversationVariableNotExistsError, - ConversationVariableTypeMismatchError, - LastConversationNotExistsError, -) -from services.errors.message import MessageNotExistsError class ConversationServiceTestDataFactory: @@ -338,383 +327,9 @@ class TestConversationServiceHelpers: assert condition is not None -class TestConversationServiceGetConversation: - """Test conversation retrieval operations.""" - - @patch("services.conversation_service.db.session") - def test_get_conversation_success_with_account(self, mock_db_session): - """ - Test successful conversation retrieval with account user. - - Should return conversation when found with proper filters. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_account_id=user.id, from_source=ConversationFromSource.CONSOLE - ) - - mock_db_session.scalar.return_value = conversation - - # Act - result = ConversationService.get_conversation(app_model, "conv-123", user) - - # Assert - assert result == conversation - - @patch("services.conversation_service.db.session") - def test_get_conversation_success_with_end_user(self, mock_db_session): - """ - Test successful conversation retrieval with end user. - - Should return conversation when found with proper filters for API user. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_end_user_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_end_user_id=user.id, from_source=ConversationFromSource.API - ) - - mock_db_session.scalar.return_value = conversation - - # Act - result = ConversationService.get_conversation(app_model, "conv-123", user) - - # Assert - assert result == conversation - - @patch("services.conversation_service.db.session") - def test_get_conversation_not_found_raises_error(self, mock_db_session): - """ - Test that get_conversation raises error when conversation not found. - - Should raise ConversationNotExistsError when no matching conversation found. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - mock_db_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ConversationNotExistsError): - ConversationService.get_conversation(app_model, "conv-123", user) - - -class TestConversationServiceRename: - """Test conversation rename operations.""" - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_rename_with_manual_name(self, mock_get_conversation, mock_db_session): - """ - Test renaming conversation with manual name. - - Should update conversation name and timestamp when auto_generate is False. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Act - result = ConversationService.rename( - app_model=app_model, - conversation_id="conv-123", - user=user, - name="New Name", - auto_generate=False, - ) - - # Assert - assert result == conversation - assert conversation.name == "New Name" - mock_db_session.commit.assert_called_once() - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.ConversationService.get_conversation") - @patch("services.conversation_service.ConversationService.auto_generate_name") - def test_rename_with_auto_generate(self, mock_auto_generate, mock_get_conversation, mock_db_session): - """ - Test renaming conversation with auto-generation. - - Should call auto_generate_name when auto_generate is True. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - mock_auto_generate.return_value = conversation - - # Act - result = ConversationService.rename( - app_model=app_model, - conversation_id="conv-123", - user=user, - name=None, - auto_generate=True, - ) - - # Assert - assert result == conversation - mock_auto_generate.assert_called_once_with(app_model, conversation) - - -class TestConversationServiceAutoGenerateName: - """Test conversation auto-name generation operations.""" - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.LLMGenerator") - def test_auto_generate_name_success(self, mock_llm_generator, mock_db_session): - """ - Test successful auto-generation of conversation name. - - Should generate name using LLMGenerator and update conversation. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - message = ConversationServiceTestDataFactory.create_message_mock( - conversation_id=conversation.id, app_id=app_model.id - ) - - # Mock database query to return message - mock_db_session.scalar.return_value = message - - # Mock LLM generator - mock_llm_generator.generate_conversation_name.return_value = "Generated Name" - - # Act - result = ConversationService.auto_generate_name(app_model, conversation) - - # Assert - assert result == conversation - assert conversation.name == "Generated Name" - mock_llm_generator.generate_conversation_name.assert_called_once_with( - app_model.tenant_id, message.query, conversation.id, app_model.id - ) - mock_db_session.commit.assert_called_once() - - @patch("services.conversation_service.db.session") - def test_auto_generate_name_no_message_raises_error(self, mock_db_session): - """ - Test auto-generation fails when no message found. - - Should raise MessageNotExistsError when conversation has no messages. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - # Mock database query to return None - mock_db_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(MessageNotExistsError): - ConversationService.auto_generate_name(app_model, conversation) - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.LLMGenerator") - def test_auto_generate_name_handles_llm_exception(self, mock_llm_generator, mock_db_session): - """ - Test auto-generation handles LLM generator exceptions gracefully. - - Should continue without name when LLMGenerator fails. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - message = ConversationServiceTestDataFactory.create_message_mock( - conversation_id=conversation.id, app_id=app_model.id - ) - - # Mock database query to return message - mock_db_session.scalar.return_value = message - - # Mock LLM generator to raise exception - mock_llm_generator.generate_conversation_name.side_effect = Exception("LLM Error") - - # Act - result = ConversationService.auto_generate_name(app_model, conversation) - - # Assert - assert result == conversation - # Name should remain unchanged due to exception - mock_db_session.commit.assert_called_once() - - -class TestConversationServiceDelete: - """Test conversation deletion operations.""" - - @patch("services.conversation_service.delete_conversation_related_data") - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_delete_success(self, mock_get_conversation, mock_db_session, mock_delete_task): - """ - Test successful conversation deletion. - - Should delete conversation and schedule cleanup task. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock(name="Test App") - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Act - ConversationService.delete(app_model, "conv-123", user) - - # Assert - mock_db_session.delete.assert_called_once_with(conversation) - mock_db_session.commit.assert_called_once() - mock_delete_task.delay.assert_called_once_with(conversation.id) - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_delete_handles_exception_and_rollback(self, mock_get_conversation, mock_db_session): - """ - Test deletion handles exceptions and rolls back transaction. - - Should rollback database changes when deletion fails. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - mock_db_session.delete.side_effect = Exception("Database Error") - - # Act & Assert - with pytest.raises(Exception, match="Database Error"): - ConversationService.delete(app_model, "conv-123", user) - - # Assert rollback was called - mock_db_session.rollback.assert_called_once() - - class TestConversationServiceConversationalVariable: """Test conversational variable operations.""" - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_get_conversational_variable_success(self, mock_get_conversation, mock_session_factory): - """ - Test successful retrieval of conversational variables. - - Should return paginated list of variables for conversation. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and variables - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - variable1 = ConversationServiceTestDataFactory.create_conversation_variable_mock() - variable2 = ConversationServiceTestDataFactory.create_conversation_variable_mock(variable_id="var-456") - - mock_session.scalars.return_value.all.return_value = [variable1, variable2] - - # Act - result = ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id=None, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert len(result.data) == 2 - assert result.limit == 10 - assert result.has_more is False - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_get_conversational_variable_with_last_id(self, mock_get_conversation, mock_session_factory): - """ - Test retrieval of variables with last_id pagination. - - Should filter variables created after last_id. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and variables - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - last_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock( - created_at=naive_utc_now() - timedelta(hours=1) - ) - variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(created_at=naive_utc_now()) - - mock_session.scalar.return_value = last_variable - mock_session.scalars.return_value.all.return_value = [variable] - - # Act - result = ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id="var-123", - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert len(result.data) == 1 - assert result.limit == 10 - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_get_conversational_variable_last_id_not_found_raises_error( - self, mock_get_conversation, mock_session_factory - ): - """ - Test that invalid last_id raises ConversationVariableNotExistsError. - - Should raise error when last_id doesn't exist. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ConversationVariableNotExistsError): - ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id="invalid-id", - ) - @patch("services.conversation_service.session_factory") @patch("services.conversation_service.ConversationService.get_conversation") @patch("services.conversation_service.dify_config") @@ -751,466 +366,3 @@ class TestConversationServiceConversationalVariable: # Assert - JSON filter should be applied assert mock_session.scalars.called - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - @patch("services.conversation_service.dify_config") - def test_get_conversational_variable_with_name_filter_postgresql( - self, mock_config, mock_get_conversation, mock_session_factory - ): - """ - Test variable filtering by name for PostgreSQL databases. - - Should apply JSON extraction filter for variable names. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - mock_config.DB_TYPE = "postgresql" - - # Mock session - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalars.return_value.all.return_value = [] - - # Act - ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id=None, - variable_name="test_var", - ) - - # Assert - JSON filter should be applied - assert mock_session.scalars.called - - -class TestConversationServiceUpdateVariable: - """Test conversation variable update operations.""" - - @patch("services.conversation_service.variable_factory") - @patch("services.conversation_service.ConversationVariableUpdater") - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_success( - self, mock_get_conversation, mock_session_factory, mock_updater_class, mock_variable_factory - ): - """ - Test successful update of conversation variable. - - Should update variable value and return updated data. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and existing variable - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - existing_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(value_type="string") - mock_session.scalar.return_value = existing_variable - - # Mock variable factory and updater - updated_variable = Mock() - updated_variable.model_dump.return_value = {"id": "var-123", "name": "test_var", "value": "new_value"} - mock_variable_factory.build_conversation_variable_from_mapping.return_value = updated_variable - - mock_updater = MagicMock() - mock_updater_class.return_value = mock_updater - - # Act - result = ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="var-123", - user=user, - new_value="new_value", - ) - - # Assert - assert result["id"] == "var-123" - assert result["value"] == "new_value" - mock_updater.update.assert_called_once_with("conv-123", updated_variable) - mock_updater.flush.assert_called_once() - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_not_found_raises_error(self, mock_get_conversation, mock_session_factory): - """ - Test update fails when variable doesn't exist. - - Should raise ConversationVariableNotExistsError. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ConversationVariableNotExistsError): - ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="invalid-id", - user=user, - new_value="new_value", - ) - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_type_mismatch_raises_error(self, mock_get_conversation, mock_session_factory): - """ - Test update fails when value type doesn't match expected type. - - Should raise ConversationVariableTypeMismatchError. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and existing variable - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - existing_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(value_type="number") - mock_session.scalar.return_value = existing_variable - - # Act & Assert - Try to set string value for number variable - with pytest.raises(ConversationVariableTypeMismatchError): - ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="var-123", - user=user, - new_value="string_value", # Wrong type - ) - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_integer_number_compatibility( - self, mock_get_conversation, mock_session_factory - ): - """ - Test that integer type accepts number values. - - Should allow number values for integer type variables. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and existing variable - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - existing_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(value_type="integer") - mock_session.scalar.return_value = existing_variable - - # Mock variable factory and updater - updated_variable = Mock() - updated_variable.model_dump.return_value = {"id": "var-123", "name": "test_var", "value": 42} - - with ( - patch("services.conversation_service.variable_factory") as mock_variable_factory, - patch("services.conversation_service.ConversationVariableUpdater") as mock_updater_class, - ): - mock_variable_factory.build_conversation_variable_from_mapping.return_value = updated_variable - mock_updater = MagicMock() - mock_updater_class.return_value = mock_updater - - # Act - result = ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="var-123", - user=user, - new_value=42, # Number value for integer type - ) - - # Assert - assert result["value"] == 42 - mock_updater.update.assert_called_once() - - -class TestConversationServicePaginationAdvanced: - """Advanced pagination tests for ConversationService.""" - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_with_last_id_not_found(self, mock_session_factory): - """ - Test pagination with invalid last_id raises error. - - Should raise LastConversationNotExistsError when last_id doesn't exist. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalar.return_value = None - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act & Assert - with pytest.raises(LastConversationNotExistsError): - ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id="invalid-id", - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_with_exclude_ids(self, mock_session_factory): - """ - Test pagination with exclude_ids filter. - - Should exclude specified conversation IDs from results. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - mock_session.scalars.return_value.all.return_value = [conversation] - mock_session.scalar.return_value = conversation - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - exclude_ids=["excluded-123"], - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert len(result.data) == 1 - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_has_more_detection(self, mock_session_factory): - """ - Test pagination has_more detection logic. - - Should set has_more=True when there are more results beyond limit. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - # Return exactly limit items to trigger has_more check - conversations = [ - ConversationServiceTestDataFactory.create_conversation_mock(conversation_id=f"conv-{i}") for i in range(20) - ] - mock_session.scalars.return_value.all.return_value = conversations - mock_session.scalar.return_value = conversations[-1] - - # Mock count query to return > 0 - mock_session.scalar.return_value = 5 # Additional items exist - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert result.has_more is True - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_with_different_sort_by(self, mock_session_factory): - """ - Test pagination with different sort fields. - - Should handle various sort_by parameters correctly. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - mock_session.scalars.return_value.all.return_value = [conversation] - mock_session.scalar.return_value = conversation - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Test different sort fields - sort_fields = ["created_at", "-updated_at", "name", "-status"] - - for sort_by in sort_fields: - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - sort_by=sort_by, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - - -class TestConversationServiceEdgeCases: - """Test edge cases and error scenarios.""" - - @patch("services.conversation_service.session_factory") - def test_pagination_with_end_user_api_source(self, mock_session_factory): - """ - Test pagination correctly handles EndUser with API source. - - Should use 'api' as from_source for EndUser instances. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_source=ConversationFromSource.API, from_end_user_id="user-123" - ) - mock_session.scalars.return_value.all.return_value = [conversation] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_end_user_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - - @patch("services.conversation_service.session_factory") - def test_pagination_with_account_console_source(self, mock_session_factory): - """ - Test pagination correctly handles Account with console source. - - Should use 'console' as from_source for Account instances. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_source=ConversationFromSource.CONSOLE, from_account_id="account-123" - ) - mock_session.scalars.return_value.all.return_value = [conversation] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - - def test_pagination_with_include_ids_filter(self): - """ - Test pagination with include_ids filter. - - Should only return conversations with IDs in include_ids list. - """ - # Arrange - mock_session = MagicMock() - mock_session.scalars.return_value.all.return_value = [] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - include_ids=["conv-123", "conv-456"], - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - # Verify that include_ids filter was applied - assert mock_session.scalars.called - - def test_pagination_with_empty_exclude_ids(self): - """ - Test pagination with empty exclude_ids list. - - Should handle empty exclude_ids gracefully. - """ - # Arrange - mock_session = MagicMock() - mock_session.scalars.return_value.all.return_value = [] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - exclude_ids=[], - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert result.has_more is False diff --git a/api/tests/unit_tests/services/test_dataset_service_dataset.py b/api/tests/unit_tests/services/test_dataset_service_dataset.py index b2c40763ea..3d08b6fd09 100644 --- a/api/tests/unit_tests/services/test_dataset_service_dataset.py +++ b/api/tests/unit_tests/services/test_dataset_service_dataset.py @@ -1,29 +1,20 @@ """Unit tests for DatasetService and dataset-related collaborators.""" from .dataset_service_test_helpers import ( - CloudPlan, - Dataset, - DatasetCollectionBindingService, DatasetNameDuplicateError, DatasetPermissionEnum, DatasetPermissionService, - DatasetProcessRule, DatasetService, DatasetServiceUnitDataFactory, - DocumentIndexingError, - DocumentService, LLMBadRequestError, MagicMock, - Mock, ModelFeature, ModelType, NoPermissionError, - NotFound, PipelineIconInfo, ProviderTokenNotInitError, RagPipelineDatasetCreateEntity, SimpleNamespace, - TenantAccountRole, _make_knowledge_configuration, _make_retrieval_model, _make_session_context, @@ -33,127 +24,6 @@ from .dataset_service_test_helpers import ( ) -class TestDatasetServiceQueries: - """Unit tests for DatasetService query composition and fallback branches.""" - - @pytest.fixture - def mock_dataset_query_dependencies(self): - with ( - patch("services.dataset_service.db") as mock_db, - patch("services.dataset_service.helper.escape_like_pattern", return_value="escaped-search") as escape_like, - patch("services.dataset_service.TagService.get_target_ids_by_tag_ids") as get_target_ids, - ): - mock_db.paginate.return_value = SimpleNamespace(items=["dataset"], total=1) - yield { - "db": mock_db, - "escape_like_pattern": escape_like, - "get_target_ids": get_target_ids, - } - - def test_get_datasets_returns_paginated_results_for_public_view(self, mock_dataset_query_dependencies): - items, total = DatasetService.get_datasets(page=1, per_page=20, tenant_id="tenant-1") - - assert items == ["dataset"] - assert total == 1 - mock_dataset_query_dependencies["db"].paginate.assert_called_once() - mock_dataset_query_dependencies["escape_like_pattern"].assert_not_called() - - def test_get_datasets_short_circuits_for_dataset_operator_without_permissions( - self, mock_dataset_query_dependencies - ): - user = DatasetServiceUnitDataFactory.create_user_mock(role=TenantAccountRole.DATASET_OPERATOR) - mock_dataset_query_dependencies["db"].session.scalars.return_value.all.return_value = [] - - items, total = DatasetService.get_datasets(page=1, per_page=20, tenant_id="tenant-1", user=user) - - assert items == [] - assert total == 0 - mock_dataset_query_dependencies["db"].paginate.assert_not_called() - - def test_get_datasets_short_circuits_when_tag_lookup_returns_no_target_ids(self, mock_dataset_query_dependencies): - mock_dataset_query_dependencies["get_target_ids"].return_value = [] - - items, total = DatasetService.get_datasets( - page=1, - per_page=20, - tenant_id="tenant-1", - tag_ids=["tag-1"], - ) - - assert items == [] - assert total == 0 - mock_dataset_query_dependencies["get_target_ids"].assert_called_once_with("knowledge", "tenant-1", ["tag-1"]) - mock_dataset_query_dependencies["db"].paginate.assert_not_called() - - def test_get_datasets_search_and_tag_filters_call_collaborators(self, mock_dataset_query_dependencies): - mock_dataset_query_dependencies["get_target_ids"].return_value = ["dataset-1"] - - items, total = DatasetService.get_datasets( - page=2, - per_page=10, - tenant_id="tenant-1", - search="report", - tag_ids=["tag-1"], - ) - - assert items == ["dataset"] - assert total == 1 - mock_dataset_query_dependencies["escape_like_pattern"].assert_called_once_with("report") - mock_dataset_query_dependencies["get_target_ids"].assert_called_once_with("knowledge", "tenant-1", ["tag-1"]) - mock_dataset_query_dependencies["db"].paginate.assert_called_once() - - def test_get_process_rules_returns_latest_rule_when_present(self): - dataset_process_rule = Mock(spec=DatasetProcessRule) - dataset_process_rule.mode = "automatic" - dataset_process_rule.rules_dict = {"delimiter": "\n"} - - with patch("services.dataset_service.db") as mock_db: - (mock_db.session.execute.return_value.scalar_one_or_none.return_value) = dataset_process_rule - - result = DatasetService.get_process_rules("dataset-1") - - assert result == {"mode": "automatic", "rules": {"delimiter": "\n"}} - - def test_get_process_rules_falls_back_to_default_rules_when_missing(self): - with patch("services.dataset_service.db") as mock_db: - (mock_db.session.execute.return_value.scalar_one_or_none.return_value) = None - - result = DatasetService.get_process_rules("dataset-1") - - assert result == { - "mode": DocumentService.DEFAULT_RULES["mode"], - "rules": DocumentService.DEFAULT_RULES["rules"], - } - - def test_get_datasets_by_ids_returns_empty_for_missing_ids(self): - with patch("services.dataset_service.db") as mock_db: - items, total = DatasetService.get_datasets_by_ids([], "tenant-1") - - assert items == [] - assert total == 0 - mock_db.paginate.assert_not_called() - - def test_get_datasets_by_ids_uses_paginate_for_non_empty_input(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.paginate.return_value = SimpleNamespace(items=["dataset-1"], total=1) - - items, total = DatasetService.get_datasets_by_ids(["dataset-1"], "tenant-1") - - assert items == ["dataset-1"] - assert total == 1 - mock_db.paginate.assert_called_once() - - def test_get_dataset_returns_first_match(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock() - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.get.return_value = dataset - - result = DatasetService.get_dataset(dataset.id) - - assert result is dataset - - class TestDatasetServiceValidation: """Unit tests for DatasetService validation helpers.""" @@ -532,6 +402,9 @@ class TestDatasetServiceCreationAndUpdate: with ( patch.object(DatasetService, "_update_external_knowledge_binding") as update_binding, + patch( + "services.dataset_service.ExternalDatasetService.get_external_knowledge_api", return_value=object() + ) as get_external_knowledge_api, patch("services.dataset_service.naive_utc_now", return_value=now), patch("services.dataset_service.db") as mock_db, ): @@ -557,6 +430,7 @@ class TestDatasetServiceCreationAndUpdate: assert dataset.permission == DatasetPermissionEnum.PARTIAL_TEAM assert dataset.updated_by == "user-1" assert dataset.updated_at is now + get_external_knowledge_api.assert_called_once_with("api-1", dataset.tenant_id) update_binding.assert_called_once_with("dataset-1", "knowledge-1", "api-1") mock_db.session.add.assert_called_once_with(dataset) mock_db.session.commit.assert_called_once() @@ -574,10 +448,35 @@ class TestDatasetServiceCreationAndUpdate: with pytest.raises(ValueError, match=message): DatasetService._update_external_dataset(dataset, payload, SimpleNamespace(id="user-1")) + def test_update_external_dataset_rejects_cross_tenant_external_api_id(self): + dataset = DatasetServiceUnitDataFactory.create_dataset_mock(dataset_id="dataset-1") + + with ( + patch( + "services.dataset_service.ExternalDatasetService.get_external_knowledge_api", + side_effect=ValueError("api template not found"), + ) as get_external_knowledge_api, + patch.object(DatasetService, "_update_external_knowledge_binding") as update_binding, + patch("services.dataset_service.db") as mock_db, + ): + with pytest.raises(ValueError, match="api template not found"): + DatasetService._update_external_dataset( + dataset, + { + "external_knowledge_id": "knowledge-1", + "external_knowledge_api_id": "foreign-api", + }, + SimpleNamespace(id="user-1"), + ) + + get_external_knowledge_api.assert_called_once_with("foreign-api", dataset.tenant_id) + update_binding.assert_not_called() + mock_db.session.commit.assert_not_called() + def test_update_external_knowledge_binding_updates_changed_binding_values(self): binding = SimpleNamespace(external_knowledge_id="old-knowledge", external_knowledge_api_id="old-api") session = MagicMock() - session.query.return_value.filter_by.return_value.first.return_value = binding + session.scalar.return_value = binding session.add = MagicMock() session_context = _make_session_context(session) @@ -596,7 +495,7 @@ class TestDatasetServiceCreationAndUpdate: def test_update_external_knowledge_binding_raises_for_missing_binding(self): session = MagicMock() - session.query.return_value.filter_by.return_value.first.return_value = None + session.scalar.return_value = None session_context = _make_session_context(session) mock_sessionmaker = MagicMock() @@ -1308,103 +1207,6 @@ class TestDatasetServiceRagPipelineSettings: class TestDatasetServicePermissionsAndLifecycle: """Unit tests for dataset permissions, deletion, and metadata helpers.""" - def test_delete_dataset_returns_false_when_dataset_is_missing(self): - with patch.object(DatasetService, "get_dataset", return_value=None): - result = DatasetService.delete_dataset("dataset-1", user=SimpleNamespace(id="user-1")) - - assert result is False - - def test_delete_dataset_checks_permission_and_deletes_dataset(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock() - - with ( - patch.object(DatasetService, "get_dataset", return_value=dataset), - patch.object(DatasetService, "check_dataset_permission") as check_permission, - patch("services.dataset_service.dataset_was_deleted.send") as send_deleted_signal, - patch("services.dataset_service.db") as mock_db, - ): - result = DatasetService.delete_dataset(dataset.id, user=SimpleNamespace(id="user-1")) - - assert result is True - check_permission.assert_called_once_with(dataset, SimpleNamespace(id="user-1")) - send_deleted_signal.assert_called_once_with(dataset) - mock_db.session.delete.assert_called_once_with(dataset) - mock_db.session.commit.assert_called_once() - - def test_dataset_use_check_returns_scalar_result(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.session.execute.return_value.scalar_one.return_value = True - - result = DatasetService.dataset_use_check("dataset-1") - - assert result is True - - def test_check_dataset_permission_rejects_cross_tenant_access(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(tenant_id="tenant-a") - user = DatasetServiceUnitDataFactory.create_user_mock(tenant_id="tenant-b") - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_rejects_only_me_dataset_for_non_creator(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.ONLY_ME, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_rejects_partial_team_user_without_binding(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = None - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_allows_partial_team_creator_without_lookup(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="creator-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="creator-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - DatasetService.check_dataset_permission(dataset, user) - - mock_db.session.scalar.assert_not_called() - - def test_check_dataset_permission_allows_partial_team_member_with_binding(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = object() - - DatasetService.check_dataset_permission(dataset, user) - def test_check_dataset_operator_permission_validates_required_arguments(self): with pytest.raises(ValueError, match="Dataset not found"): DatasetService.check_dataset_operator_permission(user=SimpleNamespace(id="user-1"), dataset=None) @@ -1412,279 +1214,14 @@ class TestDatasetServicePermissionsAndLifecycle: with pytest.raises(ValueError, match="User not found"): DatasetService.check_dataset_operator_permission(user=None, dataset=SimpleNamespace(id="dataset-1")) - def test_check_dataset_operator_permission_rejects_only_me_for_non_creator(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.ONLY_ME, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_check_dataset_operator_permission_rejects_partial_team_without_binding(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.PARTIAL_TEAM) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = [] - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_get_dataset_queries_delegates_to_paginate(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.desc.side_effect = lambda column: column - mock_db.paginate.return_value = SimpleNamespace(items=["query"], total=1) - - items, total = DatasetService.get_dataset_queries("dataset-1", page=1, per_page=20) - - assert items == ["query"] - assert total == 1 - mock_db.paginate.assert_called_once() - - def test_get_related_apps_returns_ordered_query_results(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.desc.side_effect = lambda column: column - mock_db.session.scalars.return_value.all.return_value = ["relation-1"] - - result = DatasetService.get_related_apps("dataset-1") - - assert result == ["relation-1"] - - def test_update_dataset_api_status_raises_not_found_for_missing_dataset(self): - with patch.object(DatasetService, "get_dataset", return_value=None): - with pytest.raises(NotFound, match="Dataset not found"): - DatasetService.update_dataset_api_status("dataset-1", True) - - def test_update_dataset_api_status_requires_current_user_id(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(enable_api=False) - - with ( - patch.object(DatasetService, "get_dataset", return_value=dataset), - patch("services.dataset_service.current_user", SimpleNamespace(id=None)), - ): - with pytest.raises(ValueError, match="Current user or current user id not found"): - DatasetService.update_dataset_api_status(dataset.id, True) - - def test_update_dataset_api_status_updates_fields_and_commits(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(enable_api=False) - now = object() - - with ( - patch.object(DatasetService, "get_dataset", return_value=dataset), - patch("services.dataset_service.current_user", SimpleNamespace(id="user-1")), - patch("services.dataset_service.naive_utc_now", return_value=now), - patch("services.dataset_service.db") as mock_db, - ): - DatasetService.update_dataset_api_status(dataset.id, True) - - assert dataset.enable_api is True - assert dataset.updated_by == "user-1" - assert dataset.updated_at is now - mock_db.session.commit.assert_called_once() - - def test_get_dataset_auto_disable_logs_returns_empty_when_billing_is_disabled(self): - class FakeAccount: - pass - - current_user = FakeAccount() - current_user.current_tenant_id = "tenant-1" - - features = SimpleNamespace( - billing=SimpleNamespace(enabled=False, subscription=SimpleNamespace(plan=CloudPlan.PROFESSIONAL)) - ) - - with ( - patch("services.dataset_service.Account", FakeAccount), - patch("services.dataset_service.current_user", current_user), - patch("services.dataset_service.FeatureService.get_features", return_value=features), - patch("services.dataset_service.db") as mock_db, - ): - result = DatasetService.get_dataset_auto_disable_logs("dataset-1") - - assert result == {"document_ids": [], "count": 0} - mock_db.session.scalars.assert_not_called() - - def test_get_dataset_auto_disable_logs_returns_recent_document_ids(self): - class FakeAccount: - pass - - current_user = FakeAccount() - current_user.current_tenant_id = "tenant-1" - logs = [SimpleNamespace(document_id="doc-1"), SimpleNamespace(document_id="doc-2")] - features = SimpleNamespace( - billing=SimpleNamespace(enabled=True, subscription=SimpleNamespace(plan=CloudPlan.PROFESSIONAL)) - ) - - with ( - patch("services.dataset_service.Account", FakeAccount), - patch("services.dataset_service.current_user", current_user), - patch("services.dataset_service.FeatureService.get_features", return_value=features), - patch("services.dataset_service.db") as mock_db, - ): - mock_db.session.scalars.return_value.all.return_value = logs - - result = DatasetService.get_dataset_auto_disable_logs("dataset-1") - - assert result == {"document_ids": ["doc-1", "doc-2"], "count": 2} - - -class TestDatasetServiceDocumentIndexing: - """Unit tests for pause/recover/retry orchestration without SQL assertions.""" - - @pytest.fixture - def mock_document_service_dependencies(self): - with ( - patch("services.dataset_service.redis_client") as mock_redis, - patch("services.dataset_service.db.session") as mock_db_session, - patch("services.dataset_service.current_user") as mock_current_user, - ): - mock_current_user.id = "user-123" - yield { - "redis_client": mock_redis, - "db_session": mock_db_session, - "current_user": mock_current_user, - } - - def test_pause_document_success(self, mock_document_service_dependencies): - document = DatasetServiceUnitDataFactory.create_document_mock(indexing_status="indexing") - - DocumentService.pause_document(document) - - assert document.is_paused is True - assert document.paused_by == "user-123" - mock_document_service_dependencies["db_session"].add.assert_called_once_with(document) - mock_document_service_dependencies["db_session"].commit.assert_called_once() - mock_document_service_dependencies["redis_client"].setnx.assert_called_once_with( - f"document_{document.id}_is_paused", - "True", - ) - - def test_pause_document_invalid_status_error(self, mock_document_service_dependencies): - document = DatasetServiceUnitDataFactory.create_document_mock(indexing_status="completed") - - with pytest.raises(DocumentIndexingError): - DocumentService.pause_document(document) - - def test_recover_document_success(self, mock_document_service_dependencies): - document = DatasetServiceUnitDataFactory.create_document_mock(indexing_status="indexing", is_paused=True) - - with patch("services.dataset_service.recover_document_indexing_task") as recover_task: - DocumentService.recover_document(document) - - assert document.is_paused is False - assert document.paused_by is None - assert document.paused_at is None - mock_document_service_dependencies["db_session"].add.assert_called_once_with(document) - mock_document_service_dependencies["db_session"].commit.assert_called_once() - mock_document_service_dependencies["redis_client"].delete.assert_called_once_with( - f"document_{document.id}_is_paused" - ) - recover_task.delay.assert_called_once_with(document.dataset_id, document.id) - - def test_retry_document_indexing_success(self, mock_document_service_dependencies): - dataset_id = "dataset-123" - documents = [ - DatasetServiceUnitDataFactory.create_document_mock(document_id="doc-1", indexing_status="error"), - DatasetServiceUnitDataFactory.create_document_mock(document_id="doc-2", indexing_status="error"), - ] - mock_document_service_dependencies["redis_client"].get.return_value = None - - with patch("services.dataset_service.retry_document_indexing_task") as retry_task: - DocumentService.retry_document(dataset_id, documents) - - assert all(document.indexing_status == "waiting" for document in documents) - assert mock_document_service_dependencies["db_session"].add.call_count == 2 - assert mock_document_service_dependencies["db_session"].commit.call_count == 2 - assert mock_document_service_dependencies["redis_client"].setex.call_count == 2 - retry_task.delay.assert_called_once_with(dataset_id, ["doc-1", "doc-2"], "user-123") - class TestDatasetCollectionBindingService: """Unit tests for dataset collection binding lookups and creation.""" - def test_get_dataset_collection_binding_returns_existing_binding(self): - binding = SimpleNamespace(id="binding-1") - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = binding - - result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "model") - - assert result is binding - mock_db.session.add.assert_not_called() - - def test_get_dataset_collection_binding_creates_binding_when_missing(self): - created_binding = SimpleNamespace(id="binding-2") - - with ( - patch("services.dataset_service.db") as mock_db, - patch("services.dataset_service.select"), - patch("services.dataset_service.DatasetCollectionBinding", return_value=created_binding) as binding_cls, - patch.object(Dataset, "gen_collection_name_by_id", return_value="generated-collection"), - ): - mock_db.session.scalar.return_value = None - - result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "model", "dataset") - - assert result is created_binding - binding_cls.assert_called_once_with( - provider_name="provider", - model_name="model", - collection_name="generated-collection", - type="dataset", - ) - mock_db.session.add.assert_called_once_with(created_binding) - mock_db.session.commit.assert_called_once() - - def test_get_dataset_collection_binding_by_id_and_type_raises_when_missing(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = None - - with pytest.raises(ValueError, match="Dataset collection binding not found"): - DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type("binding-1") - - def test_get_dataset_collection_binding_by_id_and_type_returns_binding(self): - binding = SimpleNamespace(id="binding-1") - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = binding - - result = DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type("binding-1") - - assert result is binding - class TestDatasetPermissionService: """Unit tests for dataset partial-member management helpers.""" - def test_get_dataset_partial_member_list_returns_scalar_results(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = ["user-1", "user-2"] - - result = DatasetPermissionService.get_dataset_partial_member_list("dataset-1") - - assert result == ["user-1", "user-2"] - - def test_update_partial_member_list_replaces_permissions_and_commits(self): - with patch("services.dataset_service.db") as mock_db: - DatasetPermissionService.update_partial_member_list( - "tenant-1", - "dataset-1", - [{"user_id": "user-1"}, {"user_id": "user-2"}], - ) - - mock_db.session.execute.assert_called() - mock_db.session.add_all.assert_called_once() - mock_db.session.commit.assert_called_once() - def test_update_partial_member_list_rolls_back_on_exception(self): with patch("services.dataset_service.db") as mock_db: mock_db.session.add_all.side_effect = RuntimeError("boom") @@ -1748,13 +1285,6 @@ class TestDatasetPermissionService: [{"user_id": "user-1"}], ) - def test_clear_partial_member_list_deletes_permissions_and_commits(self): - with patch("services.dataset_service.db") as mock_db: - DatasetPermissionService.clear_partial_member_list("dataset-1") - - mock_db.session.execute.assert_called() - mock_db.session.commit.assert_called_once() - def test_clear_partial_member_list_rolls_back_on_exception(self): with patch("services.dataset_service.db") as mock_db: mock_db.session.execute.side_effect = RuntimeError("boom") diff --git a/api/tests/unit_tests/services/test_dataset_service_document.py b/api/tests/unit_tests/services/test_dataset_service_document.py index e5a2541da7..3f9386e704 100644 --- a/api/tests/unit_tests/services/test_dataset_service_document.py +++ b/api/tests/unit_tests/services/test_dataset_service_document.py @@ -129,7 +129,7 @@ class TestDocumentServiceQueryAndDownloadHelpers: def test_update_documents_need_summary_updates_matching_documents_and_commits(self): session = MagicMock() - session.query.return_value.filter.return_value.update.return_value = 2 + session.execute.return_value.rowcount = 2 with patch("services.dataset_service.session_factory") as session_factory_mock: session_factory_mock.create_session.return_value = _make_session_context(session) @@ -1069,6 +1069,33 @@ class TestDocumentServiceCreateValidation: assert len(knowledge_config.process_rule.rules.pre_processing_rules) == 1 assert knowledge_config.process_rule.rules.pre_processing_rules[0].enabled is False + def test_process_rule_args_validate_hierarchical_defaults_parent_mode_to_paragraph(self): + knowledge_config = KnowledgeConfig( + indexing_technique="economy", + data_source=DataSource( + info_list=InfoList( + data_source_type="upload_file", + file_info_list=FileInfo(file_ids=["file-1"]), + ) + ), + process_rule=ProcessRule( + mode="hierarchical", + rules=Rule( + pre_processing_rules=[ + PreProcessingRule(id="remove_extra_spaces", enabled=True), + ], + segmentation=Segmentation(separator="\n", max_tokens=1024), + subchunk_segmentation=Segmentation(separator="\n", max_tokens=512), + ), + ), + ) + + DocumentService.process_rule_args_validate(knowledge_config) + + assert knowledge_config.process_rule is not None + assert knowledge_config.process_rule.rules is not None + assert knowledge_config.process_rule.rules.parent_mode == "paragraph" + class TestDocumentServiceSaveDocumentWithDatasetId: """Unit tests for non-SQL validation branches in save_document_with_dataset_id.""" diff --git a/api/tests/unit_tests/services/test_datasource_provider_service.py b/api/tests/unit_tests/services/test_datasource_provider_service.py index bc4120e2af..d304e0ec44 100644 --- a/api/tests/unit_tests/services/test_datasource_provider_service.py +++ b/api/tests/unit_tests/services/test_datasource_provider_service.py @@ -2,10 +2,10 @@ from unittest.mock import MagicMock, patch import httpx import pytest -from graphon.model_runtime.entities.provider_entities import FormType from sqlalchemy.orm import Session from core.plugin.entities.plugin_daemon import CredentialType +from graphon.model_runtime.entities.provider_entities import FormType from models.account import Account from models.model import EndUser from models.oauth import DatasourceProvider @@ -40,7 +40,10 @@ class TestDatasourceProviderService: q returns itself for .filter_by(), .order_by(), .where() so any SQLAlchemy chaining pattern works without multiple brittle sub-mocks. """ - with patch("services.datasource_provider_service.Session") as mock_cls: + with ( + patch("services.datasource_provider_service.Session") as mock_cls, + patch("services.datasource_provider_service.sessionmaker") as mock_sm, + ): sess = MagicMock(spec=Session) q = MagicMock() @@ -63,6 +66,8 @@ class TestDatasourceProviderService: mock_cls.return_value.__enter__.return_value = sess mock_cls.return_value.no_autoflush.__enter__.return_value = sess + mock_sm.return_value.begin.return_value.__enter__.return_value = sess + mock_sm.return_value.begin.return_value.__exit__ = MagicMock(return_value=False) yield sess @@ -174,11 +179,11 @@ class TestDatasourceProviderService: # ----------------------------------------------------------------------- def test_should_return_true_when_system_oauth_params_exist(self, service, mock_db_session): - mock_db_session.query().first.return_value = MagicMock() + mock_db_session.scalar.return_value = MagicMock() assert service.is_system_oauth_params_exist(make_id()) is True def test_should_return_false_when_system_oauth_params_missing(self, service, mock_db_session): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None assert service.is_system_oauth_params_exist(make_id()) is False # ----------------------------------------------------------------------- @@ -200,7 +205,7 @@ class TestDatasourceProviderService: def test_should_delete_tenant_config_when_removing_oauth_params(self, service, mock_db_session): service.remove_oauth_custom_client_params("t1", make_id()) - mock_db_session.query().delete.assert_called_once() + mock_db_session.execute.assert_called_once() # ----------------------------------------------------------------------- # setup_oauth_custom_client_params (315-351) @@ -212,14 +217,14 @@ class TestDatasourceProviderService: mock_db_session.add.assert_not_called() def test_should_create_new_config_when_none_exists(self, service, mock_db_session): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None with patch.object(service, "get_oauth_encrypter", return_value=(self._enc, None)): service.setup_oauth_custom_client_params("t1", make_id(), {"k": "v"}, True) mock_db_session.add.assert_called_once() def test_should_update_existing_config_when_record_found(self, service, mock_db_session): existing = MagicMock() - mock_db_session.query().first.return_value = existing + mock_db_session.scalar.return_value = existing with patch.object(service, "get_oauth_encrypter", return_value=(self._enc, None)): service.setup_oauth_custom_client_params("t1", make_id(), {"k": "v"}, False) mock_db_session.add.assert_not_called() # update in place, no add @@ -250,7 +255,7 @@ class TestDatasourceProviderService: def test_should_return_empty_dict_when_credential_not_found(self, service, mock_db_session, mock_user): with patch("services.datasource_provider_service.get_current_user", return_value=mock_user): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None assert service.get_datasource_credentials("t1", "prov", "org/plug") == {} def test_should_refresh_oauth_tokens_when_expired(self, service, mock_db_session, mock_user): @@ -259,14 +264,13 @@ class TestDatasourceProviderService: p.auth_type = "oauth2" p.expires_at = 0 # expired p.encrypted_credentials = {"tok": "x"} - mock_db_session.query().first.return_value = p + mock_db_session.scalar.return_value = p with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service, "get_oauth_client", return_value={"oc": "v"}), patch.object(service, "decrypt_datasource_provider_credentials", return_value={"tok": "plain"}), ): service.get_datasource_credentials("t1", "prov", "org/plug") - mock_db_session.commit.assert_called_once() def test_should_return_decrypted_credentials_when_api_key_not_expired(self, service, mock_db_session, mock_user): """API key credentials with expires_at=-1 skip refresh and return directly.""" @@ -274,7 +278,7 @@ class TestDatasourceProviderService: p.auth_type = "api_key" p.expires_at = -1 # sentinel: never expires p.encrypted_credentials = {"k": "v"} - mock_db_session.query().first.return_value = p + mock_db_session.scalar.return_value = p with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service, "decrypt_datasource_provider_credentials", return_value={"k": "plain"}), @@ -288,7 +292,7 @@ class TestDatasourceProviderService: p.auth_type = "api_key" p.expires_at = -1 p.encrypted_credentials = {} - mock_db_session.query().first.return_value = p + mock_db_session.scalar.return_value = p with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service, "decrypt_datasource_provider_credentials", return_value={"k": "v"}), @@ -302,7 +306,7 @@ class TestDatasourceProviderService: def test_should_return_empty_list_when_no_provider_credentials_exist(self, service, mock_db_session, mock_user): with patch("services.datasource_provider_service.get_current_user", return_value=mock_user): - mock_db_session.query().all.return_value = [] + mock_db_session.scalars.return_value.all.return_value = [] assert service.get_all_datasource_credentials_by_provider("t1", "prov", "org/plug") == [] def test_should_refresh_and_return_credentials_when_oauth_expired(self, service, mock_db_session, mock_user): @@ -310,7 +314,7 @@ class TestDatasourceProviderService: p.auth_type = "oauth2" p.expires_at = 0 p.encrypted_credentials = {"t": "x"} - mock_db_session.query().all.return_value = [p] + mock_db_session.scalars.return_value.all.return_value = [p] with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service, "get_oauth_client", return_value={"oc": "v"}), @@ -324,23 +328,21 @@ class TestDatasourceProviderService: # ----------------------------------------------------------------------- def test_should_raise_value_error_when_provider_not_found_on_name_update(self, service, mock_db_session): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None with pytest.raises(ValueError, match="not found"): service.update_datasource_provider_name("t1", make_id(), "new", "cred-id") def test_should_return_early_when_new_name_matches_current(self, service, mock_db_session): p = MagicMock(spec=DatasourceProvider) p.name = "same" - mock_db_session.query().first.return_value = p + mock_db_session.scalar.return_value = p service.update_datasource_provider_name("t1", make_id(), "same", "cred-id") - mock_db_session.commit.assert_not_called() def test_should_raise_value_error_when_name_already_exists(self, service, mock_db_session): p = MagicMock(spec=DatasourceProvider) p.name = "old_name" p.is_default = False - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 1 # conflict + mock_db_session.scalar.side_effect = [p, 1] # first: fetch provider, second: name conflict count with pytest.raises(ValueError, match="already exists"): service.update_datasource_provider_name("t1", make_id(), "new_name", "some-id") @@ -348,18 +350,16 @@ class TestDatasourceProviderService: p = MagicMock(spec=DatasourceProvider) p.name = "old_name" p.is_default = False - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.side_effect = [p, 0] # first: fetch provider, second: name conflict count service.update_datasource_provider_name("t1", make_id(), "new_name", "some-id") assert p.name == "new_name" - mock_db_session.commit.assert_called_once() # ----------------------------------------------------------------------- # set_default_datasource_provider (lines 277-303) # ----------------------------------------------------------------------- def test_should_raise_value_error_when_target_provider_not_found(self, service, mock_db_session): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None with pytest.raises(ValueError, match="not found"): service.set_default_datasource_provider("t1", make_id(), "bad-id") @@ -367,10 +367,9 @@ class TestDatasourceProviderService: target = MagicMock(spec=DatasourceProvider) target.provider = "provider" target.plugin_id = "org/plug" - mock_db_session.query().first.return_value = target + mock_db_session.scalar.return_value = target service.set_default_datasource_provider("t1", make_id(), "new-id") assert target.is_default is True - mock_db_session.commit.assert_called_once() # ----------------------------------------------------------------------- # get_oauth_encrypter (lines 404-420) @@ -427,13 +426,13 @@ class TestDatasourceProviderService: # ----------------------------------------------------------------------- def test_should_use_tenant_config_when_available(self, service, mock_db_session): - mock_db_session.query().first.return_value = MagicMock(client_params={"k": "v"}) + mock_db_session.scalar.return_value = MagicMock(client_params={"k": "v"}) with patch.object(service, "get_oauth_encrypter", return_value=(self._enc, None)): result = service.get_oauth_client("t1", make_id()) assert result == {"k": "dec"} def test_should_fallback_to_system_credentials_when_tenant_config_missing(self, service, mock_db_session): - mock_db_session.query().first.side_effect = [None, MagicMock(system_credentials={"k": "sys"})] + mock_db_session.scalar.side_effect = [None, MagicMock(system_credentials={"k": "sys"})] with ( patch.object(service.provider_manager, "fetch_datasource_provider"), patch("services.datasource_provider_service.PluginService.is_plugin_verified", return_value=True), @@ -443,7 +442,7 @@ class TestDatasourceProviderService: def test_should_raise_value_error_when_no_oauth_config_available(self, service, mock_db_session): """Neither tenant nor system credentials → raises ValueError.""" - mock_db_session.query().first.side_effect = [None, None] + mock_db_session.scalar.side_effect = [None, None] with ( patch.object(service.provider_manager, "fetch_datasource_provider"), patch("services.datasource_provider_service.PluginService.is_plugin_verified", return_value=False), @@ -456,16 +455,14 @@ class TestDatasourceProviderService: # ----------------------------------------------------------------------- def test_should_add_oauth_provider_successfully_when_name_is_unique(self, service, mock_db_session): - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.return_value = 0 with patch.object(service, "extract_secret_variables", return_value=[]): service.add_datasource_oauth_provider("new", "t1", make_id(), "http://cb", 9999, {}) mock_db_session.add.assert_called_once() - mock_db_session.commit.assert_called_once() def test_should_auto_rename_when_oauth_provider_name_conflicts(self, service, mock_db_session): """Conflict on name results in auto-incremented name, not an error.""" - mock_db_session.query().count.return_value = 1 # conflict first, then auto-named - mock_db_session.query().all.return_value = [] + mock_db_session.scalar.return_value = 1 # conflict first, then auto-named with ( patch.object(service, "extract_secret_variables", return_value=[]), patch.object(service, "generate_next_datasource_provider_name", return_value="new_gen"), @@ -475,8 +472,7 @@ class TestDatasourceProviderService: def test_should_auto_generate_name_when_none_provided_for_oauth(self, service, mock_db_session): """name=None causes auto-generation via generate_next_datasource_provider_name.""" - mock_db_session.query().count.return_value = 0 - mock_db_session.query().all.return_value = [] + mock_db_session.scalar.return_value = 0 with ( patch.object(service, "extract_secret_variables", return_value=[]), patch.object(service, "generate_next_datasource_provider_name", return_value="auto"), @@ -485,13 +481,13 @@ class TestDatasourceProviderService: mock_db_session.add.assert_called_once() def test_should_encrypt_secret_fields_when_adding_oauth_provider(self, service, mock_db_session): - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.return_value = 0 with patch.object(service, "extract_secret_variables", return_value=["secret_key"]): service.add_datasource_oauth_provider("nm", "t1", make_id(), "http://cb", 9999, {"secret_key": "value"}) self._enc.encrypt_token.assert_called() def test_should_acquire_redis_lock_when_adding_oauth_provider(self, service, mock_db_session): - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.return_value = 0 with patch.object(service, "extract_secret_variables", return_value=[]): service.add_datasource_oauth_provider("nm", "t1", make_id(), "http://cb", 9999, {}) self._redis.lock.assert_called() @@ -501,42 +497,36 @@ class TestDatasourceProviderService: # ----------------------------------------------------------------------- def test_should_raise_value_error_when_credential_id_not_found_on_reauth(self, service, mock_db_session): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None with patch.object(service, "extract_secret_variables", return_value=[]): with pytest.raises(ValueError, match="not found"): service.reauthorize_datasource_oauth_provider("n", "t1", make_id(), "u", 1, {}, "bad-id") def test_should_reauthorize_and_commit_when_credential_found(self, service, mock_db_session): p = MagicMock(spec=DatasourceProvider) - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.side_effect = [p, 0] # first: fetch provider, second: name conflict count with patch.object(service, "extract_secret_variables", return_value=[]): service.reauthorize_datasource_oauth_provider("n", "t1", make_id(), "u", 1, {}, "oid") - mock_db_session.commit.assert_called_once() def test_should_auto_rename_when_reauth_name_conflicts(self, service, mock_db_session): p = MagicMock(spec=DatasourceProvider) - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 1 # conflict - mock_db_session.query().all.return_value = [] + mock_db_session.scalar.side_effect = [p, 1] # first: fetch provider, second: name conflict count + mock_db_session.scalars.return_value.all.return_value = [] with patch.object(service, "extract_secret_variables", return_value=["tok"]): service.reauthorize_datasource_oauth_provider( "conflict_name", "t1", make_id(), "u", 9999, {"tok": "v"}, "cred-id" ) - mock_db_session.commit.assert_called_once() def test_should_encrypt_secret_fields_when_reauthorizing(self, service, mock_db_session): p = MagicMock(spec=DatasourceProvider) - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.side_effect = [p, 0] # first: fetch provider, second: name conflict count with patch.object(service, "extract_secret_variables", return_value=["tok"]): service.reauthorize_datasource_oauth_provider(None, "t1", make_id(), "u", 9999, {"tok": "val"}, "cred-id") self._enc.encrypt_token.assert_called() def test_should_acquire_redis_lock_when_reauthorizing(self, service, mock_db_session): p = MagicMock(spec=DatasourceProvider) - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.side_effect = [p, 0] # first: fetch provider, second: name conflict count with patch.object(service, "extract_secret_variables", return_value=[]): service.reauthorize_datasource_oauth_provider("n", "t1", make_id(), "u", 1, {}, "oid") self._redis.lock.assert_called() @@ -547,13 +537,13 @@ class TestDatasourceProviderService: def test_should_raise_value_error_when_api_key_name_already_exists(self, service, mock_db_session, mock_user): """explicit name supplied + conflict → raises ValueError immediately.""" - mock_db_session.query().count.return_value = 1 + mock_db_session.scalar.return_value = 1 with patch("services.datasource_provider_service.get_current_user", return_value=mock_user): with pytest.raises(ValueError, match="already exists"): service.add_datasource_api_key_provider("clash", "t1", make_id(), {"sk": "v"}) def test_should_raise_value_error_when_credentials_validation_fails(self, service, mock_db_session, mock_user): - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.return_value = 0 with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service.provider_manager, "validate_provider_credentials", side_effect=Exception("bad cred")), @@ -563,7 +553,7 @@ class TestDatasourceProviderService: service.add_datasource_api_key_provider("nm", "t1", make_id(), {"k": "v"}) def test_should_add_api_key_provider_and_commit_when_valid(self, service, mock_db_session, mock_user): - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.return_value = 0 with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service.provider_manager, "validate_provider_credentials"), @@ -571,10 +561,9 @@ class TestDatasourceProviderService: ): service.add_datasource_api_key_provider(None, "t1", make_id(), {"sk": "v"}) mock_db_session.add.assert_called_once() - mock_db_session.commit.assert_called_once() def test_should_acquire_redis_lock_when_adding_api_key_provider(self, service, mock_db_session, mock_user): - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.return_value = 0 with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service.provider_manager, "validate_provider_credentials"), @@ -697,7 +686,7 @@ class TestDatasourceProviderService: # ----------------------------------------------------------------------- def test_should_raise_value_error_when_credential_not_found_on_update(self, service, mock_db_session, mock_user): - mock_db_session.query().first.return_value = None + mock_db_session.scalar.return_value = None with patch("services.datasource_provider_service.get_current_user", return_value=mock_user): with pytest.raises(ValueError, match="not found"): service.update_datasource_credentials("t1", "id", "prov", "org/plug", {}, "name") @@ -707,8 +696,7 @@ class TestDatasourceProviderService: p.name = "old_name" p.auth_type = "api_key" p.encrypted_credentials = {"sk": "e"} - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 1 + mock_db_session.scalar.side_effect = [p, 1] # first: fetch provider, second: name conflict count with patch("services.datasource_provider_service.get_current_user", return_value=mock_user): with pytest.raises(ValueError, match="already exists"): service.update_datasource_credentials("t1", "id", "prov", "org/plug", {}, "new_name") @@ -720,8 +708,7 @@ class TestDatasourceProviderService: p.name = "old_name" p.auth_type = "api_key" p.encrypted_credentials = {"sk": "e"} - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.side_effect = [p, 0] # first: fetch provider, second: name conflict count with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service, "extract_secret_variables", return_value=["sk"]), @@ -736,8 +723,7 @@ class TestDatasourceProviderService: p.name = "old_name" p.auth_type = "api_key" p.encrypted_credentials = {"sk": "old_enc"} - mock_db_session.query().first.return_value = p - mock_db_session.query().count.return_value = 0 + mock_db_session.scalar.side_effect = [p, 0] # first: fetch provider, second: name conflict count with ( patch("services.datasource_provider_service.get_current_user", return_value=mock_user), patch.object(service, "extract_secret_variables", return_value=["sk"]), @@ -747,7 +733,6 @@ class TestDatasourceProviderService: # encrypter must have been called with the new secret value self._enc.encrypt_token.assert_called() # commit must be called exactly once - mock_db_session.commit.assert_called_once() # ----------------------------------------------------------------------- # remove_datasource_credentials (lines 980-997) @@ -758,7 +743,6 @@ class TestDatasourceProviderService: mock_db_session.scalar.return_value = p service.remove_datasource_credentials("t1", "id", "prov", "org/plug") mock_db_session.delete.assert_called_once_with(p) - mock_db_session.commit.assert_called_once() def test_should_do_nothing_when_credential_not_found_on_remove(self, service, mock_db_session): """No error raised; no delete called when record doesn't exist (lines 994 branch).""" diff --git a/api/tests/unit_tests/services/test_external_dataset_service.py b/api/tests/unit_tests/services/test_external_dataset_service.py index 7c8dab5029..fdea0ba869 100644 --- a/api/tests/unit_tests/services/test_external_dataset_service.py +++ b/api/tests/unit_tests/services/test_external_dataset_service.py @@ -8,6 +8,7 @@ Target: 1500+ lines of comprehensive test coverage. import json import re from datetime import datetime +from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest @@ -31,7 +32,7 @@ class ExternalDatasetServiceTestDataFactory: api_id: str = "api-123", tenant_id: str = "tenant-123", name: str = "Test API", - settings: dict | None = None, + settings: dict[str, Any] | None = None, **kwargs, ) -> Mock: """Create a mock ExternalKnowledgeApis object.""" @@ -120,8 +121,8 @@ class ExternalDatasetServiceTestDataFactory: def create_api_setting_mock( url: str = "https://api.example.com/retrieval", request_method: str = "post", - headers: dict | None = None, - params: dict | None = None, + headers: dict[str, Any] | None = None, + params: dict[str, Any] | None = None, ) -> ExternalKnowledgeApiSetting: """Create an ExternalKnowledgeApiSetting object.""" if headers is None: @@ -974,26 +975,29 @@ class TestExternalDatasetServiceAPIUseCheck: """Test API use check when API has one binding.""" # Arrange api_id = "api-123" + tenant_id = "tenant-123" mock_db.session.scalar.return_value = 1 # Act - in_use, count = ExternalDatasetService.external_knowledge_api_use_check(api_id) + in_use, count = ExternalDatasetService.external_knowledge_api_use_check(api_id, tenant_id) # Assert assert in_use is True assert count == 1 + assert "tenant_id" in str(mock_db.session.scalar.call_args.args[0]) @patch("services.external_knowledge_service.db") def test_external_knowledge_api_use_check_in_use_multiple(self, mock_db, factory): """Test API use check with multiple bindings.""" # Arrange api_id = "api-123" + tenant_id = "tenant-123" mock_db.session.scalar.return_value = 10 # Act - in_use, count = ExternalDatasetService.external_knowledge_api_use_check(api_id) + in_use, count = ExternalDatasetService.external_knowledge_api_use_check(api_id, tenant_id) # Assert assert in_use is True @@ -1004,11 +1008,12 @@ class TestExternalDatasetServiceAPIUseCheck: """Test API use check when API is not in use.""" # Arrange api_id = "api-123" + tenant_id = "tenant-123" mock_db.session.scalar.return_value = 0 # Act - in_use, count = ExternalDatasetService.external_knowledge_api_use_check(api_id) + in_use, count = ExternalDatasetService.external_knowledge_api_use_check(api_id, tenant_id) # Assert assert in_use is False @@ -1556,6 +1561,17 @@ class TestExternalDatasetServiceFetchRetrieval: with pytest.raises(ValueError, match="external knowledge binding not found"): ExternalDatasetService.fetch_external_knowledge_retrieval("tenant-123", "dataset-123", "query", {}) + @patch("services.external_knowledge_service.db") + def test_fetch_external_knowledge_retrieval_cross_tenant_api_template_error(self, mock_db, factory): + """Test error when a binding points to an API template outside the dataset tenant.""" + # Arrange + binding = factory.create_external_knowledge_binding_mock() + mock_db.session.scalar.side_effect = [binding, None] + + # Act & Assert + with pytest.raises(ValueError, match="external api template not found"): + ExternalDatasetService.fetch_external_knowledge_retrieval("tenant-123", "dataset-123", "query", {}) + @patch("services.external_knowledge_service.ExternalDatasetService.process_external_api") @patch("services.external_knowledge_service.db") def test_fetch_external_knowledge_retrieval_empty_results(self, mock_db, mock_process, factory): @@ -1687,7 +1703,7 @@ class TestExternalDatasetServiceFetchRetrieval: mock_process.return_value = mock_response # Act & Assert - with pytest.raises(Exception, match=""): + with pytest.raises(ValueError): ExternalDatasetService.fetch_external_knowledge_retrieval( "tenant-123", "dataset-123", "query", {"top_k": 5} ) diff --git a/api/tests/unit_tests/services/test_file_service.py b/api/tests/unit_tests/services/test_file_service.py index b7259c3e82..8e1b22886b 100644 --- a/api/tests/unit_tests/services/test_file_service.py +++ b/api/tests/unit_tests/services/test_file_service.py @@ -165,7 +165,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.key = "test_key" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with patch("services.file_service.storage") as mock_storage: mock_storage.load_once.return_value = b"test content" @@ -178,7 +178,7 @@ class TestFileService: mock_storage.load_once.assert_called_once_with("test_key") def test_get_file_base64_not_found(self, file_service, mock_db_session): - mock_db_session.query().where().first.return_value = None + mock_db_session.scalar.return_value = None with pytest.raises(NotFound, match="File not found"): file_service.get_file_base64("non_existent") @@ -215,7 +215,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.extension = "pdf" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with patch("services.file_service.ExtractProcessor.load_from_upload_file") as mock_extract: mock_extract.return_value = "Extracted text content" @@ -227,7 +227,7 @@ class TestFileService: assert result == "Extracted text content" def test_get_file_preview_not_found(self, file_service, mock_db_session): - mock_db_session.query().where().first.return_value = None + mock_db_session.scalar.return_value = None with pytest.raises(NotFound, match="File not found"): file_service.get_file_preview("non_existent") @@ -235,7 +235,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.extension = "exe" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with pytest.raises(UnsupportedFileTypeError): file_service.get_file_preview("file_id") @@ -246,7 +246,7 @@ class TestFileService: upload_file.extension = "jpg" upload_file.mime_type = "image/jpeg" upload_file.key = "key" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with ( patch("services.file_service.file_helpers.verify_image_signature") as mock_verify, @@ -269,7 +269,7 @@ class TestFileService: file_service.get_image_preview("file_id", "ts", "nonce", "sign") def test_get_image_preview_not_found(self, file_service, mock_db_session): - mock_db_session.query().where().first.return_value = None + mock_db_session.scalar.return_value = None with patch("services.file_service.file_helpers.verify_image_signature") as mock_verify: mock_verify.return_value = True with pytest.raises(NotFound, match="File not found or signature is invalid"): @@ -279,7 +279,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.extension = "txt" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with patch("services.file_service.file_helpers.verify_image_signature") as mock_verify: mock_verify.return_value = True with pytest.raises(UnsupportedFileTypeError): @@ -289,7 +289,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.key = "key" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with ( patch("services.file_service.file_helpers.verify_file_signature") as mock_verify, @@ -309,7 +309,7 @@ class TestFileService: file_service.get_file_generator_by_file_id("file_id", "ts", "nonce", "sign") def test_get_file_generator_by_file_id_not_found(self, file_service, mock_db_session): - mock_db_session.query().where().first.return_value = None + mock_db_session.scalar.return_value = None with patch("services.file_service.file_helpers.verify_file_signature") as mock_verify: mock_verify.return_value = True with pytest.raises(NotFound, match="File not found or signature is invalid"): @@ -321,7 +321,7 @@ class TestFileService: upload_file.extension = "png" upload_file.mime_type = "image/png" upload_file.key = "key" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with patch("services.file_service.storage") as mock_storage: mock_storage.load.return_value = b"image content" @@ -330,7 +330,7 @@ class TestFileService: assert mime == "image/png" def test_get_public_image_preview_not_found(self, file_service, mock_db_session): - mock_db_session.query().where().first.return_value = None + mock_db_session.scalar.return_value = None with pytest.raises(NotFound, match="File not found or signature is invalid"): file_service.get_public_image_preview("file_id") @@ -338,7 +338,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.extension = "txt" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with pytest.raises(UnsupportedFileTypeError): file_service.get_public_image_preview("file_id") @@ -346,7 +346,7 @@ class TestFileService: upload_file = MagicMock(spec=UploadFile) upload_file.id = "file_id" upload_file.key = "key" - mock_db_session.query().where().first.return_value = upload_file + mock_db_session.scalar.return_value = upload_file with patch("services.file_service.storage") as mock_storage: mock_storage.load.return_value = b"hello world" @@ -354,7 +354,7 @@ class TestFileService: assert result == "hello world" def test_get_file_content_not_found(self, file_service, mock_db_session): - mock_db_session.query().where().first.return_value = None + mock_db_session.scalar.return_value = None with pytest.raises(NotFound, match="File not found"): file_service.get_file_content("file_id") diff --git a/api/tests/unit_tests/services/test_human_input_service.py b/api/tests/unit_tests/services/test_human_input_service.py index 9be475d043..55af564821 100644 --- a/api/tests/unit_tests/services/test_human_input_service.py +++ b/api/tests/unit_tests/services/test_human_input_service.py @@ -3,18 +3,18 @@ from datetime import datetime, timedelta from unittest.mock import MagicMock import pytest -from graphon.nodes.human_input.entities import ( - FormDefinition, - FormInput, - UserAction, -) -from graphon.nodes.human_input.enums import FormInputType, HumanInputFormKind, HumanInputFormStatus import services.human_input_service as human_input_service_module from core.repositories.human_input_repository import ( HumanInputFormRecord, HumanInputFormSubmissionRepository, ) +from graphon.nodes.human_input.entities import ( + FormDefinition, + FormInput, + UserAction, +) +from graphon.nodes.human_input.enums import FormInputType, HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.human_input import RecipientType from services.human_input_service import ( diff --git a/api/tests/unit_tests/services/test_message_service.py b/api/tests/unit_tests/services/test_message_service.py index b6e990ebe0..969132cfd8 100644 --- a/api/tests/unit_tests/services/test_message_service.py +++ b/api/tests/unit_tests/services/test_message_service.py @@ -131,9 +131,12 @@ class TestMessageServicePaginationByFirstId: assert result.has_more is False # Test 03: Basic pagination without first_id (desc order) + @patch("services.message_service._create_execution_extra_content_repository") @patch("services.message_service.db") @patch("services.message_service.ConversationService") - def test_pagination_by_first_id_without_first_id_desc(self, mock_conversation_service, mock_db, factory): + def test_pagination_by_first_id_without_first_id_desc( + self, mock_conversation_service, mock_db, mock_create_repo, factory + ): """Test basic pagination without first_id in descending order.""" # Arrange app = factory.create_app_mock() @@ -171,9 +174,12 @@ class TestMessageServicePaginationByFirstId: assert result.data[0].id == "msg-000" # Test 04: Basic pagination without first_id (asc order) + @patch("services.message_service._create_execution_extra_content_repository") @patch("services.message_service.db") @patch("services.message_service.ConversationService") - def test_pagination_by_first_id_without_first_id_asc(self, mock_conversation_service, mock_db, factory): + def test_pagination_by_first_id_without_first_id_asc( + self, mock_conversation_service, mock_db, mock_create_repo, factory + ): """Test basic pagination without first_id in ascending order.""" # Arrange app = factory.create_app_mock() @@ -211,9 +217,10 @@ class TestMessageServicePaginationByFirstId: assert result.data[4].id == "msg-000" # Test 05: Pagination with first_id + @patch("services.message_service._create_execution_extra_content_repository") @patch("services.message_service.db") @patch("services.message_service.ConversationService") - def test_pagination_by_first_id_with_first_id(self, mock_conversation_service, mock_db, factory): + def test_pagination_by_first_id_with_first_id(self, mock_conversation_service, mock_db, mock_create_repo, factory): """Test pagination with first_id to get messages before a specific message.""" # Arrange app = factory.create_app_mock() @@ -278,9 +285,10 @@ class TestMessageServicePaginationByFirstId: ) # Test 07: Has_more flag when results exceed limit + @patch("services.message_service._create_execution_extra_content_repository") @patch("services.message_service.db") @patch("services.message_service.ConversationService") - def test_pagination_by_first_id_has_more_true(self, mock_conversation_service, mock_db, factory): + def test_pagination_by_first_id_has_more_true(self, mock_conversation_service, mock_db, mock_create_repo, factory): """Test has_more flag is True when results exceed limit.""" # Arrange app = factory.create_app_mock() diff --git a/api/tests/unit_tests/services/test_messages_clean_service.py b/api/tests/unit_tests/services/test_messages_clean_service.py index f3efc4463e..5fcad615c8 100644 --- a/api/tests/unit_tests/services/test_messages_clean_service.py +++ b/api/tests/unit_tests/services/test_messages_clean_service.py @@ -1,4 +1,5 @@ import datetime +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -18,7 +19,7 @@ def make_simple_message(msg_id: str, app_id: str) -> SimpleMessage: return SimpleMessage(id=msg_id, app_id=app_id, created_at=datetime.datetime(2024, 1, 1)) -def make_plan_provider(tenant_plans: dict) -> MagicMock: +def make_plan_provider(tenant_plans: dict[str, Any]) -> MagicMock: """Helper to create a mock plan_provider that returns the given tenant_plans.""" provider = MagicMock() provider.return_value = tenant_plans diff --git a/api/tests/unit_tests/services/test_model_load_balancing_service.py b/api/tests/unit_tests/services/test_model_load_balancing_service.py index bea288fb9b..3119af40a2 100644 --- a/api/tests/unit_tests/services/test_model_load_balancing_service.py +++ b/api/tests/unit_tests/services/test_model_load_balancing_service.py @@ -6,6 +6,9 @@ from typing import Any, cast from unittest.mock import MagicMock import pytest +from pytest_mock import MockerFixture + +from constants import HIDDEN_VALUE from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import ModelType from graphon.model_runtime.entities.provider_entities import ( @@ -15,9 +18,6 @@ from graphon.model_runtime.entities.provider_entities import ( ModelCredentialSchema, ProviderCredentialSchema, ) -from pytest_mock import MockerFixture - -from constants import HIDDEN_VALUE from models.provider import LoadBalancingModelConfig from services.model_load_balancing_service import ModelLoadBalancingService diff --git a/api/tests/unit_tests/services/test_model_provider_service.py b/api/tests/unit_tests/services/test_model_provider_service.py new file mode 100644 index 0000000000..28d459eac9 --- /dev/null +++ b/api/tests/unit_tests/services/test_model_provider_service.py @@ -0,0 +1,602 @@ +from types import SimpleNamespace +from typing import Any +from unittest.mock import MagicMock + +import pytest + +from core.entities.model_entities import ModelStatus +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType, ParameterRule, ParameterType +from models.provider import ProviderType +from services import model_provider_service as service_module +from services.errors.app_model_config import ProviderNotFoundError +from services.model_provider_service import ModelProviderService + + +def _create_service_with_mocked_manager() -> tuple[ModelProviderService, MagicMock]: + manager = MagicMock() + service = ModelProviderService() + service._get_provider_manager = MagicMock(return_value=manager) + return service, manager + + +def _build_provider_configuration( + *, + provider_name: str = "openai", + supported_model_types: list[ModelType] | None = None, + custom_models: list[Any] | None = None, + custom_config_available: bool = True, +) -> SimpleNamespace: + if supported_model_types is None: + supported_model_types = [ModelType.LLM] + + return SimpleNamespace( + provider=SimpleNamespace( + provider=provider_name, + label=I18nObject(en_US=provider_name), + description=None, + icon_small=None, + icon_small_dark=None, + background=None, + help=None, + supported_model_types=supported_model_types, + configurate_methods=[], + provider_credential_schema=None, + model_credential_schema=None, + ), + preferred_provider_type=ProviderType.CUSTOM, + custom_configuration=SimpleNamespace( + provider=SimpleNamespace( + current_credential_id="cred-1", + current_credential_name="Credential 1", + available_credentials=[], + ), + models=custom_models, + can_added_models=[], + ), + system_configuration=SimpleNamespace(enabled=False, current_quota_type=None, quota_configurations=[]), + is_custom_configuration_available=lambda: custom_config_available, + ) + + +class TestModelProviderServiceConfiguration: + def test__get_provider_configuration_should_return_configuration_when_provider_exists(self) -> None: + service, manager = _create_service_with_mocked_manager() + provider_configuration = SimpleNamespace(name="provider-config") + manager.get_configurations.return_value = {"openai": provider_configuration} + + result = service._get_provider_configuration(tenant_id="tenant-1", provider="openai") + + assert result is provider_configuration + + def test__get_provider_configuration_should_raise_error_when_provider_is_missing(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_configurations.return_value = {} + + with pytest.raises(ProviderNotFoundError, match="does not exist"): + service._get_provider_configuration(tenant_id="tenant-1", provider="missing") + + def test_get_provider_list_should_filter_by_model_type_and_build_no_configure_status(self) -> None: + service, manager = _create_service_with_mocked_manager() + allowed = _build_provider_configuration( + provider_name="openai", + supported_model_types=[ModelType.LLM], + custom_config_available=False, + ) + filtered = _build_provider_configuration( + provider_name="embedding", + supported_model_types=[ModelType.TEXT_EMBEDDING], + custom_config_available=True, + ) + manager.get_configurations.return_value = {"openai": allowed, "embedding": filtered} + + result = service.get_provider_list(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert len(result) == 1 + assert result[0].provider == "openai" + assert result[0].custom_configuration.status.value == "no-configure" + + def test_get_models_by_provider_should_wrap_model_entities_with_tenant_context(self) -> None: + service, manager = _create_service_with_mocked_manager() + + class _Model: + def __init__(self, model_name: str) -> None: + self.model_name = model_name + + def model_dump(self) -> dict[str, Any]: + return { + "model": self.model_name, + "label": {"en_US": self.model_name}, + "model_type": ModelType.LLM, + "features": [], + "fetch_from": FetchFrom.PREDEFINED_MODEL, + "model_properties": {}, + "deprecated": False, + "status": ModelStatus.ACTIVE, + "load_balancing_enabled": False, + "has_invalid_load_balancing_configs": False, + "provider": { + "provider": "openai", + "label": {"en_US": "OpenAI"}, + "icon_small": None, + "icon_small_dark": None, + "supported_model_types": [ModelType.LLM], + }, + } + + provider_configurations = SimpleNamespace( + get_models=MagicMock(return_value=[_Model("gpt-4o"), _Model("gpt-4o-mini")]) + ) + manager.get_configurations.return_value = provider_configurations + + result = service.get_models_by_provider(tenant_id="tenant-1", provider="openai") + + assert len(result) == 2 + assert result[0].model == "gpt-4o" + assert result[1].provider.provider == "openai" + provider_configurations.get_models.assert_called_once_with(provider="openai") + + +class TestModelProviderServiceDelegation: + @pytest.mark.parametrize( + ("method_name", "method_kwargs", "provider_method_name", "provider_call_kwargs", "provider_return"), + [ + ( + "get_provider_credential", + {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, + "get_provider_credential", + {"credential_id": "cred-1"}, + {"token": "abc"}, + ), + ( + "validate_provider_credentials", + {"tenant_id": "tenant-1", "provider": "openai", "credentials": {"token": "abc"}}, + "validate_provider_credentials", + ({"token": "abc"},), + None, + ), + ( + "create_provider_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "credentials": {"token": "abc"}, + "credential_name": "A", + }, + "create_provider_credential", + ({"token": "abc"}, "A"), + None, + ), + ( + "update_provider_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "credentials": {"token": "abc"}, + "credential_id": "cred-1", + "credential_name": "B", + }, + "update_provider_credential", + {"credential_id": "cred-1", "credentials": {"token": "abc"}, "credential_name": "B"}, + None, + ), + ( + "remove_provider_credential", + {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, + "delete_provider_credential", + {"credential_id": "cred-1"}, + None, + ), + ( + "switch_active_provider_credential", + {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, + "switch_active_provider_credential", + {"credential_id": "cred-1"}, + None, + ), + ], + ) + def test_provider_credential_methods_should_delegate_to_provider_configuration( + self, + method_name: str, + method_kwargs: dict[str, Any], + provider_method_name: str, + provider_call_kwargs: Any, + provider_return: Any, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + getattr(provider_configuration, provider_method_name).return_value = provider_return + get_provider_config_mock = MagicMock(return_value=provider_configuration) + monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) + + result = getattr(service, method_name)(**method_kwargs) + + get_provider_config_mock.assert_called_once_with("tenant-1", "openai") + provider_method = getattr(provider_configuration, provider_method_name) + if isinstance(provider_call_kwargs, tuple): + provider_method.assert_called_once_with(*provider_call_kwargs) + elif isinstance(provider_call_kwargs, dict): + provider_method.assert_called_once_with(**provider_call_kwargs) + else: + provider_method.assert_called_once_with(provider_call_kwargs) + if method_name == "get_provider_credential": + assert result == {"token": "abc"} + + @pytest.mark.parametrize( + ("method_name", "method_kwargs", "provider_method_name", "expected_kwargs", "provider_return"), + [ + ( + "get_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "get_custom_model_credential", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + {"api_key": "x"}, + ), + ( + "validate_model_credentials", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + }, + "validate_custom_model_credentials", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credentials": {"api_key": "x"}}, + None, + ), + ( + "create_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_name": "cred-a", + }, + "create_custom_model_credential", + { + "model_type": ModelType.LLM, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_name": "cred-a", + }, + None, + ), + ( + "update_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_id": "cred-1", + "credential_name": "cred-b", + }, + "update_custom_model_credential", + { + "model_type": ModelType.LLM, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_id": "cred-1", + "credential_name": "cred-b", + }, + None, + ), + ( + "remove_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "delete_custom_model_credential", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + None, + ), + ( + "switch_active_custom_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "switch_custom_model_credential", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + None, + ), + ( + "add_model_credential_to_model_list", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "add_model_credential_to_model", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + None, + ), + ( + "remove_model", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + }, + "delete_custom_model", + {"model_type": ModelType.LLM, "model": "gpt-4o"}, + None, + ), + ], + ) + def test_custom_model_methods_should_convert_model_type_and_delegate( + self, + method_name: str, + method_kwargs: dict[str, Any], + provider_method_name: str, + expected_kwargs: dict[str, Any], + provider_return: Any, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + getattr(provider_configuration, provider_method_name).return_value = provider_return + get_provider_config_mock = MagicMock(return_value=provider_configuration) + monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) + + result = getattr(service, method_name)(**method_kwargs) + + get_provider_config_mock.assert_called_once_with("tenant-1", "openai") + getattr(provider_configuration, provider_method_name).assert_called_once_with(**expected_kwargs) + if method_name == "get_model_credential": + assert result == {"api_key": "x"} + + +class TestModelProviderServiceListingsAndDefaults: + def test_get_models_by_model_type_should_group_active_non_deprecated_models(self) -> None: + service, manager = _create_service_with_mocked_manager() + openai_provider = SimpleNamespace( + provider="openai", + label=I18nObject(en_US="OpenAI"), + icon_small=None, + icon_small_dark=None, + ) + anthropic_provider = SimpleNamespace( + provider="anthropic", + label=I18nObject(en_US="Anthropic"), + icon_small=None, + icon_small_dark=None, + ) + models = [ + SimpleNamespace( + provider=openai_provider, + model="gpt-4o", + label=I18nObject(en_US="GPT-4o"), + model_type=ModelType.LLM, + features=[], + fetch_from=FetchFrom.PREDEFINED_MODEL, + model_properties={}, + status=ModelStatus.ACTIVE, + load_balancing_enabled=False, + deprecated=False, + ), + SimpleNamespace( + provider=openai_provider, + model="old-openai", + label=I18nObject(en_US="Old OpenAI"), + model_type=ModelType.LLM, + features=[], + fetch_from=FetchFrom.PREDEFINED_MODEL, + model_properties={}, + status=ModelStatus.ACTIVE, + load_balancing_enabled=False, + deprecated=True, + ), + SimpleNamespace( + provider=anthropic_provider, + model="old-anthropic", + label=I18nObject(en_US="Old Anthropic"), + model_type=ModelType.LLM, + features=[], + fetch_from=FetchFrom.PREDEFINED_MODEL, + model_properties={}, + status=ModelStatus.ACTIVE, + load_balancing_enabled=False, + deprecated=True, + ), + ] + provider_configurations = SimpleNamespace(get_models=MagicMock(return_value=models)) + manager.get_configurations.return_value = provider_configurations + + result = service.get_models_by_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + provider_configurations.get_models.assert_called_once_with(model_type=ModelType.LLM, only_active=True) + assert len(result) == 1 + assert result[0].provider == "openai" + assert len(result[0].models) == 1 + assert result[0].models[0].model == "gpt-4o" + + @pytest.mark.parametrize( + ("credentials", "schema", "expected_count"), + [ + (None, None, 0), + ({"api_key": "x"}, None, 0), + ( + {"api_key": "x"}, + SimpleNamespace( + parameter_rules=[ + ParameterRule( + name="temperature", + label=I18nObject(en_US="Temperature"), + type=ParameterType.FLOAT, + ) + ] + ), + 1, + ), + ], + ) + def test_get_model_parameter_rules_should_handle_missing_credentials_and_schema( + self, + credentials: dict[str, Any] | None, + schema: Any, + expected_count: int, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + provider_configuration.get_current_credentials.return_value = credentials + provider_configuration.get_model_schema.return_value = schema + monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) + + result = service.get_model_parameter_rules(tenant_id="tenant-1", provider="openai", model="gpt-4o") + + assert len(result) == expected_count + provider_configuration.get_current_credentials.assert_called_once_with( + model_type=ModelType.LLM, + model="gpt-4o", + ) + if credentials: + provider_configuration.get_model_schema.assert_called_once_with( + model_type=ModelType.LLM, + model="gpt-4o", + credentials=credentials, + ) + else: + provider_configuration.get_model_schema.assert_not_called() + + def test_get_default_model_of_model_type_should_return_response_when_manager_returns_model(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_default_model.return_value = SimpleNamespace( + model="gpt-4o", + model_type=ModelType.LLM, + provider=SimpleNamespace( + provider="openai", + label=I18nObject(en_US="OpenAI"), + icon_small=None, + supported_model_types=[ModelType.LLM], + ), + ) + + result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert result is not None + assert result.model == "gpt-4o" + assert result.provider.provider == "openai" + manager.get_default_model.assert_called_once_with(tenant_id="tenant-1", model_type=ModelType.LLM) + + def test_get_default_model_of_model_type_should_return_none_when_manager_returns_none(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_default_model.return_value = None + + result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert result is None + + def test_get_default_model_of_model_type_should_return_none_when_manager_raises_exception(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_default_model.side_effect = RuntimeError("boom") + + result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert result is None + + def test_update_default_model_of_model_type_should_delegate_to_provider_manager(self) -> None: + service, manager = _create_service_with_mocked_manager() + + service.update_default_model_of_model_type( + tenant_id="tenant-1", + model_type=ModelType.LLM.value, + provider="openai", + model="gpt-4o", + ) + + manager.update_default_model_record.assert_called_once_with( + tenant_id="tenant-1", + model_type=ModelType.LLM, + provider="openai", + model="gpt-4o", + ) + + def test_get_model_provider_icon_should_fetch_icon_bytes_from_factory( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + factory_instance = MagicMock() + factory_instance.get_provider_icon.return_value = (b"icon-bytes", "image/png") + factory_constructor = MagicMock(return_value=factory_instance) + monkeypatch.setattr(service_module, "create_plugin_model_provider_factory", factory_constructor) + + result = service.get_model_provider_icon( + tenant_id="tenant-1", + provider="openai", + icon_type="icon_small", + lang="en_US", + ) + + factory_constructor.assert_called_once_with(tenant_id="tenant-1") + factory_instance.get_provider_icon.assert_called_once_with("openai", "icon_small", "en_US") + assert result == (b"icon-bytes", "image/png") + + def test_switch_preferred_provider_should_convert_enum_and_delegate( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) + + service.switch_preferred_provider( + tenant_id="tenant-1", + provider="openai", + preferred_provider_type=ProviderType.SYSTEM.value, + ) + + provider_configuration.switch_preferred_provider_type.assert_called_once_with(ProviderType.SYSTEM) + + @pytest.mark.parametrize( + ("method_name", "provider_method_name"), + [ + ("enable_model", "enable_model"), + ("disable_model", "disable_model"), + ], + ) + def test_model_enablement_methods_should_convert_model_type_and_delegate( + self, + method_name: str, + provider_method_name: str, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) + + getattr(service, method_name)( + tenant_id="tenant-1", + provider="openai", + model="gpt-4o", + model_type=ModelType.LLM.value, + ) + + getattr(provider_configuration, provider_method_name).assert_called_once_with( + model="gpt-4o", + model_type=ModelType.LLM, + ) diff --git a/api/tests/unit_tests/services/test_model_provider_service_sanitization.py b/api/tests/unit_tests/services/test_model_provider_service_sanitization.py index acf5dff634..97f3bd6f01 100644 --- a/api/tests/unit_tests/services/test_model_provider_service_sanitization.py +++ b/api/tests/unit_tests/services/test_model_provider_service_sanitization.py @@ -1,11 +1,11 @@ import types import pytest + +from core.entities.provider_entities import CredentialConfiguration, CustomModelConfiguration from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import ModelType from graphon.model_runtime.entities.provider_entities import ConfigurateMethod - -from core.entities.provider_entities import CredentialConfiguration, CustomModelConfiguration from models.provider import ProviderType from services.model_provider_service import ModelProviderService @@ -85,644 +85,3 @@ def test_get_provider_list_strips_credentials(service_with_fake_configurations: assert len(custom_models) == 1 # The sanitizer should drop credentials in list response assert custom_models[0].credentials is None - - -# === Merged from test_model_provider_service.py === - - -from types import SimpleNamespace -from typing import Any -from unittest.mock import MagicMock - -import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType, ParameterRule, ParameterType - -from core.entities.model_entities import ModelStatus -from models.provider import ProviderType -from services import model_provider_service as service_module -from services.errors.app_model_config import ProviderNotFoundError -from services.model_provider_service import ModelProviderService - - -def _create_service_with_mocked_manager() -> tuple[ModelProviderService, MagicMock]: - manager = MagicMock() - service = ModelProviderService() - service._get_provider_manager = MagicMock(return_value=manager) - return service, manager - - -def _build_provider_configuration( - *, - provider_name: str = "openai", - supported_model_types: list[ModelType] | None = None, - custom_models: list[Any] | None = None, - custom_config_available: bool = True, -) -> SimpleNamespace: - if supported_model_types is None: - supported_model_types = [ModelType.LLM] - return SimpleNamespace( - provider=SimpleNamespace( - provider=provider_name, - label=I18nObject(en_US=provider_name), - description=None, - icon_small=None, - icon_small_dark=None, - background=None, - help=None, - supported_model_types=supported_model_types, - configurate_methods=[], - provider_credential_schema=None, - model_credential_schema=None, - ), - preferred_provider_type=ProviderType.CUSTOM, - custom_configuration=SimpleNamespace( - provider=SimpleNamespace( - current_credential_id="cred-1", - current_credential_name="Credential 1", - available_credentials=[], - ), - models=custom_models, - can_added_models=[], - ), - system_configuration=SimpleNamespace(enabled=False, current_quota_type=None, quota_configurations=[]), - is_custom_configuration_available=lambda: custom_config_available, - ) - - -def test__get_provider_configuration_should_return_configuration_when_provider_exists() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - provider_configuration = SimpleNamespace(name="provider-config") - manager.get_configurations.return_value = {"openai": provider_configuration} - - # Act - result = service._get_provider_configuration(tenant_id="tenant-1", provider="openai") - - # Assert - assert result is provider_configuration - - -def test__get_provider_configuration_should_raise_error_when_provider_is_missing() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_configurations.return_value = {} - - # Act / Assert - with pytest.raises(ProviderNotFoundError, match="does not exist"): - service._get_provider_configuration(tenant_id="tenant-1", provider="missing") - - -def test_get_provider_list_should_filter_by_model_type_and_build_no_configure_status() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - allowed = _build_provider_configuration( - provider_name="openai", - supported_model_types=[ModelType.LLM], - custom_config_available=False, - ) - filtered = _build_provider_configuration( - provider_name="embedding", - supported_model_types=[ModelType.TEXT_EMBEDDING], - custom_config_available=True, - ) - manager.get_configurations.return_value = {"openai": allowed, "embedding": filtered} - - # Act - result = service.get_provider_list(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert len(result) == 1 - assert result[0].provider == "openai" - assert result[0].custom_configuration.status.value == "no-configure" - - -def test_get_models_by_provider_should_wrap_model_entities_with_tenant_context() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - - class _Model: - def __init__(self, model_name: str) -> None: - self.model_name = model_name - - def model_dump(self) -> dict[str, Any]: - return { - "model": self.model_name, - "label": {"en_US": self.model_name}, - "model_type": ModelType.LLM, - "features": [], - "fetch_from": FetchFrom.PREDEFINED_MODEL, - "model_properties": {}, - "deprecated": False, - "status": ModelStatus.ACTIVE, - "load_balancing_enabled": False, - "has_invalid_load_balancing_configs": False, - "provider": { - "provider": "openai", - "label": {"en_US": "OpenAI"}, - "icon_small": None, - "icon_small_dark": None, - "supported_model_types": [ModelType.LLM], - }, - } - - provider_configurations = SimpleNamespace( - get_models=MagicMock(return_value=[_Model("gpt-4o"), _Model("gpt-4o-mini")]) - ) - manager.get_configurations.return_value = provider_configurations - - # Act - result = service.get_models_by_provider(tenant_id="tenant-1", provider="openai") - - # Assert - assert len(result) == 2 - assert result[0].model == "gpt-4o" - assert result[1].provider.provider == "openai" - provider_configurations.get_models.assert_called_once_with(provider="openai") - - -@pytest.mark.parametrize( - ("method_name", "method_kwargs", "provider_method_name", "provider_call_kwargs", "provider_return"), - [ - ( - "get_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, - "get_provider_credential", - {"credential_id": "cred-1"}, - {"token": "abc"}, - ), - ( - "validate_provider_credentials", - {"tenant_id": "tenant-1", "provider": "openai", "credentials": {"token": "abc"}}, - "validate_provider_credentials", - ({"token": "abc"},), - None, - ), - ( - "create_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credentials": {"token": "abc"}, "credential_name": "A"}, - "create_provider_credential", - ({"token": "abc"}, "A"), - None, - ), - ( - "update_provider_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "credentials": {"token": "abc"}, - "credential_id": "cred-1", - "credential_name": "B", - }, - "update_provider_credential", - {"credential_id": "cred-1", "credentials": {"token": "abc"}, "credential_name": "B"}, - None, - ), - ( - "remove_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, - "delete_provider_credential", - {"credential_id": "cred-1"}, - None, - ), - ( - "switch_active_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, - "switch_active_provider_credential", - {"credential_id": "cred-1"}, - None, - ), - ], -) -def test_provider_credential_methods_should_delegate_to_provider_configuration( - method_name: str, - method_kwargs: dict[str, Any], - provider_method_name: str, - provider_call_kwargs: Any, - provider_return: Any, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - getattr(provider_configuration, provider_method_name).return_value = provider_return - get_provider_config_mock = MagicMock(return_value=provider_configuration) - monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) - - # Act - result = getattr(service, method_name)(**method_kwargs) - - # Assert - get_provider_config_mock.assert_called_once_with("tenant-1", "openai") - provider_method = getattr(provider_configuration, provider_method_name) - if isinstance(provider_call_kwargs, tuple): - provider_method.assert_called_once_with(*provider_call_kwargs) - elif isinstance(provider_call_kwargs, dict): - provider_method.assert_called_once_with(**provider_call_kwargs) - else: - provider_method.assert_called_once_with(provider_call_kwargs) - if method_name == "get_provider_credential": - assert result == {"token": "abc"} - - -@pytest.mark.parametrize( - ("method_name", "method_kwargs", "provider_method_name", "expected_kwargs", "provider_return"), - [ - ( - "get_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "get_custom_model_credential", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - {"api_key": "x"}, - ), - ( - "validate_model_credentials", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - }, - "validate_custom_model_credentials", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credentials": {"api_key": "x"}}, - None, - ), - ( - "create_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_name": "cred-a", - }, - "create_custom_model_credential", - { - "model_type": ModelType.LLM, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_name": "cred-a", - }, - None, - ), - ( - "update_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_id": "cred-1", - "credential_name": "cred-b", - }, - "update_custom_model_credential", - { - "model_type": ModelType.LLM, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_id": "cred-1", - "credential_name": "cred-b", - }, - None, - ), - ( - "remove_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "delete_custom_model_credential", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - None, - ), - ( - "switch_active_custom_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "switch_custom_model_credential", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - None, - ), - ( - "add_model_credential_to_model_list", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "add_model_credential_to_model", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - None, - ), - ( - "remove_model", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - }, - "delete_custom_model", - {"model_type": ModelType.LLM, "model": "gpt-4o"}, - None, - ), - ], -) -def test_custom_model_methods_should_convert_model_type_and_delegate( - method_name: str, - method_kwargs: dict[str, Any], - provider_method_name: str, - expected_kwargs: dict[str, Any], - provider_return: Any, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - getattr(provider_configuration, provider_method_name).return_value = provider_return - get_provider_config_mock = MagicMock(return_value=provider_configuration) - monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) - - # Act - result = getattr(service, method_name)(**method_kwargs) - - # Assert - get_provider_config_mock.assert_called_once_with("tenant-1", "openai") - getattr(provider_configuration, provider_method_name).assert_called_once_with(**expected_kwargs) - if method_name == "get_model_credential": - assert result == {"api_key": "x"} - - -def test_get_models_by_model_type_should_group_active_non_deprecated_models() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - openai_provider = SimpleNamespace( - provider="openai", - label=I18nObject(en_US="OpenAI"), - icon_small=None, - icon_small_dark=None, - ) - anthropic_provider = SimpleNamespace( - provider="anthropic", - label=I18nObject(en_US="Anthropic"), - icon_small=None, - icon_small_dark=None, - ) - models = [ - SimpleNamespace( - provider=openai_provider, - model="gpt-4o", - label=I18nObject(en_US="GPT-4o"), - model_type=ModelType.LLM, - features=[], - fetch_from=FetchFrom.PREDEFINED_MODEL, - model_properties={}, - status=ModelStatus.ACTIVE, - load_balancing_enabled=False, - deprecated=False, - ), - SimpleNamespace( - provider=openai_provider, - model="old-openai", - label=I18nObject(en_US="Old OpenAI"), - model_type=ModelType.LLM, - features=[], - fetch_from=FetchFrom.PREDEFINED_MODEL, - model_properties={}, - status=ModelStatus.ACTIVE, - load_balancing_enabled=False, - deprecated=True, - ), - SimpleNamespace( - provider=anthropic_provider, - model="old-anthropic", - label=I18nObject(en_US="Old Anthropic"), - model_type=ModelType.LLM, - features=[], - fetch_from=FetchFrom.PREDEFINED_MODEL, - model_properties={}, - status=ModelStatus.ACTIVE, - load_balancing_enabled=False, - deprecated=True, - ), - ] - provider_configurations = SimpleNamespace(get_models=MagicMock(return_value=models)) - manager.get_configurations.return_value = provider_configurations - - # Act - result = service.get_models_by_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - provider_configurations.get_models.assert_called_once_with(model_type=ModelType.LLM, only_active=True) - assert len(result) == 1 - assert result[0].provider == "openai" - assert len(result[0].models) == 1 - assert result[0].models[0].model == "gpt-4o" - - -@pytest.mark.parametrize( - ("credentials", "schema", "expected_count"), - [ - (None, None, 0), - ({"api_key": "x"}, None, 0), - ( - {"api_key": "x"}, - SimpleNamespace( - parameter_rules=[ - ParameterRule( - name="temperature", - label=I18nObject(en_US="Temperature"), - type=ParameterType.FLOAT, - ) - ] - ), - 1, - ), - ], -) -def test_get_model_parameter_rules_should_handle_missing_credentials_and_schema( - credentials: dict[str, Any] | None, - schema: Any, - expected_count: int, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - provider_configuration.get_current_credentials.return_value = credentials - provider_configuration.get_model_schema.return_value = schema - monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) - - # Act - result = service.get_model_parameter_rules(tenant_id="tenant-1", provider="openai", model="gpt-4o") - - # Assert - assert len(result) == expected_count - provider_configuration.get_current_credentials.assert_called_once_with(model_type=ModelType.LLM, model="gpt-4o") - if credentials: - provider_configuration.get_model_schema.assert_called_once_with( - model_type=ModelType.LLM, - model="gpt-4o", - credentials=credentials, - ) - else: - provider_configuration.get_model_schema.assert_not_called() - - -def test_get_default_model_of_model_type_should_return_response_when_manager_returns_model() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_default_model.return_value = SimpleNamespace( - model="gpt-4o", - model_type=ModelType.LLM, - provider=SimpleNamespace( - provider="openai", - label=I18nObject(en_US="OpenAI"), - icon_small=None, - supported_model_types=[ModelType.LLM], - ), - ) - - # Act - result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert result is not None - assert result.model == "gpt-4o" - assert result.provider.provider == "openai" - manager.get_default_model.assert_called_once_with(tenant_id="tenant-1", model_type=ModelType.LLM) - - -def test_get_default_model_of_model_type_should_return_none_when_manager_returns_none() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_default_model.return_value = None - - # Act - result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert result is None - - -def test_get_default_model_of_model_type_should_return_none_when_manager_raises_exception() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_default_model.side_effect = RuntimeError("boom") - - # Act - result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert result is None - - -def test_update_default_model_of_model_type_should_delegate_to_provider_manager() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - - # Act - service.update_default_model_of_model_type( - tenant_id="tenant-1", - model_type=ModelType.LLM.value, - provider="openai", - model="gpt-4o", - ) - - # Assert - manager.update_default_model_record.assert_called_once_with( - tenant_id="tenant-1", - model_type=ModelType.LLM, - provider="openai", - model="gpt-4o", - ) - - -def test_get_model_provider_icon_should_fetch_icon_bytes_from_factory(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - service = ModelProviderService() - factory_instance = MagicMock() - factory_instance.get_provider_icon.return_value = (b"icon-bytes", "image/png") - factory_constructor = MagicMock(return_value=factory_instance) - monkeypatch.setattr(service_module, "create_plugin_model_provider_factory", factory_constructor) - - # Act - result = service.get_model_provider_icon( - tenant_id="tenant-1", - provider="openai", - icon_type="icon_small", - lang="en_US", - ) - - # Assert - factory_constructor.assert_called_once_with(tenant_id="tenant-1") - factory_instance.get_provider_icon.assert_called_once_with("openai", "icon_small", "en_US") - assert result == (b"icon-bytes", "image/png") - - -def test_switch_preferred_provider_should_convert_enum_and_delegate(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) - - # Act - service.switch_preferred_provider( - tenant_id="tenant-1", - provider="openai", - preferred_provider_type=ProviderType.SYSTEM.value, - ) - - # Assert - provider_configuration.switch_preferred_provider_type.assert_called_once_with(ProviderType.SYSTEM) - - -@pytest.mark.parametrize( - ("method_name", "provider_method_name"), - [ - ("enable_model", "enable_model"), - ("disable_model", "disable_model"), - ], -) -def test_model_enablement_methods_should_convert_model_type_and_delegate( - method_name: str, - provider_method_name: str, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) - - # Act - getattr(service, method_name)( - tenant_id="tenant-1", - provider="openai", - model="gpt-4o", - model_type=ModelType.LLM.value, - ) - - # Assert - getattr(provider_configuration, provider_method_name).assert_called_once_with( - model="gpt-4o", - model_type=ModelType.LLM, - ) diff --git a/api/tests/unit_tests/services/test_operation_service.py b/api/tests/unit_tests/services/test_operation_service.py index a4c69b23ac..e43a7fa649 100644 --- a/api/tests/unit_tests/services/test_operation_service.py +++ b/api/tests/unit_tests/services/test_operation_service.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import MagicMock, patch import httpx @@ -105,7 +106,7 @@ class TestOperationService: ) @patch.object(OperationService, "_send_request") def test_should_map_parameters_correctly_when_record_utm_called( - self, mock_send: MagicMock, utm_info: dict, expected_params: dict + self, mock_send: MagicMock, utm_info: dict[str, Any], expected_params: dict[str, Any] ): """Test that record_utm correctly maps utm_info to parameters and calls _send_request""" # Arrange diff --git a/api/tests/unit_tests/services/test_ops_service.py b/api/tests/unit_tests/services/test_ops_service.py deleted file mode 100644 index 7067e3b3dd..0000000000 --- a/api/tests/unit_tests/services/test_ops_service.py +++ /dev/null @@ -1,392 +0,0 @@ -from unittest.mock import MagicMock, patch - -import pytest - -from core.ops.entities.config_entity import TracingProviderEnum -from models.model import App, TraceAppConfig -from services.ops_service import OpsService - - -class TestOpsService: - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_get_tracing_app_config_no_config(self, mock_ops_trace_manager, mock_db): - # Arrange - mock_db.session.scalar.return_value = None - - # Act - result = OpsService.get_tracing_app_config("app_id", "arize") - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_get_tracing_app_config_no_app(self, mock_ops_trace_manager, mock_db): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - mock_db.session.scalar.return_value = trace_config - mock_db.session.get.return_value = None - - # Act - result = OpsService.get_tracing_app_config("app_id", "arize") - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_get_tracing_app_config_none_config(self, mock_ops_trace_manager, mock_db): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - trace_config.tracing_config = None - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = trace_config - mock_db.session.get.return_value = app - - # Act & Assert - with pytest.raises(ValueError, match="Tracing config cannot be None."): - OpsService.get_tracing_app_config("app_id", "arize") - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - @pytest.mark.parametrize( - ("provider", "default_url"), - [ - ("arize", "https://app.arize.com/"), - ("phoenix", "https://app.phoenix.arize.com/projects/"), - ("langsmith", "https://smith.langchain.com/"), - ("opik", "https://www.comet.com/opik/"), - ("weave", "https://wandb.ai/"), - ("aliyun", "https://arms.console.aliyun.com/"), - ("tencent", "https://console.cloud.tencent.com/apm"), - ("mlflow", "http://localhost:5000/"), - ("databricks", "https://www.databricks.com/"), - ], - ) - def test_get_tracing_app_config_providers_exception(self, mock_ops_trace_manager, mock_db, provider, default_url): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - trace_config.tracing_config = {"some": "config"} - trace_config.to_dict.return_value = {"tracing_config": {"project_url": default_url}} - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = trace_config - mock_db.session.get.return_value = app - - mock_ops_trace_manager.decrypt_tracing_config.return_value = {} - mock_ops_trace_manager.obfuscated_decrypt_token.return_value = {} - mock_ops_trace_manager.get_trace_config_project_url.side_effect = Exception("error") - mock_ops_trace_manager.get_trace_config_project_key.side_effect = Exception("error") - - # Act - result = OpsService.get_tracing_app_config("app_id", provider) - - # Assert - assert result["tracing_config"]["project_url"] == default_url - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - @pytest.mark.parametrize( - "provider", ["arize", "phoenix", "langsmith", "opik", "weave", "aliyun", "tencent", "mlflow", "databricks"] - ) - def test_get_tracing_app_config_providers_success(self, mock_ops_trace_manager, mock_db, provider): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - trace_config.tracing_config = {"some": "config"} - trace_config.to_dict.return_value = {"tracing_config": {"project_url": "success_url"}} - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = trace_config - mock_db.session.get.return_value = app - - mock_ops_trace_manager.decrypt_tracing_config.return_value = {} - mock_ops_trace_manager.obfuscated_decrypt_token.return_value = {} - mock_ops_trace_manager.get_trace_config_project_url.return_value = "success_url" - - # Act - result = OpsService.get_tracing_app_config("app_id", provider) - - # Assert - assert result["tracing_config"]["project_url"] == "success_url" - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_get_tracing_app_config_langfuse_success(self, mock_ops_trace_manager, mock_db): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - trace_config.tracing_config = {"some": "config"} - trace_config.to_dict.return_value = {"tracing_config": {"project_url": "https://api.langfuse.com/project/key"}} - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = trace_config - mock_db.session.get.return_value = app - - mock_ops_trace_manager.decrypt_tracing_config.return_value = {"host": "https://api.langfuse.com"} - mock_ops_trace_manager.obfuscated_decrypt_token.return_value = {"host": "https://api.langfuse.com"} - mock_ops_trace_manager.get_trace_config_project_key.return_value = "key" - - # Act - result = OpsService.get_tracing_app_config("app_id", "langfuse") - - # Assert - assert result["tracing_config"]["project_url"] == "https://api.langfuse.com/project/key" - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_get_tracing_app_config_langfuse_exception(self, mock_ops_trace_manager, mock_db): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - trace_config.tracing_config = {"some": "config"} - trace_config.to_dict.return_value = {"tracing_config": {"project_url": "https://api.langfuse.com/"}} - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = trace_config - mock_db.session.get.return_value = app - - mock_ops_trace_manager.decrypt_tracing_config.return_value = {"host": "https://api.langfuse.com"} - mock_ops_trace_manager.obfuscated_decrypt_token.return_value = {"host": "https://api.langfuse.com"} - mock_ops_trace_manager.get_trace_config_project_key.side_effect = Exception("error") - - # Act - result = OpsService.get_tracing_app_config("app_id", "langfuse") - - # Assert - assert result["tracing_config"]["project_url"] == "https://api.langfuse.com/" - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_invalid_provider(self, mock_ops_trace_manager, mock_db): - # Act - result = OpsService.create_tracing_app_config("app_id", "invalid_provider", {}) - - # Assert - assert result == {"error": "Invalid tracing provider: invalid_provider"} - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_invalid_credentials(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.LANGFUSE - mock_ops_trace_manager.check_trace_config_is_effective.return_value = False - - # Act - result = OpsService.create_tracing_app_config("app_id", provider, {"public_key": "p", "secret_key": "s"}) - - # Assert - assert result == {"error": "Invalid Credentials"} - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - @pytest.mark.parametrize( - ("provider", "config"), - [ - (TracingProviderEnum.ARIZE, {}), - (TracingProviderEnum.LANGFUSE, {"public_key": "p", "secret_key": "s"}), - (TracingProviderEnum.LANGSMITH, {"api_key": "k", "project": "p"}), - (TracingProviderEnum.ALIYUN, {"license_key": "k", "endpoint": "https://aliyun.com"}), - ], - ) - def test_create_tracing_app_config_project_url_exception(self, mock_ops_trace_manager, mock_db, provider, config): - # Arrange - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - mock_ops_trace_manager.get_trace_config_project_url.side_effect = Exception("error") - mock_ops_trace_manager.get_trace_config_project_key.side_effect = Exception("error") - mock_db.session.scalar.return_value = MagicMock(spec=TraceAppConfig) - - # Act - result = OpsService.create_tracing_app_config("app_id", provider, config) - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_langfuse_success(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.LANGFUSE - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - mock_ops_trace_manager.get_trace_config_project_key.return_value = "key" - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = None - mock_db.session.get.return_value = app - mock_ops_trace_manager.encrypt_tracing_config.return_value = {} - - # Act - result = OpsService.create_tracing_app_config( - "app_id", provider, {"public_key": "p", "secret_key": "s", "host": "https://api.langfuse.com"} - ) - - # Assert - assert result == {"result": "success"} - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_already_exists(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - mock_db.session.scalar.return_value = MagicMock(spec=TraceAppConfig) - - # Act - result = OpsService.create_tracing_app_config("app_id", provider, {}) - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_no_app(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - mock_db.session.scalar.return_value = None - mock_db.session.get.return_value = None - - # Act - result = OpsService.create_tracing_app_config("app_id", provider, {}) - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_with_empty_other_keys(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = None - mock_db.session.get.return_value = app - mock_ops_trace_manager.encrypt_tracing_config.return_value = {} - - # Act - # 'project' is in other_keys for Arize - # provide an empty string for the project in the tracing_config - # create_tracing_app_config will replace it with the default from the model - result = OpsService.create_tracing_app_config("app_id", provider, {"project": ""}) - - # Assert - assert result == {"result": "success"} - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_create_tracing_app_config_success(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - mock_ops_trace_manager.get_trace_config_project_url.return_value = "http://project_url" - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = None - mock_db.session.get.return_value = app - mock_ops_trace_manager.encrypt_tracing_config.return_value = {"encrypted": "config"} - - # Act - result = OpsService.create_tracing_app_config("app_id", provider, {}) - - # Assert - assert result == {"result": "success"} - mock_db.session.add.assert_called() - mock_db.session.commit.assert_called() - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_update_tracing_app_config_invalid_provider(self, mock_ops_trace_manager, mock_db): - # Act & Assert - with pytest.raises(ValueError, match="Invalid tracing provider: invalid_provider"): - OpsService.update_tracing_app_config("app_id", "invalid_provider", {}) - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_update_tracing_app_config_no_config(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - mock_db.session.scalar.return_value = None - - # Act - result = OpsService.update_tracing_app_config("app_id", provider, {}) - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_update_tracing_app_config_no_app(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - current_config = MagicMock(spec=TraceAppConfig) - mock_db.session.scalar.return_value = current_config - mock_db.session.get.return_value = None - - # Act - result = OpsService.update_tracing_app_config("app_id", provider, {}) - - # Assert - assert result is None - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_update_tracing_app_config_invalid_credentials(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - current_config = MagicMock(spec=TraceAppConfig) - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = current_config - mock_db.session.get.return_value = app - mock_ops_trace_manager.decrypt_tracing_config.return_value = {} - mock_ops_trace_manager.check_trace_config_is_effective.return_value = False - - # Act & Assert - with pytest.raises(ValueError, match="Invalid Credentials"): - OpsService.update_tracing_app_config("app_id", provider, {}) - - @patch("services.ops_service.db") - @patch("services.ops_service.OpsTraceManager") - def test_update_tracing_app_config_success(self, mock_ops_trace_manager, mock_db): - # Arrange - provider = TracingProviderEnum.ARIZE - current_config = MagicMock(spec=TraceAppConfig) - current_config.to_dict.return_value = {"some": "data"} - app = MagicMock(spec=App) - app.tenant_id = "tenant_id" - mock_db.session.scalar.return_value = current_config - mock_db.session.get.return_value = app - mock_ops_trace_manager.decrypt_tracing_config.return_value = {} - mock_ops_trace_manager.check_trace_config_is_effective.return_value = True - - # Act - result = OpsService.update_tracing_app_config("app_id", provider, {}) - - # Assert - assert result == {"some": "data"} - mock_db.session.commit.assert_called_once() - - @patch("services.ops_service.db") - def test_delete_tracing_app_config_no_config(self, mock_db): - # Arrange - mock_db.session.scalar.return_value = None - - # Act - result = OpsService.delete_tracing_app_config("app_id", "arize") - - # Assert - assert result is None - - @patch("services.ops_service.db") - def test_delete_tracing_app_config_success(self, mock_db): - # Arrange - trace_config = MagicMock(spec=TraceAppConfig) - mock_db.session.scalar.return_value = trace_config - - # Act - result = OpsService.delete_tracing_app_config("app_id", "arize") - - # Assert - assert result is True - mock_db.session.delete.assert_called_with(trace_config) - mock_db.session.commit.assert_called_once() diff --git a/api/tests/unit_tests/services/test_recommended_app_service.py b/api/tests/unit_tests/services/test_recommended_app_service.py deleted file mode 100644 index 12bc84db87..0000000000 --- a/api/tests/unit_tests/services/test_recommended_app_service.py +++ /dev/null @@ -1,628 +0,0 @@ -""" -Comprehensive unit tests for RecommendedAppService. - -This test suite provides complete coverage of recommended app operations in Dify, -following TDD principles with the Arrange-Act-Assert pattern. - -## Test Coverage - -### 1. Get Recommended Apps and Categories (TestRecommendedAppServiceGetApps) -Tests fetching recommended apps with categories: -- Successful retrieval with recommended apps -- Fallback to builtin when no recommended apps -- Different language support -- Factory mode selection (remote, builtin, db) -- Empty result handling - -### 2. Get Recommend App Detail (TestRecommendedAppServiceGetDetail) -Tests fetching individual app details: -- Successful app detail retrieval -- Different factory modes -- App not found scenarios -- Language-specific details - -## Testing Approach - -- **Mocking Strategy**: All external dependencies (dify_config, RecommendAppRetrievalFactory) - are mocked for fast, isolated unit tests -- **Factory Pattern**: Tests verify correct factory selection based on mode -- **Fixtures**: Mock objects are configured per test method -- **Assertions**: Each test verifies return values and factory method calls - -## Key Concepts - -**Factory Modes:** -- remote: Fetch from remote API -- builtin: Use built-in templates -- db: Fetch from database - -**Fallback Logic:** -- If remote/db returns no apps, fallback to builtin en-US templates -- Ensures users always see some recommended apps -""" - -from unittest.mock import MagicMock, patch - -import pytest - -from services.recommended_app_service import RecommendedAppService - - -class RecommendedAppServiceTestDataFactory: - """ - Factory for creating test data and mock objects. - - Provides reusable methods to create consistent mock objects for testing - recommended app operations. - """ - - @staticmethod - def create_recommended_apps_response( - recommended_apps: list[dict] | None = None, - categories: list[str] | None = None, - ) -> dict: - """ - Create a mock response for recommended apps. - - Args: - recommended_apps: List of recommended app dictionaries - categories: List of category names - - Returns: - Dictionary with recommended_apps and categories - """ - if recommended_apps is None: - recommended_apps = [ - { - "id": "app-1", - "name": "Test App 1", - "description": "Test description 1", - "category": "productivity", - }, - { - "id": "app-2", - "name": "Test App 2", - "description": "Test description 2", - "category": "communication", - }, - ] - if categories is None: - categories = ["productivity", "communication", "utilities"] - - return { - "recommended_apps": recommended_apps, - "categories": categories, - } - - @staticmethod - def create_app_detail_response( - app_id: str = "app-123", - name: str = "Test App", - description: str = "Test description", - **kwargs, - ) -> dict: - """ - Create a mock response for app detail. - - Args: - app_id: App identifier - name: App name - description: App description - **kwargs: Additional fields - - Returns: - Dictionary with app details - """ - detail = { - "id": app_id, - "name": name, - "description": description, - "category": kwargs.get("category", "productivity"), - "icon": kwargs.get("icon", "🚀"), - "model_config": kwargs.get("model_config", {}), - } - detail.update(kwargs) - return detail - - -@pytest.fixture -def factory(): - """Provide the test data factory to all tests.""" - return RecommendedAppServiceTestDataFactory - - -class TestRecommendedAppServiceGetApps: - """Test get_recommended_apps_and_categories operations.""" - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommended_apps_success_with_apps(self, mock_config, mock_factory_class, factory): - """Test successful retrieval of recommended apps when apps are returned.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" - - expected_response = factory.create_recommended_apps_response() - - # Mock factory and retrieval instance - mock_retrieval_instance = MagicMock() - mock_retrieval_instance.get_recommended_apps_and_categories.return_value = expected_response - - mock_factory = MagicMock() - mock_factory.return_value = mock_retrieval_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = RecommendedAppService.get_recommended_apps_and_categories("en-US") - - # Assert - assert result == expected_response - assert len(result["recommended_apps"]) == 2 - assert len(result["categories"]) == 3 - mock_factory_class.get_recommend_app_factory.assert_called_once_with("remote") - mock_retrieval_instance.get_recommended_apps_and_categories.assert_called_once_with("en-US") - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommended_apps_fallback_to_builtin_when_empty(self, mock_config, mock_factory_class, factory): - """Test fallback to builtin when no recommended apps are returned.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" - - # Remote returns empty recommended_apps - empty_response = {"recommended_apps": [], "categories": []} - - # Builtin fallback response - builtin_response = factory.create_recommended_apps_response( - recommended_apps=[{"id": "builtin-1", "name": "Builtin App", "category": "default"}] - ) - - # Mock remote retrieval instance (returns empty) - mock_remote_instance = MagicMock() - mock_remote_instance.get_recommended_apps_and_categories.return_value = empty_response - - mock_remote_factory = MagicMock() - mock_remote_factory.return_value = mock_remote_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_remote_factory - - # Mock builtin retrieval instance - mock_builtin_instance = MagicMock() - mock_builtin_instance.fetch_recommended_apps_from_builtin.return_value = builtin_response - mock_factory_class.get_buildin_recommend_app_retrieval.return_value = mock_builtin_instance - - # Act - result = RecommendedAppService.get_recommended_apps_and_categories("zh-CN") - - # Assert - assert result == builtin_response - assert len(result["recommended_apps"]) == 1 - assert result["recommended_apps"][0]["id"] == "builtin-1" - # Verify fallback was called with en-US (hardcoded) - mock_builtin_instance.fetch_recommended_apps_from_builtin.assert_called_once_with("en-US") - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommended_apps_fallback_when_none_recommended_apps(self, mock_config, mock_factory_class, factory): - """Test fallback when recommended_apps key is None.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "db" - - # Response with None recommended_apps - none_response = {"recommended_apps": None, "categories": ["test"]} - - # Builtin fallback response - builtin_response = factory.create_recommended_apps_response() - - # Mock db retrieval instance (returns None) - mock_db_instance = MagicMock() - mock_db_instance.get_recommended_apps_and_categories.return_value = none_response - - mock_db_factory = MagicMock() - mock_db_factory.return_value = mock_db_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_db_factory - - # Mock builtin retrieval instance - mock_builtin_instance = MagicMock() - mock_builtin_instance.fetch_recommended_apps_from_builtin.return_value = builtin_response - mock_factory_class.get_buildin_recommend_app_retrieval.return_value = mock_builtin_instance - - # Act - result = RecommendedAppService.get_recommended_apps_and_categories("en-US") - - # Assert - assert result == builtin_response - mock_builtin_instance.fetch_recommended_apps_from_builtin.assert_called_once() - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommended_apps_with_different_languages(self, mock_config, mock_factory_class, factory): - """Test retrieval with different language codes.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "builtin" - - languages = ["en-US", "zh-CN", "ja-JP", "fr-FR"] - - for language in languages: - # Create language-specific response - lang_response = factory.create_recommended_apps_response( - recommended_apps=[{"id": f"app-{language}", "name": f"App {language}", "category": "test"}] - ) - - # Mock retrieval instance - mock_instance = MagicMock() - mock_instance.get_recommended_apps_and_categories.return_value = lang_response - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = RecommendedAppService.get_recommended_apps_and_categories(language) - - # Assert - assert result["recommended_apps"][0]["id"] == f"app-{language}" - mock_instance.get_recommended_apps_and_categories.assert_called_with(language) - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommended_apps_uses_correct_factory_mode(self, mock_config, mock_factory_class, factory): - """Test that correct factory is selected based on mode.""" - # Arrange - modes = ["remote", "builtin", "db"] - - for mode in modes: - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = mode - - response = factory.create_recommended_apps_response() - - # Mock retrieval instance - mock_instance = MagicMock() - mock_instance.get_recommended_apps_and_categories.return_value = response - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - RecommendedAppService.get_recommended_apps_and_categories("en-US") - - # Assert - mock_factory_class.get_recommend_app_factory.assert_called_with(mode) - - -class TestRecommendedAppServiceGetDetail: - """Test get_recommend_app_detail operations.""" - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommend_app_detail_success(self, mock_config, mock_factory_class, factory): - """Test successful retrieval of app detail.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" - app_id = "app-123" - - expected_detail = factory.create_app_detail_response( - app_id=app_id, - name="Productivity App", - description="A great productivity app", - category="productivity", - ) - - # Mock retrieval instance - mock_instance = MagicMock() - mock_instance.get_recommend_app_detail.return_value = expected_detail - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail(app_id)) - - # Assert - assert result == expected_detail - assert result["id"] == app_id - assert result["name"] == "Productivity App" - mock_instance.get_recommend_app_detail.assert_called_once_with(app_id) - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommend_app_detail_with_different_modes(self, mock_config, mock_factory_class, factory): - """Test app detail retrieval with different factory modes.""" - # Arrange - modes = ["remote", "builtin", "db"] - app_id = "test-app" - - for mode in modes: - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = mode - - detail = factory.create_app_detail_response(app_id=app_id, name=f"App from {mode}") - - # Mock retrieval instance - mock_instance = MagicMock() - mock_instance.get_recommend_app_detail.return_value = detail - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail(app_id)) - - # Assert - assert result["name"] == f"App from {mode}" - mock_factory_class.get_recommend_app_factory.assert_called_with(mode) - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommend_app_detail_returns_none_when_not_found(self, mock_config, mock_factory_class, factory): - """Test that None is returned when app is not found.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" - app_id = "nonexistent-app" - - # Mock retrieval instance returning None - mock_instance = MagicMock() - mock_instance.get_recommend_app_detail.return_value = None - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail(app_id)) - - # Assert - assert result is None - mock_instance.get_recommend_app_detail.assert_called_once_with(app_id) - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommend_app_detail_returns_empty_dict(self, mock_config, mock_factory_class, factory): - """Test handling of empty dict response.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "builtin" - app_id = "app-empty" - - # Mock retrieval instance returning empty dict - mock_instance = MagicMock() - mock_instance.get_recommend_app_detail.return_value = {} - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail(app_id)) - - # Assert - assert result == {} - - @patch("services.recommended_app_service.RecommendAppRetrievalFactory", autospec=True) - @patch("services.recommended_app_service.dify_config", autospec=True) - def test_get_recommend_app_detail_with_complex_model_config(self, mock_config, mock_factory_class, factory): - """Test app detail with complex model configuration.""" - # Arrange - mock_config.HOSTED_FETCH_APP_TEMPLATES_MODE = "remote" - app_id = "complex-app" - - complex_model_config = { - "provider": "openai", - "model": "gpt-4", - "parameters": { - "temperature": 0.7, - "max_tokens": 2000, - "top_p": 1.0, - }, - } - - expected_detail = factory.create_app_detail_response( - app_id=app_id, - name="Complex App", - model_config=complex_model_config, - workflows=["workflow-1", "workflow-2"], - tools=["tool-1", "tool-2", "tool-3"], - ) - - # Mock retrieval instance - mock_instance = MagicMock() - mock_instance.get_recommend_app_detail.return_value = expected_detail - - mock_factory = MagicMock() - mock_factory.return_value = mock_instance - mock_factory_class.get_recommend_app_factory.return_value = mock_factory - - # Act - result = _recommendation_detail(RecommendedAppService.get_recommend_app_detail(app_id)) - - # Assert - assert result["model_config"] == complex_model_config - assert len(result["workflows"]) == 2 - assert len(result["tools"]) == 3 - - -# === Merged from test_recommended_app_service_additional.py === - - -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest - -from services import recommended_app_service as service_module -from services.recommended_app_service import RecommendedAppService - - -def _recommendation_detail(result: dict[str, Any] | None) -> dict[str, Any]: - return cast(dict[str, Any], result) - - -@pytest.fixture -def mocked_db_session(monkeypatch: pytest.MonkeyPatch) -> MagicMock: - # Arrange - session = MagicMock() - monkeypatch.setattr(service_module, "db", SimpleNamespace(session=session)) - - # Assert - return session - - -def _mock_factory_for_apps( - monkeypatch: pytest.MonkeyPatch, - *, - mode: str, - result: dict[str, Any], - fallback_result: dict[str, Any] | None = None, -) -> tuple[MagicMock, MagicMock]: - retrieval_instance = MagicMock() - retrieval_instance.get_recommended_apps_and_categories.return_value = result - retrieval_factory = MagicMock(return_value=retrieval_instance) - monkeypatch.setattr(service_module.dify_config, "HOSTED_FETCH_APP_TEMPLATES_MODE", mode, raising=False) - monkeypatch.setattr( - service_module.RecommendAppRetrievalFactory, - "get_recommend_app_factory", - MagicMock(return_value=retrieval_factory), - ) - - builtin_instance = MagicMock() - if fallback_result is not None: - builtin_instance.fetch_recommended_apps_from_builtin.return_value = fallback_result - monkeypatch.setattr( - service_module.RecommendAppRetrievalFactory, - "get_buildin_recommend_app_retrieval", - MagicMock(return_value=builtin_instance), - ) - return retrieval_instance, builtin_instance - - -def test_get_recommended_apps_and_categories_should_not_query_trial_table_when_trial_feature_disabled( - monkeypatch: pytest.MonkeyPatch, - mocked_db_session: MagicMock, -) -> None: - # Arrange - expected = {"recommended_apps": [{"app_id": "app-1"}], "categories": ["all"]} - retrieval_instance, builtin_instance = _mock_factory_for_apps( - monkeypatch, - mode="remote", - result=expected, - ) - monkeypatch.setattr( - service_module.FeatureService, - "get_system_features", - MagicMock(return_value=SimpleNamespace(enable_trial_app=False)), - ) - - # Act - result = RecommendedAppService.get_recommended_apps_and_categories("en-US") - - # Assert - assert result == expected - retrieval_instance.get_recommended_apps_and_categories.assert_called_once_with("en-US") - builtin_instance.fetch_recommended_apps_from_builtin.assert_not_called() - mocked_db_session.scalar.assert_not_called() - - -def test_get_recommended_apps_and_categories_should_fallback_and_enrich_can_trial_when_trial_feature_enabled( - monkeypatch: pytest.MonkeyPatch, - mocked_db_session: MagicMock, -) -> None: - # Arrange - remote_result = {"recommended_apps": [], "categories": []} - fallback_result = {"recommended_apps": [{"app_id": "app-1"}, {"app_id": "app-2"}], "categories": ["all"]} - _, builtin_instance = _mock_factory_for_apps( - monkeypatch, - mode="remote", - result=remote_result, - fallback_result=fallback_result, - ) - monkeypatch.setattr( - service_module.FeatureService, - "get_system_features", - MagicMock(return_value=SimpleNamespace(enable_trial_app=True)), - ) - mocked_db_session.scalar.side_effect = [SimpleNamespace(id="trial-app"), None] - - # Act - result = RecommendedAppService.get_recommended_apps_and_categories("ja-JP") - - # Assert - builtin_instance.fetch_recommended_apps_from_builtin.assert_called_once_with("en-US") - assert result["recommended_apps"][0]["can_trial"] is True - assert result["recommended_apps"][1]["can_trial"] is False - assert mocked_db_session.scalar.call_count == 2 - - -@pytest.mark.parametrize( - ("trial_query_result", "expected_can_trial"), - [ - (SimpleNamespace(id="trial"), True), - (None, False), - ], -) -def test_get_recommend_app_detail_should_set_can_trial_when_trial_feature_enabled( - monkeypatch: pytest.MonkeyPatch, - mocked_db_session: MagicMock, - trial_query_result: Any, - expected_can_trial: bool, -) -> None: - # Arrange - detail = {"id": "app-1", "name": "Test App"} - retrieval_instance = MagicMock() - retrieval_instance.get_recommend_app_detail.return_value = detail - retrieval_factory = MagicMock(return_value=retrieval_instance) - monkeypatch.setattr(service_module.dify_config, "HOSTED_FETCH_APP_TEMPLATES_MODE", "remote", raising=False) - monkeypatch.setattr( - service_module.RecommendAppRetrievalFactory, - "get_recommend_app_factory", - MagicMock(return_value=retrieval_factory), - ) - monkeypatch.setattr( - service_module.FeatureService, - "get_system_features", - MagicMock(return_value=SimpleNamespace(enable_trial_app=True)), - ) - mocked_db_session.scalar.return_value = trial_query_result - - # Act - result = cast(dict[str, Any], RecommendedAppService.get_recommend_app_detail("app-1")) - - # Assert - assert result["id"] == "app-1" - assert result["can_trial"] is expected_can_trial - mocked_db_session.scalar.assert_called_once() - - -def test_add_trial_app_record_should_increment_count_when_existing_record_found( - mocked_db_session: MagicMock, -) -> None: - # Arrange - existing_record = SimpleNamespace(count=3) - mocked_db_session.scalar.return_value = existing_record - - # Act - RecommendedAppService.add_trial_app_record("app-1", "account-1") - - # Assert - assert existing_record.count == 4 - mocked_db_session.scalar.assert_called_once() - mocked_db_session.commit.assert_called_once() - mocked_db_session.add.assert_not_called() - - -def test_add_trial_app_record_should_create_new_record_when_no_existing_record( - mocked_db_session: MagicMock, -) -> None: - # Arrange - mocked_db_session.scalar.return_value = None - - # Act - RecommendedAppService.add_trial_app_record("app-2", "account-2") - - # Assert - mocked_db_session.scalar.assert_called_once() - mocked_db_session.add.assert_called_once() - added = mocked_db_session.add.call_args.args[0] - assert added.app_id == "app-2" - assert added.account_id == "account-2" - assert added.count == 1 - mocked_db_session.commit.assert_called_once() diff --git a/api/tests/unit_tests/services/test_schedule_service.py b/api/tests/unit_tests/services/test_schedule_service.py index 334062242b..0f8f7ffab5 100644 --- a/api/tests/unit_tests/services/test_schedule_service.py +++ b/api/tests/unit_tests/services/test_schedule_service.py @@ -2,23 +2,16 @@ import unittest from datetime import UTC, datetime from types import SimpleNamespace from typing import Any, cast -from unittest.mock import MagicMock, Mock, patch +from unittest.mock import MagicMock, Mock import pytest from sqlalchemy.orm import Session from core.trigger.constants import TRIGGER_SCHEDULE_NODE_TYPE -from core.workflow.nodes.trigger_schedule.entities import ScheduleConfig, SchedulePlanUpdate, VisualConfig -from core.workflow.nodes.trigger_schedule.exc import ScheduleConfigError, ScheduleNotFoundError -from events.event_handlers.sync_workflow_schedule_when_app_published import ( - sync_schedule_from_workflow, -) +from core.workflow.nodes.trigger_schedule.entities import VisualConfig +from core.workflow.nodes.trigger_schedule.exc import ScheduleConfigError from libs.schedule_utils import calculate_next_run_at, convert_12h_to_24h -from models.account import Account, TenantAccountJoin -from models.trigger import WorkflowSchedulePlan from models.workflow import Workflow -from services.errors.account import AccountNotFoundError -from services.trigger import schedule_service as service_module from services.trigger.schedule_service import ScheduleService @@ -83,180 +76,6 @@ class TestScheduleService(unittest.TestCase): with pytest.raises(UnknownTimeZoneError): calculate_next_run_at(cron_expr, timezone) - @patch("libs.schedule_utils.calculate_next_run_at") - def test_create_schedule(self, mock_calculate_next_run): - """Test creating a new schedule.""" - mock_session = MagicMock(spec=Session) - mock_calculate_next_run.return_value = datetime(2025, 8, 30, 10, 30, 0, tzinfo=UTC) - - config = ScheduleConfig( - node_id="start", - cron_expression="30 10 * * *", - timezone="UTC", - ) - - schedule = ScheduleService.create_schedule( - session=mock_session, - tenant_id="test-tenant", - app_id="test-app", - config=config, - ) - - assert schedule is not None - assert schedule.tenant_id == "test-tenant" - assert schedule.app_id == "test-app" - assert schedule.node_id == "start" - assert schedule.cron_expression == "30 10 * * *" - assert schedule.timezone == "UTC" - assert schedule.next_run_at is not None - mock_session.add.assert_called_once() - mock_session.flush.assert_called_once() - - @patch("services.trigger.schedule_service.calculate_next_run_at") - def test_update_schedule(self, mock_calculate_next_run): - """Test updating an existing schedule.""" - mock_session = MagicMock(spec=Session) - mock_schedule = Mock(spec=WorkflowSchedulePlan) - mock_schedule.cron_expression = "0 12 * * *" - mock_schedule.timezone = "America/New_York" - mock_session.get.return_value = mock_schedule - mock_calculate_next_run.return_value = datetime(2025, 8, 30, 12, 0, 0, tzinfo=UTC) - - updates = SchedulePlanUpdate( - cron_expression="0 12 * * *", - timezone="America/New_York", - ) - - result = ScheduleService.update_schedule( - session=mock_session, - schedule_id="test-schedule-id", - updates=updates, - ) - - assert result is not None - assert result.cron_expression == "0 12 * * *" - assert result.timezone == "America/New_York" - mock_calculate_next_run.assert_called_once() - mock_session.flush.assert_called_once() - - def test_update_schedule_not_found(self): - """Test updating a non-existent schedule raises exception.""" - from core.workflow.nodes.trigger_schedule.exc import ScheduleNotFoundError - - mock_session = MagicMock(spec=Session) - mock_session.get.return_value = None - - updates = SchedulePlanUpdate( - cron_expression="0 12 * * *", - ) - - with pytest.raises(ScheduleNotFoundError) as context: - ScheduleService.update_schedule( - session=mock_session, - schedule_id="non-existent-id", - updates=updates, - ) - - assert "Schedule not found: non-existent-id" in str(context.value) - mock_session.flush.assert_not_called() - - def test_delete_schedule(self): - """Test deleting a schedule.""" - mock_session = MagicMock(spec=Session) - mock_schedule = Mock(spec=WorkflowSchedulePlan) - mock_session.get.return_value = mock_schedule - - # Should not raise exception and complete successfully - ScheduleService.delete_schedule( - session=mock_session, - schedule_id="test-schedule-id", - ) - - mock_session.delete.assert_called_once_with(mock_schedule) - mock_session.flush.assert_called_once() - - def test_delete_schedule_not_found(self): - """Test deleting a non-existent schedule raises exception.""" - from core.workflow.nodes.trigger_schedule.exc import ScheduleNotFoundError - - mock_session = MagicMock(spec=Session) - mock_session.get.return_value = None - - # Should raise ScheduleNotFoundError - with pytest.raises(ScheduleNotFoundError) as context: - ScheduleService.delete_schedule( - session=mock_session, - schedule_id="non-existent-id", - ) - - assert "Schedule not found: non-existent-id" in str(context.value) - mock_session.delete.assert_not_called() - - @patch("services.trigger.schedule_service.select") - def test_get_tenant_owner(self, mock_select): - """Test getting tenant owner account.""" - mock_session = MagicMock(spec=Session) - mock_account = Mock(spec=Account) - mock_account.id = "owner-account-id" - - # Mock owner query - mock_owner_result = Mock(spec=TenantAccountJoin) - mock_owner_result.account_id = "owner-account-id" - - mock_session.execute.return_value.scalar_one_or_none.return_value = mock_owner_result - mock_session.get.return_value = mock_account - - result = ScheduleService.get_tenant_owner( - session=mock_session, - tenant_id="test-tenant", - ) - - assert result is not None - assert result.id == "owner-account-id" - - @patch("services.trigger.schedule_service.select") - def test_get_tenant_owner_fallback_to_admin(self, mock_select): - """Test getting tenant owner falls back to admin if no owner.""" - mock_session = MagicMock(spec=Session) - mock_account = Mock(spec=Account) - mock_account.id = "admin-account-id" - - # Mock admin query (owner returns None) - mock_admin_result = Mock(spec=TenantAccountJoin) - mock_admin_result.account_id = "admin-account-id" - - mock_session.execute.return_value.scalar_one_or_none.side_effect = [None, mock_admin_result] - mock_session.get.return_value = mock_account - - result = ScheduleService.get_tenant_owner( - session=mock_session, - tenant_id="test-tenant", - ) - - assert result is not None - assert result.id == "admin-account-id" - - @patch("services.trigger.schedule_service.calculate_next_run_at") - def test_update_next_run_at(self, mock_calculate_next_run): - """Test updating next run time after schedule triggered.""" - mock_session = MagicMock(spec=Session) - mock_schedule = Mock(spec=WorkflowSchedulePlan) - mock_schedule.cron_expression = "30 10 * * *" - mock_schedule.timezone = "UTC" - mock_session.get.return_value = mock_schedule - - next_time = datetime(2025, 8, 31, 10, 30, 0, tzinfo=UTC) - mock_calculate_next_run.return_value = next_time - - result = ScheduleService.update_next_run_at( - session=mock_session, - schedule_id="test-schedule-id", - ) - - assert result == next_time - assert mock_schedule.next_run_at == next_time - mock_session.flush.assert_called_once() - class TestVisualToCron(unittest.TestCase): """Test cases for visual configuration to cron conversion.""" @@ -678,108 +497,6 @@ class TestScheduleWithTimezone(unittest.TestCase): assert summer_next.hour == 14 -class TestSyncScheduleFromWorkflow(unittest.TestCase): - """Test cases for syncing schedule from workflow.""" - - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.db") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.ScheduleService") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.select") - def test_sync_schedule_create_new(self, mock_select, mock_service, mock_db): - """Test creating new schedule when none exists.""" - mock_session = MagicMock() - mock_db.engine = MagicMock() - mock_session.__enter__ = MagicMock(return_value=mock_session) - mock_session.__exit__ = MagicMock(return_value=None) - sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session))) - with patch("events.event_handlers.sync_workflow_schedule_when_app_published.sessionmaker", sessionmaker): - mock_session.scalar.return_value = None # No existing plan - - # Mock extract_schedule_config to return a ScheduleConfig object - mock_config = Mock(spec=ScheduleConfig) - mock_config.node_id = "start" - mock_config.cron_expression = "30 10 * * *" - mock_config.timezone = "UTC" - mock_service.extract_schedule_config.return_value = mock_config - - mock_new_plan = Mock(spec=WorkflowSchedulePlan) - mock_service.create_schedule.return_value = mock_new_plan - - workflow = Mock(spec=Workflow) - result = sync_schedule_from_workflow("tenant-id", "app-id", workflow) - - assert result == mock_new_plan - mock_service.create_schedule.assert_called_once() - mock_session.commit.assert_not_called() - - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.db") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.ScheduleService") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.select") - def test_sync_schedule_update_existing(self, mock_select, mock_service, mock_db): - """Test updating existing schedule.""" - mock_session = MagicMock() - mock_db.engine = MagicMock() - mock_session.__enter__ = MagicMock(return_value=mock_session) - mock_session.__exit__ = MagicMock(return_value=None) - sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session))) - - with patch("events.event_handlers.sync_workflow_schedule_when_app_published.sessionmaker", sessionmaker): - mock_existing_plan = Mock(spec=WorkflowSchedulePlan) - mock_existing_plan.id = "existing-plan-id" - mock_session.scalar.return_value = mock_existing_plan - - # Mock extract_schedule_config to return a ScheduleConfig object - mock_config = Mock(spec=ScheduleConfig) - mock_config.node_id = "start" - mock_config.cron_expression = "0 12 * * *" - mock_config.timezone = "America/New_York" - mock_service.extract_schedule_config.return_value = mock_config - - mock_updated_plan = Mock(spec=WorkflowSchedulePlan) - mock_service.update_schedule.return_value = mock_updated_plan - - workflow = Mock(spec=Workflow) - result = sync_schedule_from_workflow("tenant-id", "app-id", workflow) - - assert result == mock_updated_plan - mock_service.update_schedule.assert_called_once() - # Verify the arguments passed to update_schedule - call_args = mock_service.update_schedule.call_args - assert call_args.kwargs["session"] == mock_session - assert call_args.kwargs["schedule_id"] == "existing-plan-id" - updates_obj = call_args.kwargs["updates"] - assert isinstance(updates_obj, SchedulePlanUpdate) - assert updates_obj.node_id == "start" - assert updates_obj.cron_expression == "0 12 * * *" - assert updates_obj.timezone == "America/New_York" - mock_session.commit.assert_not_called() - - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.db") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.ScheduleService") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.select") - def test_sync_schedule_remove_when_no_config(self, mock_select, mock_service, mock_db): - """Test removing schedule when no schedule config in workflow.""" - mock_session = MagicMock() - mock_db.engine = MagicMock() - mock_session.__enter__ = MagicMock(return_value=mock_session) - mock_session.__exit__ = MagicMock(return_value=None) - sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session))) - - with patch("events.event_handlers.sync_workflow_schedule_when_app_published.sessionmaker", sessionmaker): - mock_existing_plan = Mock(spec=WorkflowSchedulePlan) - mock_existing_plan.id = "existing-plan-id" - mock_session.scalar.return_value = mock_existing_plan - - mock_service.extract_schedule_config.return_value = None # No schedule config - - workflow = Mock(spec=Workflow) - result = sync_schedule_from_workflow("tenant-id", "app-id", workflow) - - assert result is None - # Now using ScheduleService.delete_schedule instead of session.delete - mock_service.delete_schedule.assert_called_once_with(session=mock_session, schedule_id="existing-plan-id") - mock_session.commit.assert_not_called() - - @pytest.fixture def session_mock() -> MagicMock: return MagicMock(spec=Session) @@ -789,62 +506,6 @@ def _workflow(**kwargs: Any) -> Workflow: return cast(Workflow, SimpleNamespace(**kwargs)) -def test_update_schedule_should_update_only_node_id_without_recomputing_time( - session_mock: MagicMock, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - schedule = MagicMock(spec=WorkflowSchedulePlan) - schedule.cron_expression = "0 10 * * *" - schedule.timezone = "UTC" - session_mock.get.return_value = schedule - - next_run_mock = MagicMock(return_value=datetime(2026, 1, 1, 10, 0, tzinfo=UTC)) - monkeypatch.setattr(service_module, "calculate_next_run_at", next_run_mock) - - # Act - result = ScheduleService.update_schedule( - session=session_mock, - schedule_id="schedule-1", - updates=SchedulePlanUpdate(node_id="node-new"), - ) - - # Assert - assert result is schedule - assert schedule.node_id == "node-new" - next_run_mock.assert_not_called() - session_mock.flush.assert_called_once() - - -def test_get_tenant_owner_should_raise_when_account_record_missing(session_mock: MagicMock) -> None: - # Arrange - join = SimpleNamespace(account_id="account-404") - session_mock.execute.return_value.scalar_one_or_none.return_value = join - session_mock.get.return_value = None - - # Act / Assert - with pytest.raises(AccountNotFoundError, match="Account not found: account-404"): - ScheduleService.get_tenant_owner(session=session_mock, tenant_id="tenant-1") - - -def test_get_tenant_owner_should_raise_when_no_owner_or_admin_found(session_mock: MagicMock) -> None: - # Arrange - session_mock.execute.return_value.scalar_one_or_none.side_effect = [None, None] - - # Act / Assert - with pytest.raises(AccountNotFoundError, match="Account not found for tenant: tenant-1"): - ScheduleService.get_tenant_owner(session=session_mock, tenant_id="tenant-1") - - -def test_update_next_run_at_should_raise_when_schedule_not_found(session_mock: MagicMock) -> None: - # Arrange - session_mock.get.return_value = None - - # Act / Assert - with pytest.raises(ScheduleNotFoundError, match="Schedule not found: schedule-1"): - ScheduleService.update_next_run_at(session=session_mock, schedule_id="schedule-1") - - def test_to_schedule_config_should_build_from_cron_mode() -> None: # Arrange node_config: dict[str, Any] = { diff --git a/api/tests/unit_tests/services/test_summary_index_service.py b/api/tests/unit_tests/services/test_summary_index_service.py index cbf3e121d8..e17d4134ac 100644 --- a/api/tests/unit_tests/services/test_summary_index_service.py +++ b/api/tests/unit_tests/services/test_summary_index_service.py @@ -124,10 +124,7 @@ def test_create_summary_record_updates_existing_and_reenables(monkeypatch: pytes existing.disabled_by = "u" session = MagicMock(name="session") - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = existing - session.query.return_value = query + session.scalar.return_value = existing create_session_mock = MagicMock(return_value=_SessionContext(session)) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) @@ -149,10 +146,7 @@ def test_create_summary_record_updates_existing_and_reenables(monkeypatch: pytes def test_create_summary_record_creates_new(monkeypatch: pytest.MonkeyPatch) -> None: session = MagicMock(name="session") - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = None - session.query.return_value = query + session.scalar.return_value = None create_session_mock = MagicMock(return_value=_SessionContext(session)) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) @@ -234,10 +228,7 @@ def test_vectorize_summary_without_session_creates_record_when_missing(monkeypat # New session used after vectorization succeeds (record not found by id nor chunk_id). session = MagicMock(name="session") - q1 = MagicMock() - q1.filter_by.return_value = q1 - q1.first.side_effect = [None, None] - session.query.return_value = q1 + session.scalar.side_effect = [None, None] create_session_mock = MagicMock(return_value=_SessionContext(session)) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) @@ -267,10 +258,7 @@ def test_vectorize_summary_final_failure_updates_error_status(monkeypatch: pytes # error_session should find record and commit status update error_session = MagicMock(name="error_session") - q = MagicMock() - q.filter_by.return_value = q - q.first.return_value = summary - error_session.query.return_value = q + error_session.scalar.return_value = summary create_session_mock = MagicMock(return_value=_SessionContext(error_session)) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) @@ -302,10 +290,7 @@ def test_batch_create_summary_records_creates_and_updates(monkeypatch: pytest.Mo existing.enabled = False session = MagicMock() - query = MagicMock() - query.filter.return_value = query - query.all.return_value = [existing] - session.query.return_value = query + session.scalars.return_value.all.return_value = [existing] monkeypatch.setattr( summary_module, @@ -324,10 +309,7 @@ def test_update_summary_record_error_updates_when_exists(monkeypatch: pytest.Mon record = _summary_record() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record monkeypatch.setattr( summary_module, "session_factory", @@ -346,10 +328,7 @@ def test_generate_and_vectorize_summary_success(monkeypatch: pytest.MonkeyPatch) record = _summary_record(summary_content="") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record monkeypatch.setattr( summary_module, @@ -373,10 +352,7 @@ def test_generate_and_vectorize_summary_vectorize_failure_sets_error(monkeypatch record = _summary_record(summary_content="") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record monkeypatch.setattr( summary_module, @@ -415,10 +391,7 @@ def test_vectorize_summary_updates_existing_record_found_by_chunk_id(monkeypatch existing = _summary_record(summary_content="old", node_id="old-node") existing.id = "other-id" session = MagicMock(name="session") - q = MagicMock() - q.filter_by.return_value = q - q.first.side_effect = [None, existing] # miss by id, hit by chunk_id - session.query.return_value = q + session.scalar.side_effect = [None, existing] # miss by id, hit by chunk_id monkeypatch.setattr( summary_module, "session_factory", @@ -448,10 +421,7 @@ def test_vectorize_summary_updates_existing_record_found_by_id(monkeypatch: pyte existing = _summary_record(summary_content="old", node_id="old-node") session = MagicMock(name="session") - q = MagicMock() - q.filter_by.return_value = q - q.first.return_value = existing # hit by id - session.query.return_value = q + session.scalar.return_value = existing # hit by id monkeypatch.setattr( summary_module, "session_factory", @@ -487,10 +457,7 @@ def test_vectorize_summary_session_enter_returns_none_triggers_runtime_error(mon return None error_session = MagicMock() - q = MagicMock() - q.filter_by.return_value = q - q.first.return_value = summary - error_session.query.return_value = q + error_session.scalar.return_value = summary create_session_mock = MagicMock(side_effect=[_BadContext(), _SessionContext(error_session)]) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) @@ -516,21 +483,17 @@ def test_vectorize_summary_created_record_becomes_none_triggers_guard(monkeypatc ) session = MagicMock() - q = MagicMock() - q.filter_by.return_value = q - q.first.side_effect = [None, None] # miss by id and chunk_id - session.query.return_value = q + session.scalar.side_effect = [None, None] # miss by id and chunk_id error_session = MagicMock() - eq = MagicMock() - eq.filter_by.return_value = eq - eq.first.return_value = summary - error_session.query.return_value = eq + error_session.scalar.return_value = summary create_session_mock = MagicMock(side_effect=[_SessionContext(session), _SessionContext(error_session)]) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) # Force the created record to be None so the "should not be None" guard triggers. + # Also mock select() so SQLAlchemy doesn't validate the mocked DocumentSegmentSummary as a real column clause. + monkeypatch.setattr(summary_module, "select", MagicMock(return_value=MagicMock())) monkeypatch.setattr(summary_module, "DocumentSegmentSummary", MagicMock(return_value=None)) with pytest.raises(RuntimeError, match="summary_record_in_session should not be None"): @@ -554,10 +517,7 @@ def test_vectorize_summary_error_handler_tries_chunk_id_lookup_and_can_warn_not_ ) error_session = MagicMock(name="error_session") - q = MagicMock() - q.filter_by.return_value = q - q.first.side_effect = [None, None] # not found by id, not found by chunk_id - error_session.query.return_value = q + error_session.scalar.side_effect = [None, None] # not found by id, not found by chunk_id monkeypatch.setattr( summary_module, @@ -577,10 +537,7 @@ def test_update_summary_record_error_warns_when_missing(monkeypatch: pytest.Monk segment = _segment() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = None - session.query.return_value = query + session.scalar.return_value = None monkeypatch.setattr( summary_module, "session_factory", @@ -599,10 +556,7 @@ def test_generate_and_vectorize_summary_creates_missing_record_and_logs_usage(mo segment = _segment() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = None - session.query.return_value = query + session.scalar.return_value = None monkeypatch.setattr( summary_module, "session_factory", @@ -646,11 +600,7 @@ def test_generate_summaries_for_document_runs_and_handles_errors(monkeypatch: py seg2.id = "seg-2" session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [seg1, seg2] - session.query.return_value = query + session.scalars.return_value.all.return_value = [seg1, seg2] monkeypatch.setattr( summary_module, @@ -678,11 +628,7 @@ def test_generate_summaries_for_document_no_segments_returns_empty(monkeypatch: document.doc_form = IndexStructureType.PARAGRAPH_INDEX session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [] - session.query.return_value = query + session.scalars.return_value.all.return_value = [] monkeypatch.setattr( summary_module, "session_factory", @@ -702,11 +648,7 @@ def test_generate_summaries_for_document_applies_segment_ids_and_only_parent_chu seg = _segment() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [seg] - session.query.return_value = query + session.scalars.return_value.all.return_value = [seg] monkeypatch.setattr( summary_module, "session_factory", @@ -723,7 +665,7 @@ def test_generate_summaries_for_document_applies_segment_ids_and_only_parent_chu segment_ids=[seg.id], only_parent_chunks=True, ) - query.filter.assert_called() + session.scalars.assert_called() def test_disable_summaries_for_segments_handles_vector_delete_error(monkeypatch: pytest.MonkeyPatch) -> None: @@ -732,11 +674,7 @@ def test_disable_summaries_for_segments_handles_vector_delete_error(monkeypatch: summary2 = _summary_record(summary_content="s", node_id=None) session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [summary1, summary2] - session.query.return_value = query + session.scalars.return_value.all.return_value = [summary1, summary2] monkeypatch.setattr( summary_module, @@ -761,11 +699,7 @@ def test_disable_summaries_for_segments_handles_vector_delete_error(monkeypatch: def test_disable_summaries_for_segments_no_summaries_noop(monkeypatch: pytest.MonkeyPatch) -> None: dataset = _dataset() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [] - session.query.return_value = query + session.scalars.return_value.all.return_value = [] monkeypatch.setattr( summary_module, "session_factory", @@ -793,21 +727,8 @@ def test_enable_summaries_for_segments_revectorizes_and_enables(monkeypatch: pyt segment.status = SegmentStatus.COMPLETED session = MagicMock() - summary_query = MagicMock() - summary_query.filter_by.return_value = summary_query - summary_query.filter.return_value = summary_query - summary_query.all.return_value = [summary] - - seg_query = MagicMock() - seg_query.filter_by.return_value = seg_query - seg_query.first.return_value = segment - - def query_side_effect(model: object) -> MagicMock: - if model is summary_module.DocumentSegmentSummary: - return summary_query - return seg_query - - session.query.side_effect = query_side_effect + session.scalars.return_value.all.return_value = [summary] + session.scalar.return_value = segment monkeypatch.setattr( summary_module, @@ -826,11 +747,7 @@ def test_enable_summaries_for_segments_revectorizes_and_enables(monkeypatch: pyt def test_enable_summaries_for_segments_no_summaries_noop(monkeypatch: pytest.MonkeyPatch) -> None: dataset = _dataset() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [] - session.query.return_value = query + session.scalars.return_value.all.return_value = [] monkeypatch.setattr( summary_module, "session_factory", @@ -860,21 +777,9 @@ def test_enable_summaries_for_segments_skips_segment_or_content_and_handles_vect good_segment.status = SegmentStatus.COMPLETED session = MagicMock() - summary_query = MagicMock() - summary_query.filter_by.return_value = summary_query - summary_query.filter.return_value = summary_query - summary_query.all.return_value = [summary1, summary2, summary3] + session.scalars.return_value.all.return_value = [summary1, summary2, summary3] + session.scalar.side_effect = [bad_segment, good_segment, good_segment] - seg_query = MagicMock() - seg_query.filter_by.return_value = seg_query - seg_query.first.side_effect = [bad_segment, good_segment, good_segment] - - def query_side_effect(model: object) -> MagicMock: - if model is summary_module.DocumentSegmentSummary: - return summary_query - return seg_query - - session.query.side_effect = query_side_effect monkeypatch.setattr( summary_module, "session_factory", @@ -895,11 +800,7 @@ def test_delete_summaries_for_segments_deletes_vectors_and_records(monkeypatch: summary = _summary_record(summary_content="sum", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [summary] - session.query.return_value = query + session.scalars.return_value.all.return_value = [summary] vector_instance = MagicMock() monkeypatch.setattr(summary_module, "Vector", MagicMock(return_value=vector_instance)) @@ -918,11 +819,7 @@ def test_delete_summaries_for_segments_deletes_vectors_and_records(monkeypatch: def test_delete_summaries_for_segments_no_summaries_noop(monkeypatch: pytest.MonkeyPatch) -> None: dataset = _dataset() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.filter.return_value = query - query.all.return_value = [] - session.query.return_value = query + session.scalars.return_value.all.return_value = [] monkeypatch.setattr( summary_module, "session_factory", @@ -946,10 +843,7 @@ def test_update_summary_for_segment_empty_content_deletes_existing(monkeypatch: record = _summary_record(summary_content="old", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record vector_instance = MagicMock() monkeypatch.setattr(summary_module, "Vector", MagicMock(return_value=vector_instance)) @@ -971,10 +865,7 @@ def test_update_summary_for_segment_empty_content_delete_vector_warns(monkeypatc record = _summary_record(summary_content="old", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record monkeypatch.setattr( summary_module, "session_factory", @@ -996,10 +887,7 @@ def test_update_summary_for_segment_empty_content_no_record_noop(monkeypatch: py segment = _segment() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = None - session.query.return_value = query + session.scalar.return_value = None monkeypatch.setattr( summary_module, "session_factory", @@ -1015,10 +903,7 @@ def test_update_summary_for_segment_updates_existing_and_vectorizes(monkeypatch: record = _summary_record(summary_content="old", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record vector_instance = MagicMock() monkeypatch.setattr(summary_module, "Vector", MagicMock(return_value=vector_instance)) @@ -1044,10 +929,7 @@ def test_update_summary_for_segment_existing_vector_delete_warns(monkeypatch: py record = _summary_record(summary_content="old", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record monkeypatch.setattr( summary_module, "session_factory", @@ -1073,10 +955,7 @@ def test_update_summary_for_segment_existing_vectorize_failure_returns_error_rec record = _summary_record(summary_content="old", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record monkeypatch.setattr( summary_module, "session_factory", @@ -1095,10 +974,7 @@ def test_update_summary_for_segment_new_record_success(monkeypatch: pytest.Monke segment = _segment() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = None - session.query.return_value = query + session.scalar.return_value = None monkeypatch.setattr( summary_module, "session_factory", @@ -1122,10 +998,7 @@ def test_update_summary_for_segment_outer_exception_sets_error_and_reraises(monk record = _summary_record(summary_content="old", node_id="n1") session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = record - session.query.return_value = query + session.scalar.return_value = record session.flush.side_effect = RuntimeError("flush boom") monkeypatch.setattr( summary_module, @@ -1143,25 +1016,9 @@ def test_update_summary_for_segment_outer_exception_sets_error_and_reraises(monk def test_get_segment_summary_and_document_summaries(monkeypatch: pytest.MonkeyPatch) -> None: record = _summary_record(summary_content="sum", node_id="n1") session = MagicMock() + session.scalar.return_value = record + session.scalars.return_value.all.return_value = [record] - q1 = MagicMock() - q1.where.return_value = q1 - q1.first.return_value = record - - q2 = MagicMock() - q2.filter.return_value = q2 - q2.all.return_value = [record] - - def query_side_effect(model: object) -> MagicMock: - if model is summary_module.DocumentSegmentSummary: - # first call used by get_segment_summary, second by get_document_summaries - if not hasattr(query_side_effect, "_called"): - query_side_effect._called = True # type: ignore[attr-defined] - return q1 - return q2 - return MagicMock() - - session.query.side_effect = query_side_effect monkeypatch.setattr( summary_module, "session_factory", @@ -1178,10 +1035,7 @@ def test_get_segments_summaries_non_empty(monkeypatch: pytest.MonkeyPatch) -> No record2 = _summary_record() record2.chunk_id = "seg-2" session = MagicMock() - q = MagicMock() - q.where.return_value = q - q.all.return_value = [record1, record2] - session.query.return_value = q + session.scalars.return_value.all.return_value = [record1, record2] monkeypatch.setattr( summary_module, "session_factory", @@ -1194,10 +1048,7 @@ def test_get_segments_summaries_non_empty(monkeypatch: pytest.MonkeyPatch) -> No def test_get_document_summary_index_status_no_segments_returns_none(monkeypatch: pytest.MonkeyPatch) -> None: session = MagicMock() - q = MagicMock() - q.where.return_value = q - q.all.return_value = [] - session.query.return_value = q + session.scalars.return_value.all.return_value = [] monkeypatch.setattr( summary_module, "session_factory", @@ -1212,10 +1063,7 @@ def test_get_documents_summary_index_status_empty_input(monkeypatch: pytest.Monk def test_get_documents_summary_index_status_no_pending_sets_none(monkeypatch: pytest.MonkeyPatch) -> None: session = MagicMock() - q = MagicMock() - q.where.return_value = q - q.all.return_value = [SimpleNamespace(id="seg-1", document_id="doc-1")] - session.query.return_value = q + session.execute.return_value.all.return_value = [SimpleNamespace(id="seg-1", document_id="doc-1")] monkeypatch.setattr( summary_module, "session_factory", @@ -1237,10 +1085,7 @@ def test_update_summary_for_segment_creates_new_and_vectorize_fails_returns_erro segment = _segment() session = MagicMock() - query = MagicMock() - query.filter_by.return_value = query - query.first.return_value = None - session.query.return_value = query + session.scalar.return_value = None monkeypatch.setattr( summary_module, @@ -1267,10 +1112,7 @@ def test_get_segments_summaries_empty_list() -> None: def test_get_document_summary_index_status_and_documents_status(monkeypatch: pytest.MonkeyPatch) -> None: seg_row = SimpleNamespace(id="seg-1", document_id="doc-1") session = MagicMock() - query = MagicMock() - query.where.return_value = query - query.all.return_value = [SimpleNamespace(id="seg-1")] - session.query.return_value = query + session.scalars.return_value.all.return_value = ["seg-1"] # get_document_summary_index_status returns IDs create_session_mock = MagicMock(return_value=_SessionContext(session)) monkeypatch.setattr(summary_module, "session_factory", SimpleNamespace(create_session=create_session_mock)) @@ -1283,11 +1125,8 @@ def test_get_document_summary_index_status_and_documents_status(monkeypatch: pyt assert SummaryIndexService.get_document_summary_index_status("doc-1", "dataset-1", "tenant-1") == "SUMMARIZING" # Multiple docs - query2 = MagicMock() - query2.where.return_value = query2 - query2.all.return_value = [seg_row] session2 = MagicMock() - session2.query.return_value = query2 + session2.execute.return_value.all.return_value = [seg_row] # get_documents_summary_index_status uses execute monkeypatch.setattr( summary_module, "session_factory", diff --git a/api/tests/unit_tests/services/test_trigger_provider_service.py b/api/tests/unit_tests/services/test_trigger_provider_service.py index 81a3b181fd..ebf1b36610 100644 --- a/api/tests/unit_tests/services/test_trigger_provider_service.py +++ b/api/tests/unit_tests/services/test_trigger_provider_service.py @@ -3,6 +3,7 @@ from __future__ import annotations import contextlib import json from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock import pytest @@ -28,9 +29,9 @@ def _mock_get_trigger_provider(mocker: MockerFixture, provider: object | None) - def _encrypter_mock( *, - decrypted: dict | None = None, - encrypted: dict | None = None, - masked: dict | None = None, + decrypted: dict[str, Any] | None = None, + encrypted: dict[str, Any] | None = None, + masked: dict[str, Any] | None = None, ) -> MagicMock: enc = MagicMock() enc.decrypt.return_value = decrypted or {} @@ -63,6 +64,12 @@ def mock_session(mocker: MockerFixture) -> MagicMock: mock_session_cm.__enter__.return_value = mock_session_instance mock_session_cm.__exit__.return_value = False mocker.patch("services.trigger.trigger_provider_service.Session", return_value=mock_session_cm) + mock_begin_cm = MagicMock() + mock_begin_cm.__enter__.return_value = mock_session_instance + mock_begin_cm.__exit__.return_value = False + mock_sessionmaker_instance = MagicMock() + mock_sessionmaker_instance.begin.return_value = mock_begin_cm + mocker.patch("services.trigger.trigger_provider_service.sessionmaker", return_value=mock_sessionmaker_instance) return mock_session_instance @@ -118,9 +125,7 @@ def test_list_trigger_provider_subscriptions_should_return_empty_list_when_no_su provider_id: TriggerProviderID, ) -> None: # Arrange - query = MagicMock() - query.filter_by.return_value.order_by.return_value.all.return_value = [] - mock_session.query.return_value = query + mock_session.scalars.return_value.all.return_value = [] # Act result = TriggerProviderService.list_trigger_provider_subscriptions("tenant-1", provider_id) @@ -146,11 +151,8 @@ def test_list_trigger_provider_subscriptions_should_mask_fields_and_attach_workf db_sub = SimpleNamespace(to_api_entity=lambda: api_sub) usage_row = SimpleNamespace(subscription_id="sub-1", app_count=2) - query_subs = MagicMock() - query_subs.filter_by.return_value.order_by.return_value.all.return_value = [db_sub] - query_usage = MagicMock() - query_usage.filter.return_value.group_by.return_value.all.return_value = [usage_row] - mock_session.query.side_effect = [query_subs, query_usage] + mock_session.scalars.return_value.all.return_value = [db_sub] + mock_session.execute.return_value.all.return_value = [usage_row] _mock_get_trigger_provider(mocker, provider_controller) cred_enc = _encrypter_mock(decrypted={"token": "plain"}, masked={"token": "****"}) @@ -182,11 +184,7 @@ def test_add_trigger_subscription_should_create_subscription_successfully_for_ap ) -> None: # Arrange _patch_redis_lock(mocker) - query_count = MagicMock() - query_count.filter_by.return_value.count.return_value = 0 - query_existing = MagicMock() - query_existing.filter_by.return_value.first.return_value = None - mock_session.query.side_effect = [query_count, query_existing] + mock_session.scalar.side_effect = [0, None] # count=0, no existing name _mock_get_trigger_provider(mocker, provider_controller) cred_enc = _encrypter_mock(encrypted={"api_key": "enc"}) @@ -212,7 +210,6 @@ def test_add_trigger_subscription_should_create_subscription_successfully_for_ap # Assert assert result["result"] == "success" mock_session.add.assert_called_once() - mock_session.commit.assert_called_once() def test_add_trigger_subscription_should_store_empty_credentials_for_unauthorized_type( @@ -223,11 +220,7 @@ def test_add_trigger_subscription_should_store_empty_credentials_for_unauthorize ) -> None: # Arrange _patch_redis_lock(mocker) - query_count = MagicMock() - query_count.filter_by.return_value.count.return_value = 0 - query_existing = MagicMock() - query_existing.filter_by.return_value.first.return_value = None - mock_session.query.side_effect = [query_count, query_existing] + mock_session.scalar.side_effect = [0, None] # count=0, no existing name _mock_get_trigger_provider(mocker, provider_controller) prop_enc = _encrypter_mock(encrypted={"p": "enc"}) @@ -262,9 +255,7 @@ def test_add_trigger_subscription_should_raise_error_when_provider_limit_reached ) -> None: # Arrange _patch_redis_lock(mocker) - query_count = MagicMock() - query_count.filter_by.return_value.count.return_value = TriggerProviderService.__MAX_TRIGGER_PROVIDER_COUNT__ - mock_session.query.return_value = query_count + mock_session.scalar.return_value = TriggerProviderService.__MAX_TRIGGER_PROVIDER_COUNT__ _mock_get_trigger_provider(mocker, provider_controller) mock_logger = mocker.patch("services.trigger.trigger_provider_service.logger") @@ -292,11 +283,7 @@ def test_add_trigger_subscription_should_raise_error_when_name_exists( ) -> None: # Arrange _patch_redis_lock(mocker) - query_count = MagicMock() - query_count.filter_by.return_value.count.return_value = 0 - query_existing = MagicMock() - query_existing.filter_by.return_value.first.return_value = object() - mock_session.query.side_effect = [query_count, query_existing] + mock_session.scalar.side_effect = [0, object()] # count=0, existing name conflict _mock_get_trigger_provider(mocker, provider_controller) # Act + Assert @@ -320,9 +307,7 @@ def test_update_trigger_subscription_should_raise_error_when_subscription_not_fo ) -> None: # Arrange _patch_redis_lock(mocker) - query_sub = MagicMock() - query_sub.filter_by.return_value.first.return_value = None - mock_session.query.return_value = query_sub + mock_session.scalar.return_value = None # Act + Assert with pytest.raises(ValueError, match="not found"): @@ -342,11 +327,7 @@ def test_update_trigger_subscription_should_raise_error_when_name_conflicts( provider_id="langgenius/github/github", credential_type=CredentialType.API_KEY.value, ) - query_sub = MagicMock() - query_sub.filter_by.return_value.first.return_value = subscription - query_existing = MagicMock() - query_existing.filter_by.return_value.first.return_value = object() - mock_session.query.side_effect = [query_sub, query_existing] + mock_session.scalar.side_effect = [subscription, object()] # found sub, name conflict _mock_get_trigger_provider(mocker, provider_controller) # Act + Assert @@ -373,11 +354,7 @@ def test_update_trigger_subscription_should_update_fields_and_clear_cache( credential_expires_at=0, expires_at=0, ) - query_sub = MagicMock() - query_sub.filter_by.return_value.first.return_value = subscription - query_existing = MagicMock() - query_existing.filter_by.return_value.first.return_value = None - mock_session.query.side_effect = [query_sub, query_existing] + mock_session.scalar.side_effect = [subscription, None] # found sub, no name conflict _mock_get_trigger_provider(mocker, provider_controller) prop_enc = _encrypter_mock(decrypted={"project": "old-value"}, encrypted={"project": "new-value"}) @@ -406,13 +383,13 @@ def test_update_trigger_subscription_should_update_fields_and_clear_cache( assert subscription.credentials == {"api_key": "new-key"} assert subscription.credential_expires_at == 100 assert subscription.expires_at == 200 - mock_session.commit.assert_called_once() + mock_delete_cache.assert_called_once() def test_get_subscription_by_id_should_return_none_when_missing(mocker: MockerFixture, mock_session: MagicMock) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = None + mock_session.scalar.return_value = None # Act result = TriggerProviderService.get_subscription_by_id("tenant-1", "sub-1") @@ -434,7 +411,7 @@ def test_get_subscription_by_id_should_decrypt_credentials_and_properties( credentials={"token": "enc"}, properties={"project": "enc"}, ) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription _mock_get_trigger_provider(mocker, provider_controller) cred_enc = _encrypter_mock(decrypted={"token": "plain"}) prop_enc = _encrypter_mock(decrypted={"project": "plain"}) @@ -461,7 +438,7 @@ def test_delete_trigger_provider_should_raise_error_when_subscription_missing( mock_session: MagicMock, ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = None + mock_session.scalar.return_value = None # Act + Assert with pytest.raises(ValueError, match="not found"): @@ -483,7 +460,7 @@ def test_delete_trigger_provider_should_delete_and_clear_cache_even_if_unsubscri credentials={"token": "enc"}, to_entity=lambda: SimpleNamespace(id="sub-1"), ) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription _mock_get_trigger_provider(mocker, provider_controller) cred_enc = _encrypter_mock(decrypted={"token": "plain"}) mocker.patch( @@ -519,7 +496,7 @@ def test_delete_trigger_provider_should_skip_unsubscribe_for_unauthorized( credentials={}, to_entity=lambda: SimpleNamespace(id="sub-2"), ) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription _mock_get_trigger_provider(mocker, provider_controller) mock_unsubscribe = mocker.patch("services.trigger.trigger_provider_service.TriggerManager.unsubscribe_trigger") mocker.patch( @@ -539,7 +516,7 @@ def test_refresh_oauth_token_should_raise_error_when_subscription_missing( mocker: MockerFixture, mock_session: MagicMock ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = None + mock_session.scalar.return_value = None # Act + Assert with pytest.raises(ValueError, match="not found"): @@ -551,7 +528,7 @@ def test_refresh_oauth_token_should_raise_error_for_non_oauth_credentials( ) -> None: # Arrange subscription = SimpleNamespace(credential_type=CredentialType.API_KEY.value) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription # Act + Assert with pytest.raises(ValueError, match="Only OAuth credentials can be refreshed"): @@ -572,7 +549,7 @@ def test_refresh_oauth_token_should_refresh_and_persist_new_credentials( credentials={"access_token": "enc"}, credential_expires_at=0, ) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription _mock_get_trigger_provider(mocker, provider_controller) cache = MagicMock() cred_enc = _encrypter_mock(decrypted={"access_token": "old"}, encrypted={"access_token": "new"}) @@ -593,7 +570,7 @@ def test_refresh_oauth_token_should_refresh_and_persist_new_credentials( assert result == {"result": "success", "expires_at": 12345} assert subscription.credentials == {"access_token": "new"} assert subscription.credential_expires_at == 12345 - mock_session.commit.assert_called_once() + cache.delete.assert_called_once() @@ -601,7 +578,7 @@ def test_refresh_subscription_should_raise_error_when_subscription_missing( mocker: MockerFixture, mock_session: MagicMock ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = None + mock_session.scalar.return_value = None # Act + Assert with pytest.raises(ValueError, match="not found"): @@ -611,7 +588,7 @@ def test_refresh_subscription_should_raise_error_when_subscription_missing( def test_refresh_subscription_should_skip_when_not_due(mocker: MockerFixture, mock_session: MagicMock) -> None: # Arrange subscription = SimpleNamespace(expires_at=200) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription # Act result = TriggerProviderService.refresh_subscription("tenant-1", "sub-1", now=100) @@ -638,7 +615,7 @@ def test_refresh_subscription_should_refresh_and_persist_properties( credentials={"c": "enc"}, credential_type=CredentialType.API_KEY.value, ) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription _mock_get_trigger_provider(mocker, provider_controller) cred_enc = _encrypter_mock(decrypted={"c": "plain"}) prop_cache = MagicMock() @@ -664,7 +641,7 @@ def test_refresh_subscription_should_refresh_and_persist_properties( assert result == {"result": "success", "expires_at": 999} assert subscription.properties == {"p": "new-enc"} assert subscription.expires_at == 999 - mock_session.commit.assert_called_once() + prop_cache.delete.assert_called_once() @@ -676,10 +653,7 @@ def test_get_oauth_client_should_return_tenant_client_when_available( ) -> None: # Arrange tenant_client = SimpleNamespace(oauth_params={"client_id": "enc"}) - system_client = None - query_tenant = MagicMock() - query_tenant.filter_by.return_value.first.return_value = tenant_client - mock_session.query.return_value = query_tenant + mock_session.scalar.return_value = tenant_client _mock_get_trigger_provider(mocker, provider_controller) enc = _encrypter_mock(decrypted={"client_id": "plain"}) mocker.patch("services.trigger.trigger_provider_service.create_provider_encrypter", return_value=(enc, MagicMock())) @@ -698,11 +672,7 @@ def test_get_oauth_client_should_return_none_when_plugin_not_verified( provider_controller: MagicMock, ) -> None: # Arrange - query_tenant = MagicMock() - query_tenant.filter_by.return_value.first.return_value = None - query_system = MagicMock() - query_system.filter_by.return_value.first.return_value = None - mock_session.query.side_effect = [query_tenant, query_system] + mock_session.scalar.return_value = None # no tenant client; plugin not verified → early return _mock_get_trigger_provider(mocker, provider_controller) mocker.patch("services.trigger.trigger_provider_service.PluginService.is_plugin_verified", return_value=False) @@ -720,11 +690,7 @@ def test_get_oauth_client_should_return_decrypted_system_client_when_verified( provider_controller: MagicMock, ) -> None: # Arrange - query_tenant = MagicMock() - query_tenant.filter_by.return_value.first.return_value = None - query_system = MagicMock() - query_system.filter_by.return_value.first.return_value = SimpleNamespace(encrypted_oauth_params="enc") - mock_session.query.side_effect = [query_tenant, query_system] + mock_session.scalar.side_effect = [None, SimpleNamespace(encrypted_oauth_params="enc")] _mock_get_trigger_provider(mocker, provider_controller) mocker.patch("services.trigger.trigger_provider_service.PluginService.is_plugin_verified", return_value=True) mocker.patch( @@ -746,11 +712,7 @@ def test_get_oauth_client_should_raise_error_when_system_decryption_fails( provider_controller: MagicMock, ) -> None: # Arrange - query_tenant = MagicMock() - query_tenant.filter_by.return_value.first.return_value = None - query_system = MagicMock() - query_system.filter_by.return_value.first.return_value = SimpleNamespace(encrypted_oauth_params="enc") - mock_session.query.side_effect = [query_tenant, query_system] + mock_session.scalar.side_effect = [None, SimpleNamespace(encrypted_oauth_params="enc")] _mock_get_trigger_provider(mocker, provider_controller) mocker.patch("services.trigger.trigger_provider_service.PluginService.is_plugin_verified", return_value=True) mocker.patch( @@ -789,7 +751,7 @@ def test_is_oauth_system_client_exists_should_reflect_database_record( provider_controller: MagicMock, ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = object() if has_client else None + mock_session.scalar.return_value = object() if has_client else None _mock_get_trigger_provider(mocker, provider_controller) mocker.patch("services.trigger.trigger_provider_service.PluginService.is_plugin_verified", return_value=True) @@ -818,11 +780,11 @@ def test_save_custom_oauth_client_params_should_create_record_and_clear_params_w provider_controller: MagicMock, ) -> None: # Arrange - query = MagicMock() - query.filter_by.return_value.first.return_value = None - mock_session.query.return_value = query + mock_session.scalar.return_value = None _mock_get_trigger_provider(mocker, provider_controller) fake_model = SimpleNamespace(encrypted_oauth_params="", enabled=False, oauth_params={}) + # Also mock select() so SQLAlchemy doesn't validate the patched TriggerOAuthTenantClient. + mocker.patch("services.trigger.trigger_provider_service.select", MagicMock(return_value=MagicMock())) mocker.patch("services.trigger.trigger_provider_service.TriggerOAuthTenantClient", return_value=fake_model) # Act @@ -838,7 +800,6 @@ def test_save_custom_oauth_client_params_should_create_record_and_clear_params_w assert fake_model.encrypted_oauth_params == "{}" assert fake_model.enabled is True mock_session.add.assert_called_once_with(fake_model) - mock_session.commit.assert_called_once() def test_save_custom_oauth_client_params_should_merge_hidden_values_and_delete_cache( @@ -849,7 +810,7 @@ def test_save_custom_oauth_client_params_should_merge_hidden_values_and_delete_c ) -> None: # Arrange custom_client = SimpleNamespace(oauth_params={"client_id": "enc-old"}, enabled=False) - mock_session.query.return_value.filter_by.return_value.first.return_value = custom_client + mock_session.scalar.return_value = custom_client _mock_get_trigger_provider(mocker, provider_controller) cache = MagicMock() enc = _encrypter_mock(decrypted={"client_id": "old-id"}, encrypted={"client_id": "new-id"}) @@ -870,7 +831,6 @@ def test_save_custom_oauth_client_params_should_merge_hidden_values_and_delete_c assert result == {"result": "success"} assert json.loads(custom_client.encrypted_oauth_params) == {"client_id": "new-id"} cache.delete.assert_called_once() - mock_session.commit.assert_called_once() def test_get_custom_oauth_client_params_should_return_empty_when_record_missing( @@ -879,7 +839,7 @@ def test_get_custom_oauth_client_params_should_return_empty_when_record_missing( provider_id: TriggerProviderID, ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = None + mock_session.scalar.return_value = None # Act result = TriggerProviderService.get_custom_oauth_client_params("tenant-1", provider_id) @@ -896,7 +856,7 @@ def test_get_custom_oauth_client_params_should_return_masked_decrypted_values( ) -> None: # Arrange custom_client = SimpleNamespace(oauth_params={"client_id": "enc"}) - mock_session.query.return_value.filter_by.return_value.first.return_value = custom_client + mock_session.scalar.return_value = custom_client _mock_get_trigger_provider(mocker, provider_controller) enc = _encrypter_mock(decrypted={"client_id": "plain"}, masked={"client_id": "pl***id"}) mocker.patch("services.trigger.trigger_provider_service.create_provider_encrypter", return_value=(enc, MagicMock())) @@ -913,15 +873,11 @@ def test_delete_custom_oauth_client_params_should_delete_record_and_commit( mock_session: MagicMock, provider_id: TriggerProviderID, ) -> None: - # Arrange - mock_session.query.return_value.filter_by.return_value.delete.return_value = 1 - # Act result = TriggerProviderService.delete_custom_oauth_client_params("tenant-1", provider_id) # Assert assert result == {"result": "success"} - mock_session.commit.assert_called_once() @pytest.mark.parametrize("exists", [True, False]) @@ -932,7 +888,7 @@ def test_is_oauth_custom_client_enabled_should_return_expected_boolean( provider_id: TriggerProviderID, ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = object() if exists else None + mock_session.scalar.return_value = object() if exists else None # Act result = TriggerProviderService.is_oauth_custom_client_enabled("tenant-1", provider_id) @@ -945,7 +901,7 @@ def test_get_subscription_by_endpoint_should_return_none_when_not_found( mocker: MockerFixture, mock_session: MagicMock ) -> None: # Arrange - mock_session.query.return_value.filter_by.return_value.first.return_value = None + mock_session.scalar.return_value = None # Act result = TriggerProviderService.get_subscription_by_endpoint("endpoint-1") @@ -966,7 +922,7 @@ def test_get_subscription_by_endpoint_should_decrypt_credentials_and_properties( credentials={"token": "enc"}, properties={"hook": "enc"}, ) - mock_session.query.return_value.filter_by.return_value.first.return_value = subscription + mock_session.scalar.return_value = subscription _mock_get_trigger_provider(mocker, provider_controller) mocker.patch( "services.trigger.trigger_provider_service.create_trigger_provider_encrypter_for_subscription", diff --git a/api/tests/unit_tests/services/test_variable_truncator.py b/api/tests/unit_tests/services/test_variable_truncator.py index 27602bb1cc..4b864dd221 100644 --- a/api/tests/unit_tests/services/test_variable_truncator.py +++ b/api/tests/unit_tests/services/test_variable_truncator.py @@ -12,11 +12,11 @@ This test suite covers all functionality of the current VariableTruncator includ import functools import json import uuid -from collections.abc import Mapping from typing import Any from uuid import uuid4 import pytest + from graphon.file import File, FileTransferMethod, FileType from graphon.variables.segments import ( ArrayFileSegment, @@ -29,7 +29,6 @@ from graphon.variables.segments import ( ObjectSegment, StringSegment, ) - from services.variable_truncator import ( DummyVariableTruncator, MaxDepthExceededError, @@ -674,229 +673,3 @@ def test_dummy_variable_truncator_methods(): assert isinstance(result, TruncationResult) assert result.result == segment assert result.truncated is False - - -# === Merged from test_variable_truncator_additional.py === - - -from typing import Any - -import pytest -from graphon.nodes.variable_assigner.common.helpers import UpdatedVariable -from graphon.variables.segments import IntegerSegment, ObjectSegment, StringSegment -from graphon.variables.types import SegmentType - -from services import variable_truncator as truncator_module -from services.variable_truncator import BaseTruncator, TruncationResult, VariableTruncator - - -class _AbstractPassthrough(BaseTruncator): - def truncate(self, segment: Any) -> TruncationResult: - # Arrange / Act - return super().truncate(segment) # type: ignore[misc] - - def truncate_variable_mapping(self, v: Mapping[str, Any]) -> tuple[Mapping[str, Any], bool]: - # Arrange / Act - return super().truncate_variable_mapping(v) # type: ignore[misc] - - -def test_base_truncator_methods_should_execute_abstract_placeholders() -> None: - # Arrange - passthrough = _AbstractPassthrough() - - # Act - truncate_result = passthrough.truncate(StringSegment(value="x")) - mapping_result = passthrough.truncate_variable_mapping({"a": 1}) - - # Assert - assert truncate_result is None - assert mapping_result is None - - -def test_default_should_use_dify_config_limits(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_MAX_SIZE", 111) - monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_ARRAY_LENGTH", 7) - monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_STRING_LENGTH", 33) - - # Act - truncator = VariableTruncator.default() - - # Assert - assert truncator._max_size_bytes == 111 - assert truncator._array_element_limit == 7 - assert truncator._string_length_limit == 33 - - -def test_truncate_variable_mapping_should_mark_over_budget_keys_with_ellipsis() -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=5) - mapping = {"very_long_key": "value"} - - # Act - result, truncated = truncator.truncate_variable_mapping(mapping) - - # Assert - assert result == {"very_long_key": "..."} - assert truncated is True - - -def test_truncate_variable_mapping_should_handle_segment_values() -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=100) - mapping = {"seg": StringSegment(value="hello")} - - # Act - result, truncated = truncator.truncate_variable_mapping(mapping) - - # Assert - assert isinstance(result["seg"], StringSegment) - assert result["seg"].value == "hello" - assert truncated is False - - -@pytest.mark.parametrize( - ("value", "expected"), - [ - (None, False), - (True, False), - (1, False), - (1.5, False), - ("x", True), - ({"k": "v"}, True), - ], -) -def test_json_value_needs_truncation_should_match_expected_rules(value: Any, expected: bool) -> None: - # Arrange - - # Act - result = VariableTruncator._json_value_needs_truncation(value) - - # Assert - assert result is expected - - -def test_truncate_should_use_string_fallback_when_truncated_value_size_exceeds_limit( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=10) - forced_result = truncator_module._PartResult( - value=StringSegment(value="this is too long"), - value_size=100, - truncated=True, - ) - monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) - - # Act - result = truncator.truncate(StringSegment(value="input")) - - # Assert - assert result.truncated is True - assert isinstance(result.result, StringSegment) - assert not result.result.value.startswith('"') - - -def test_truncate_segment_should_raise_assertion_for_unexpected_truncatable_segment( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - truncator = VariableTruncator() - monkeypatch.setattr(VariableTruncator, "_segment_need_truncation", lambda _segment: True) - - # Act / Assert - with pytest.raises(AssertionError): - truncator._truncate_segment(IntegerSegment(value=1), 10) - - -def test_calculate_json_size_should_unwrap_segment_values() -> None: - # Arrange - segment = StringSegment(value="abc") - - # Act - size = VariableTruncator.calculate_json_size(segment) - - # Assert - assert size == VariableTruncator.calculate_json_size("abc") - - -def test_calculate_json_size_should_handle_updated_variable_instances() -> None: - # Arrange - updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") - - # Act - size = VariableTruncator.calculate_json_size(updated) - - # Assert - assert size > 0 - - -def test_maybe_qa_structure_should_validate_shape() -> None: - # Arrange - - # Act / Assert - assert VariableTruncator._maybe_qa_structure({"qa_chunks": []}) is True - assert VariableTruncator._maybe_qa_structure({"qa_chunks": "not-list"}) is False - assert VariableTruncator._maybe_qa_structure({}) is False - - -def test_maybe_parent_child_structure_should_validate_shape() -> None: - # Arrange - - # Act / Assert - assert VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": []}) is True - assert VariableTruncator._maybe_parent_child_structure({"parent_mode": 1, "parent_child_chunks": []}) is False - assert ( - VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": "bad"}) is False - ) - - -def test_truncate_object_should_truncate_segment_values_inside_object() -> None: - # Arrange - truncator = VariableTruncator(string_length_limit=8, max_size_bytes=30) - mapping = {"s": StringSegment(value="long-content")} - - # Act - result = truncator._truncate_object(mapping, 20) - - # Assert - assert result.truncated is True - assert isinstance(result.value["s"], StringSegment) - - -def test_truncate_json_primitives_should_handle_updated_variable_input() -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=100) - updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") - - # Act - result = truncator._truncate_json_primitives(updated, 100) - - # Assert - assert isinstance(result.value, dict) - - -def test_truncate_json_primitives_should_raise_assertion_for_unsupported_value_type() -> None: - # Arrange - truncator = VariableTruncator() - - # Act / Assert - with pytest.raises(AssertionError): - truncator._truncate_json_primitives(object(), 100) # type: ignore[arg-type] - - -def test_truncate_should_apply_json_string_fallback_for_large_non_string_segment( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=10) - forced_segment = ObjectSegment(value={"k": "v"}) - forced_result = truncator_module._PartResult(value=forced_segment, value_size=100, truncated=True) - monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) - - # Act - result = truncator.truncate(ObjectSegment(value={"a": "b"})) - - # Assert - assert result.truncated is True - assert isinstance(result.result, StringSegment) diff --git a/api/tests/unit_tests/services/test_variable_truncator_additional.py b/api/tests/unit_tests/services/test_variable_truncator_additional.py new file mode 100644 index 0000000000..e9427c4ab3 --- /dev/null +++ b/api/tests/unit_tests/services/test_variable_truncator_additional.py @@ -0,0 +1,174 @@ +from collections.abc import Mapping +from typing import Any + +import pytest + +from graphon.nodes.variable_assigner.common.helpers import UpdatedVariable +from graphon.variables.segments import IntegerSegment, ObjectSegment, StringSegment +from graphon.variables.types import SegmentType +from services import variable_truncator as truncator_module +from services.variable_truncator import BaseTruncator, TruncationResult, VariableTruncator + + +class _AbstractPassthrough(BaseTruncator): + def truncate(self, segment: Any) -> TruncationResult: + return super().truncate(segment) # type: ignore[misc] + + def truncate_variable_mapping(self, v: Mapping[str, Any]) -> tuple[Mapping[str, Any], bool]: + return super().truncate_variable_mapping(v) # type: ignore[misc] + + +class TestBaseTruncatorContract: + def test_base_truncator_methods_should_execute_abstract_placeholders(self) -> None: + passthrough = _AbstractPassthrough() + + truncate_result = passthrough.truncate(StringSegment(value="x")) + mapping_result = passthrough.truncate_variable_mapping({"a": 1}) + + assert truncate_result is None + assert mapping_result is None + + +class TestVariableTruncatorAdditionalBehavior: + def test_default_should_use_dify_config_limits(self, monkeypatch: pytest.MonkeyPatch) -> None: + monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_MAX_SIZE", 111) + monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_ARRAY_LENGTH", 7) + monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_STRING_LENGTH", 33) + + truncator = VariableTruncator.default() + + assert truncator._max_size_bytes == 111 + assert truncator._array_element_limit == 7 + assert truncator._string_length_limit == 33 + + def test_truncate_variable_mapping_should_mark_over_budget_keys_with_ellipsis(self) -> None: + truncator = VariableTruncator(max_size_bytes=5) + mapping = {"very_long_key": "value"} + + result, truncated = truncator.truncate_variable_mapping(mapping) + + assert result == {"very_long_key": "..."} + assert truncated is True + + def test_truncate_variable_mapping_should_handle_segment_values(self) -> None: + truncator = VariableTruncator(max_size_bytes=100) + mapping = {"seg": StringSegment(value="hello")} + + result, truncated = truncator.truncate_variable_mapping(mapping) + + assert isinstance(result["seg"], StringSegment) + assert result["seg"].value == "hello" + assert truncated is False + + @pytest.mark.parametrize( + ("value", "expected"), + [ + (None, False), + (True, False), + (1, False), + (1.5, False), + ("x", True), + ({"k": "v"}, True), + ], + ) + def test_json_value_needs_truncation_should_match_expected_rules( + self, + value: Any, + expected: bool, + ) -> None: + result = VariableTruncator._json_value_needs_truncation(value) + assert result is expected + + def test_truncate_should_use_string_fallback_when_truncated_value_size_exceeds_limit( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + truncator = VariableTruncator(max_size_bytes=10) + forced_result = truncator_module._PartResult( + value=StringSegment(value="this is too long"), + value_size=100, + truncated=True, + ) + monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) + + result = truncator.truncate(StringSegment(value="input")) + + assert result.truncated is True + assert isinstance(result.result, StringSegment) + assert not result.result.value.startswith('"') + + def test_truncate_segment_should_raise_assertion_for_unexpected_truncatable_segment( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + truncator = VariableTruncator() + monkeypatch.setattr(VariableTruncator, "_segment_need_truncation", lambda _segment: True) + + with pytest.raises(AssertionError): + truncator._truncate_segment(IntegerSegment(value=1), 10) + + def test_calculate_json_size_should_unwrap_segment_values(self) -> None: + segment = StringSegment(value="abc") + + size = VariableTruncator.calculate_json_size(segment) + + assert size == VariableTruncator.calculate_json_size("abc") + + def test_calculate_json_size_should_handle_updated_variable_instances(self) -> None: + updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") + + size = VariableTruncator.calculate_json_size(updated) + + assert size > 0 + + def test_maybe_qa_structure_should_validate_shape(self) -> None: + assert VariableTruncator._maybe_qa_structure({"qa_chunks": []}) is True + assert VariableTruncator._maybe_qa_structure({"qa_chunks": "not-list"}) is False + assert VariableTruncator._maybe_qa_structure({}) is False + + def test_maybe_parent_child_structure_should_validate_shape(self) -> None: + assert ( + VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": []}) is True + ) + assert VariableTruncator._maybe_parent_child_structure({"parent_mode": 1, "parent_child_chunks": []}) is False + assert ( + VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": "bad"}) + is False + ) + + def test_truncate_object_should_truncate_segment_values_inside_object(self) -> None: + truncator = VariableTruncator(string_length_limit=8, max_size_bytes=30) + mapping = {"s": StringSegment(value="long-content")} + + result = truncator._truncate_object(mapping, 20) + + assert result.truncated is True + assert isinstance(result.value["s"], StringSegment) + + def test_truncate_json_primitives_should_handle_updated_variable_input(self) -> None: + truncator = VariableTruncator(max_size_bytes=100) + updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") + + result = truncator._truncate_json_primitives(updated, 100) + + assert isinstance(result.value, dict) + + def test_truncate_json_primitives_should_raise_assertion_for_unsupported_value_type(self) -> None: + truncator = VariableTruncator() + + with pytest.raises(AssertionError): + truncator._truncate_json_primitives(object(), 100) # type: ignore[arg-type] + + def test_truncate_should_apply_json_string_fallback_for_large_non_string_segment( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + truncator = VariableTruncator(max_size_bytes=10) + forced_segment = ObjectSegment(value={"k": "v"}) + forced_result = truncator_module._PartResult(value=forced_segment, value_size=100, truncated=True) + monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) + + result = truncator.truncate(ObjectSegment(value={"a": "b"})) + + assert result.truncated is True + assert isinstance(result.result, StringSegment) diff --git a/api/tests/unit_tests/services/test_webhook_service.py b/api/tests/unit_tests/services/test_webhook_service.py index c86ed2debd..ffdcc046f9 100644 --- a/api/tests/unit_tests/services/test_webhook_service.py +++ b/api/tests/unit_tests/services/test_webhook_service.py @@ -559,756 +559,3 @@ class TestWebhookServiceUnit: result = _prepare_webhook_execution("test_webhook", is_debug=True) assert result == (mock_trigger, mock_workflow, mock_config, mock_data, None) - - -# === Merged from test_webhook_service_additional.py === - - -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest -from flask import Flask -from graphon.variables.types import SegmentType -from werkzeug.datastructures import FileStorage -from werkzeug.exceptions import RequestEntityTooLarge - -from core.workflow.nodes.trigger_webhook.entities import ( - ContentType, - WebhookBodyParameter, - WebhookData, - WebhookParameter, -) -from models.enums import AppTriggerStatus -from models.model import App -from models.trigger import WorkflowWebhookTrigger -from models.workflow import Workflow -from services.errors.app import QuotaExceededError -from services.trigger import webhook_service as service_module -from services.trigger.webhook_service import WebhookService - - -class _FakeQuery: - def __init__(self, result: Any) -> None: - self._result = result - - def where(self, *args: Any, **kwargs: Any) -> "_FakeQuery": - return self - - def filter(self, *args: Any, **kwargs: Any) -> "_FakeQuery": - return self - - def order_by(self, *args: Any, **kwargs: Any) -> "_FakeQuery": - return self - - def first(self) -> Any: - return self._result - - -class _SessionContext: - def __init__(self, session: Any) -> None: - self._session = session - - def __enter__(self) -> Any: - return self._session - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -@pytest.fixture -def flask_app() -> Flask: - return Flask(__name__) - - -def _patch_session(monkeypatch: pytest.MonkeyPatch, session: Any) -> None: - monkeypatch.setattr(service_module, "db", SimpleNamespace(engine=MagicMock(), session=MagicMock())) - monkeypatch.setattr(service_module, "Session", lambda *args, **kwargs: _SessionContext(session)) - - -def _workflow_trigger(**kwargs: Any) -> WorkflowWebhookTrigger: - return cast(WorkflowWebhookTrigger, SimpleNamespace(**kwargs)) - - -def _workflow(**kwargs: Any) -> Workflow: - return cast(Workflow, SimpleNamespace(**kwargs)) - - -def _app(**kwargs: Any) -> App: - return cast(App, SimpleNamespace(**kwargs)) - - -def test_get_webhook_trigger_and_workflow_should_raise_when_webhook_not_found(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - fake_session = MagicMock() - fake_session.query.return_value = _FakeQuery(None) - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="Webhook not found"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_app_trigger_not_found( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - fake_session = MagicMock() - fake_session.query.side_effect = [_FakeQuery(webhook_trigger), _FakeQuery(None)] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="App trigger not found"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_app_trigger_rate_limited( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.RATE_LIMITED) - fake_session = MagicMock() - fake_session.query.side_effect = [_FakeQuery(webhook_trigger), _FakeQuery(app_trigger)] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="rate limited"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_app_trigger_disabled( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.DISABLED) - fake_session = MagicMock() - fake_session.query.side_effect = [_FakeQuery(webhook_trigger), _FakeQuery(app_trigger)] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="disabled"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_workflow_not_found(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.ENABLED) - fake_session = MagicMock() - fake_session.query.side_effect = [_FakeQuery(webhook_trigger), _FakeQuery(app_trigger), _FakeQuery(None)] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="Workflow not found"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_return_values_for_non_debug_mode( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.ENABLED) - workflow = MagicMock() - workflow.get_node_config_by_id.return_value = {"data": {"key": "value"}} - - fake_session = MagicMock() - fake_session.query.side_effect = [_FakeQuery(webhook_trigger), _FakeQuery(app_trigger), _FakeQuery(workflow)] - _patch_session(monkeypatch, fake_session) - - # Act - got_trigger, got_workflow, got_node_config = WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - # Assert - assert got_trigger is webhook_trigger - assert got_workflow is workflow - assert got_node_config == {"data": {"key": "value"}} - - -def test_get_webhook_trigger_and_workflow_should_return_values_for_debug_mode(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - workflow = MagicMock() - workflow.get_node_config_by_id.return_value = {"data": {"mode": "debug"}} - - fake_session = MagicMock() - fake_session.query.side_effect = [_FakeQuery(webhook_trigger), _FakeQuery(workflow)] - _patch_session(monkeypatch, fake_session) - - # Act - got_trigger, got_workflow, got_node_config = WebhookService.get_webhook_trigger_and_workflow( - "webhook-1", is_debug=True - ) - - # Assert - assert got_trigger is webhook_trigger - assert got_workflow is workflow - assert got_node_config == {"data": {"mode": "debug"}} - - -def test_extract_webhook_data_should_use_text_fallback_for_unknown_content_type( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - warning_mock = MagicMock() - monkeypatch.setattr(service_module.logger, "warning", warning_mock) - webhook_trigger = MagicMock() - - # Act - with flask_app.test_request_context( - "/webhook", - method="POST", - headers={"Content-Type": "application/vnd.custom"}, - data="plain content", - ): - result = WebhookService.extract_webhook_data(webhook_trigger) - - # Assert - assert result["body"] == {"raw": "plain content"} - warning_mock.assert_called_once() - - -def test_extract_webhook_data_should_raise_for_request_too_large( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - monkeypatch.setattr(service_module.dify_config, "WEBHOOK_REQUEST_BODY_MAX_SIZE", 1) - - # Act / Assert - with flask_app.test_request_context("/webhook", method="POST", data="ab"): - with pytest.raises(RequestEntityTooLarge): - WebhookService.extract_webhook_data(MagicMock()) - - -def test_extract_octet_stream_body_should_return_none_when_empty_payload(flask_app: Flask) -> None: - # Arrange - webhook_trigger = MagicMock() - - # Act - with flask_app.test_request_context("/webhook", method="POST", data=b""): - body, files = WebhookService._extract_octet_stream_body(webhook_trigger) - - # Assert - assert body == {"raw": None} - assert files == {} - - -def test_extract_octet_stream_body_should_return_none_when_processing_raises( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = MagicMock() - monkeypatch.setattr(WebhookService, "_detect_binary_mimetype", MagicMock(return_value="application/octet-stream")) - monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(side_effect=RuntimeError("boom"))) - - # Act - with flask_app.test_request_context("/webhook", method="POST", data=b"abc"): - body, files = WebhookService._extract_octet_stream_body(webhook_trigger) - - # Assert - assert body == {"raw": None} - assert files == {} - - -def test_extract_text_body_should_return_empty_string_when_request_read_fails( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - monkeypatch.setattr("flask.wrappers.Request.get_data", MagicMock(side_effect=RuntimeError("read error"))) - - # Act - with flask_app.test_request_context("/webhook", method="POST", data="abc"): - body, files = WebhookService._extract_text_body() - - # Assert - assert body == {"raw": ""} - assert files == {} - - -def test_detect_binary_mimetype_should_fallback_when_magic_raises(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - fake_magic = MagicMock() - fake_magic.from_buffer.side_effect = RuntimeError("magic failed") - monkeypatch.setattr(service_module, "magic", fake_magic) - - # Act - result = WebhookService._detect_binary_mimetype(b"binary") - - # Assert - assert result == "application/octet-stream" - - -def test_process_file_uploads_should_use_octet_stream_fallback_when_mimetype_unknown( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") - file_obj = MagicMock() - file_obj.to_dict.return_value = {"id": "f-1"} - monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(return_value=file_obj)) - monkeypatch.setattr(service_module.mimetypes, "guess_type", MagicMock(return_value=(None, None))) - - uploaded = MagicMock() - uploaded.filename = "file.unknown" - uploaded.content_type = None - uploaded.read.return_value = b"content" - - # Act - result = WebhookService._process_file_uploads({"f": uploaded}, webhook_trigger) - - # Assert - assert result == {"f": {"id": "f-1"}} - - -def test_create_file_from_binary_should_call_tool_file_manager_and_file_factory( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") - manager = MagicMock() - manager.create_file_by_raw.return_value = SimpleNamespace(id="tool-file-1") - monkeypatch.setattr(service_module, "ToolFileManager", MagicMock(return_value=manager)) - expected_file = MagicMock() - monkeypatch.setattr(service_module.file_factory, "build_from_mapping", MagicMock(return_value=expected_file)) - - # Act - result = WebhookService._create_file_from_binary(b"abc", "text/plain", webhook_trigger) - - # Assert - assert result is expected_file - manager.create_file_by_raw.assert_called_once() - - -@pytest.mark.parametrize( - ("raw_value", "param_type", "expected"), - [ - ("42", SegmentType.NUMBER, 42), - ("3.14", SegmentType.NUMBER, 3.14), - ("yes", SegmentType.BOOLEAN, True), - ("no", SegmentType.BOOLEAN, False), - ], -) -def test_convert_form_value_should_convert_supported_types( - raw_value: str, - param_type: str, - expected: Any, -) -> None: - # Arrange - - # Act - result = WebhookService._convert_form_value("param", raw_value, param_type) - - # Assert - assert result == expected - - -def test_convert_form_value_should_raise_for_unsupported_type() -> None: - # Arrange - - # Act / Assert - with pytest.raises(ValueError, match="Unsupported type"): - WebhookService._convert_form_value("p", "x", SegmentType.FILE) - - -def test_validate_json_value_should_return_original_for_unmapped_supported_segment_type( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - warning_mock = MagicMock() - monkeypatch.setattr(service_module.logger, "warning", warning_mock) - - # Act - result = WebhookService._validate_json_value("param", {"x": 1}, "unsupported-type") - - # Assert - assert result == {"x": 1} - warning_mock.assert_called_once() - - -def test_validate_and_convert_value_should_wrap_conversion_errors() -> None: - # Arrange - - # Act / Assert - with pytest.raises(ValueError, match="validation failed"): - WebhookService._validate_and_convert_value("param", "bad", SegmentType.NUMBER, is_form_data=True) - - -def test_process_parameters_should_raise_when_required_parameter_missing() -> None: - # Arrange - raw_params = {"optional": "x"} - config = [WebhookParameter(name="required_param", type=SegmentType.STRING, required=True)] - - # Act / Assert - with pytest.raises(ValueError, match="Required parameter missing"): - WebhookService._process_parameters(raw_params, config, is_form_data=True) - - -def test_process_parameters_should_include_unconfigured_parameters() -> None: - # Arrange - raw_params = {"known": "1", "unknown": "x"} - config = [WebhookParameter(name="known", type=SegmentType.NUMBER, required=False)] - - # Act - result = WebhookService._process_parameters(raw_params, config, is_form_data=True) - - # Assert - assert result == {"known": 1, "unknown": "x"} - - -def test_process_body_parameters_should_raise_when_required_text_raw_is_missing() -> None: - # Arrange - - # Act / Assert - with pytest.raises(ValueError, match="Required body content missing"): - WebhookService._process_body_parameters( - raw_body={"raw": ""}, - body_configs=[WebhookBodyParameter(name="raw", required=True)], - content_type=ContentType.TEXT, - ) - - -def test_process_body_parameters_should_skip_file_config_for_multipart_form_data() -> None: - # Arrange - raw_body = {"message": "hello", "extra": "x"} - body_configs = [ - WebhookBodyParameter(name="upload", type=SegmentType.FILE, required=True), - WebhookBodyParameter(name="message", type=SegmentType.STRING, required=True), - ] - - # Act - result = WebhookService._process_body_parameters(raw_body, body_configs, ContentType.FORM_DATA) - - # Assert - assert result == {"message": "hello", "extra": "x"} - - -def test_validate_required_headers_should_accept_sanitized_header_names() -> None: - # Arrange - headers = {"x_api_key": "123"} - configs = [WebhookParameter(name="x-api-key", required=True)] - - # Act - WebhookService._validate_required_headers(headers, configs) - - # Assert - assert True - - -def test_validate_required_headers_should_raise_when_required_header_missing() -> None: - # Arrange - headers = {"x-other": "123"} - configs = [WebhookParameter(name="x-api-key", required=True)] - - # Act / Assert - with pytest.raises(ValueError, match="Required header missing"): - WebhookService._validate_required_headers(headers, configs) - - -def test_validate_http_metadata_should_return_content_type_mismatch_error() -> None: - # Arrange - webhook_data = {"method": "POST", "headers": {"Content-Type": "application/json"}} - node_data = WebhookData(method="post", content_type=ContentType.TEXT) - - # Act - result = WebhookService._validate_http_metadata(webhook_data, node_data) - - # Assert - assert result["valid"] is False - assert "Content-type mismatch" in result["error"] - - -def test_extract_content_type_should_fallback_to_lowercase_header_key() -> None: - # Arrange - headers = {"content-type": "application/json; charset=utf-8"} - - # Act - result = WebhookService._extract_content_type(headers) - - # Assert - assert result == "application/json" - - -def test_build_workflow_inputs_should_include_expected_keys() -> None: - # Arrange - webhook_data = {"headers": {"h": "v"}, "query_params": {"q": 1}, "body": {"b": 2}} - - # Act - result = WebhookService.build_workflow_inputs(webhook_data) - - # Assert - assert result["webhook_data"] == webhook_data - assert result["webhook_headers"] == {"h": "v"} - assert result["webhook_query_params"] == {"q": 1} - assert result["webhook_body"] == {"b": 2} - - -def test_trigger_workflow_execution_should_trigger_async_workflow_successfully(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = _workflow_trigger( - app_id="app-1", - node_id="node-1", - tenant_id="tenant-1", - webhook_id="webhook-1", - ) - workflow = _workflow(id="wf-1") - webhook_data = {"body": {"x": 1}} - - session = MagicMock() - _patch_session(monkeypatch, session) - - end_user = SimpleNamespace(id="end-user-1") - monkeypatch.setattr( - service_module.EndUserService, "get_or_create_end_user_by_type", MagicMock(return_value=end_user) - ) - quota_type = SimpleNamespace(TRIGGER=SimpleNamespace(consume=MagicMock())) - monkeypatch.setattr(service_module, "QuotaType", quota_type) - trigger_async_mock = MagicMock() - monkeypatch.setattr(service_module.AsyncWorkflowService, "trigger_workflow_async", trigger_async_mock) - - # Act - WebhookService.trigger_workflow_execution(webhook_trigger, webhook_data, workflow) - - # Assert - trigger_async_mock.assert_called_once() - - -def test_trigger_workflow_execution_should_mark_tenant_rate_limited_when_quota_exceeded( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = _workflow_trigger( - app_id="app-1", - node_id="node-1", - tenant_id="tenant-1", - webhook_id="webhook-1", - ) - workflow = _workflow(id="wf-1") - - session = MagicMock() - _patch_session(monkeypatch, session) - - monkeypatch.setattr( - service_module.EndUserService, - "get_or_create_end_user_by_type", - MagicMock(return_value=SimpleNamespace(id="end-user-1")), - ) - monkeypatch.setattr( - service_module.QuotaService, - "reserve", - MagicMock(side_effect=QuotaExceededError(feature="trigger", tenant_id="tenant-1", required=1)), - ) - mark_rate_limited_mock = MagicMock() - monkeypatch.setattr(service_module.AppTriggerService, "mark_tenant_triggers_rate_limited", mark_rate_limited_mock) - - # Act / Assert - with pytest.raises(QuotaExceededError): - WebhookService.trigger_workflow_execution(webhook_trigger, {"body": {}}, workflow) - mark_rate_limited_mock.assert_called_once_with("tenant-1") - - -def test_trigger_workflow_execution_should_log_and_reraise_unexpected_errors(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = _workflow_trigger( - app_id="app-1", - node_id="node-1", - tenant_id="tenant-1", - webhook_id="webhook-1", - ) - workflow = _workflow(id="wf-1") - - session = MagicMock() - _patch_session(monkeypatch, session) - - monkeypatch.setattr( - service_module.EndUserService, "get_or_create_end_user_by_type", MagicMock(side_effect=RuntimeError("boom")) - ) - logger_exception_mock = MagicMock() - monkeypatch.setattr(service_module.logger, "exception", logger_exception_mock) - - # Act / Assert - with pytest.raises(RuntimeError, match="boom"): - WebhookService.trigger_workflow_execution(webhook_trigger, {"body": {}}, workflow) - logger_exception_mock.assert_called_once() - - -def test_sync_webhook_relationships_should_raise_when_workflow_exceeds_node_limit() -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow( - walk_nodes=lambda _node_type: [ - (f"node-{i}", {}) for i in range(WebhookService.MAX_WEBHOOK_NODES_PER_WORKFLOW + 1) - ] - ) - - # Act / Assert - with pytest.raises(ValueError, match="maximum webhook node limit"): - WebhookService.sync_webhook_relationships(app, workflow) - - -def test_sync_webhook_relationships_should_raise_when_lock_not_acquired(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow(walk_nodes=lambda _node_type: [("node-1", {})]) - - lock = MagicMock() - lock.acquire.return_value = False - monkeypatch.setattr(service_module.redis_client, "get", MagicMock(return_value=None)) - monkeypatch.setattr(service_module.redis_client, "lock", MagicMock(return_value=lock)) - - # Act / Assert - with pytest.raises(RuntimeError, match="Failed to acquire lock"): - WebhookService.sync_webhook_relationships(app, workflow) - - -def test_sync_webhook_relationships_should_create_missing_records_and_delete_stale_records( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow(walk_nodes=lambda _node_type: [("node-new", {})]) - - class _WorkflowWebhookTrigger: - app_id = "app_id" - tenant_id = "tenant_id" - webhook_id = "webhook_id" - node_id = "node_id" - - def __init__(self, app_id: str, tenant_id: str, node_id: str, webhook_id: str, created_by: str) -> None: - self.id = None - self.app_id = app_id - self.tenant_id = tenant_id - self.node_id = node_id - self.webhook_id = webhook_id - self.created_by = created_by - - class _Select: - def where(self, *args: Any, **kwargs: Any) -> "_Select": - return self - - class _Session: - def __init__(self) -> None: - self.added: list[Any] = [] - self.deleted: list[Any] = [] - self.commit_count = 0 - self.existing_records = [SimpleNamespace(node_id="node-stale")] - - def scalars(self, _stmt: Any) -> Any: - return SimpleNamespace(all=lambda: self.existing_records) - - def add(self, obj: Any) -> None: - self.added.append(obj) - - def flush(self) -> None: - for idx, obj in enumerate(self.added, start=1): - if obj.id is None: - obj.id = f"rec-{idx}" - - def commit(self) -> None: - self.commit_count += 1 - - def delete(self, obj: Any) -> None: - self.deleted.append(obj) - - lock = MagicMock() - lock.acquire.return_value = True - lock.release.return_value = None - - fake_session = _Session() - - monkeypatch.setattr(service_module, "WorkflowWebhookTrigger", _WorkflowWebhookTrigger) - monkeypatch.setattr(service_module, "select", MagicMock(return_value=_Select())) - monkeypatch.setattr(service_module.redis_client, "get", MagicMock(return_value=None)) - monkeypatch.setattr(service_module.redis_client, "lock", MagicMock(return_value=lock)) - redis_set_mock = MagicMock() - redis_delete_mock = MagicMock() - monkeypatch.setattr(service_module.redis_client, "set", redis_set_mock) - monkeypatch.setattr(service_module.redis_client, "delete", redis_delete_mock) - monkeypatch.setattr(WebhookService, "generate_webhook_id", MagicMock(return_value="generated-webhook-id")) - _patch_session(monkeypatch, fake_session) - - # Act - WebhookService.sync_webhook_relationships(app, workflow) - - # Assert - assert len(fake_session.added) == 1 - assert len(fake_session.deleted) == 1 - assert fake_session.commit_count == 2 - redis_set_mock.assert_called_once() - redis_delete_mock.assert_called_once() - lock.release.assert_called_once() - - -def test_sync_webhook_relationships_should_log_when_lock_release_fails(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow(walk_nodes=lambda _node_type: []) - - class _Select: - def where(self, *args: Any, **kwargs: Any) -> "_Select": - return self - - class _Session: - def scalars(self, _stmt: Any) -> Any: - return SimpleNamespace(all=lambda: []) - - def commit(self) -> None: - return None - - lock = MagicMock() - lock.acquire.return_value = True - lock.release.side_effect = RuntimeError("release failed") - - logger_exception_mock = MagicMock() - - monkeypatch.setattr(service_module, "select", MagicMock(return_value=_Select())) - monkeypatch.setattr(service_module.redis_client, "get", MagicMock(return_value=None)) - monkeypatch.setattr(service_module.redis_client, "lock", MagicMock(return_value=lock)) - monkeypatch.setattr(service_module.logger, "exception", logger_exception_mock) - _patch_session(monkeypatch, _Session()) - - # Act - WebhookService.sync_webhook_relationships(app, workflow) - - # Assert - assert logger_exception_mock.call_count == 1 - - -def test_generate_webhook_response_should_fallback_when_response_body_is_not_json() -> None: - # Arrange - node_config = {"data": {"status_code": 200, "response_body": "{bad-json"}} - - # Act - body, status = WebhookService.generate_webhook_response(node_config) - - # Assert - assert status == 200 - assert "message" in body - - -def test_generate_webhook_id_should_return_24_character_identifier() -> None: - # Arrange - - # Act - webhook_id = WebhookService.generate_webhook_id() - - # Assert - assert isinstance(webhook_id, str) - assert len(webhook_id) == 24 - - -def test_sanitize_key_should_return_original_value_for_non_string_input() -> None: - # Arrange - - # Act - result = WebhookService._sanitize_key(123) # type: ignore[arg-type] - - # Assert - assert result == 123 diff --git a/api/tests/unit_tests/services/test_webhook_service_additional.py b/api/tests/unit_tests/services/test_webhook_service_additional.py new file mode 100644 index 0000000000..776cb5dc3f --- /dev/null +++ b/api/tests/unit_tests/services/test_webhook_service_additional.py @@ -0,0 +1,292 @@ +from types import SimpleNamespace +from typing import Any +from unittest.mock import MagicMock + +import pytest +from flask import Flask +from werkzeug.exceptions import RequestEntityTooLarge + +from core.workflow.nodes.trigger_webhook.entities import ( + ContentType, + WebhookBodyParameter, + WebhookData, + WebhookParameter, +) +from graphon.variables.types import SegmentType +from services.trigger import webhook_service as service_module +from services.trigger.webhook_service import WebhookService + + +class _FakeQuery: + def __init__(self, result: Any) -> None: + self._result = result + + def where(self, *args: Any, **kwargs: Any) -> "_FakeQuery": + return self + + def filter(self, *args: Any, **kwargs: Any) -> "_FakeQuery": + return self + + def order_by(self, *args: Any, **kwargs: Any) -> "_FakeQuery": + return self + + def first(self) -> Any: + return self._result + + +@pytest.fixture +def flask_app() -> Flask: + return Flask(__name__) + + +def _workflow_trigger(**kwargs: Any) -> Any: + return SimpleNamespace(**kwargs) + + +class TestWebhookServiceExtractionFallbacks: + def test_extract_webhook_data_should_use_text_fallback_for_unknown_content_type( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + warning_mock = MagicMock() + monkeypatch.setattr(service_module.logger, "warning", warning_mock) + webhook_trigger = MagicMock() + + with flask_app.test_request_context( + "/webhook", + method="POST", + headers={"Content-Type": "application/vnd.custom"}, + data="plain content", + ): + result = WebhookService.extract_webhook_data(webhook_trigger) + + assert result["body"] == {"raw": "plain content"} + warning_mock.assert_called_once() + + def test_extract_webhook_data_should_raise_for_request_too_large( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + monkeypatch.setattr(service_module.dify_config, "WEBHOOK_REQUEST_BODY_MAX_SIZE", 1) + + with flask_app.test_request_context("/webhook", method="POST", data="ab"): + with pytest.raises(RequestEntityTooLarge): + WebhookService.extract_webhook_data(MagicMock()) + + def test_extract_octet_stream_body_should_return_none_when_empty_payload(self, flask_app: Flask) -> None: + webhook_trigger = MagicMock() + + with flask_app.test_request_context("/webhook", method="POST", data=b""): + body, files = WebhookService._extract_octet_stream_body(webhook_trigger) + + assert body == {"raw": None} + assert files == {} + + def test_extract_octet_stream_body_should_return_none_when_processing_raises( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + webhook_trigger = MagicMock() + monkeypatch.setattr( + WebhookService, "_detect_binary_mimetype", MagicMock(return_value="application/octet-stream") + ) + monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(side_effect=RuntimeError("boom"))) + + with flask_app.test_request_context("/webhook", method="POST", data=b"abc"): + body, files = WebhookService._extract_octet_stream_body(webhook_trigger) + + assert body == {"raw": None} + assert files == {} + + def test_extract_text_body_should_return_empty_string_when_request_read_fails( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + monkeypatch.setattr("flask.wrappers.Request.get_data", MagicMock(side_effect=RuntimeError("read error"))) + + with flask_app.test_request_context("/webhook", method="POST", data="abc"): + body, files = WebhookService._extract_text_body() + + assert body == {"raw": ""} + assert files == {} + + def test_detect_binary_mimetype_should_fallback_when_magic_raises( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + fake_magic = MagicMock() + fake_magic.from_buffer.side_effect = RuntimeError("magic failed") + monkeypatch.setattr(service_module, "magic", fake_magic) + + result = WebhookService._detect_binary_mimetype(b"binary") + + assert result == "application/octet-stream" + + def test_process_file_uploads_should_use_octet_stream_fallback_when_mimetype_unknown( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") + file_obj = MagicMock() + file_obj.to_dict.return_value = {"id": "f-1"} + monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(return_value=file_obj)) + monkeypatch.setattr(service_module.mimetypes, "guess_type", MagicMock(return_value=(None, None))) + + uploaded = MagicMock() + uploaded.filename = "file.unknown" + uploaded.content_type = None + uploaded.read.return_value = b"content" + + result = WebhookService._process_file_uploads({"f": uploaded}, webhook_trigger) + + assert result == {"f": {"id": "f-1"}} + + def test_create_file_from_binary_should_call_tool_file_manager_and_file_factory( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") + manager = MagicMock() + manager.create_file_by_raw.return_value = SimpleNamespace(id="tool-file-1") + monkeypatch.setattr(service_module, "ToolFileManager", MagicMock(return_value=manager)) + expected_file = MagicMock() + monkeypatch.setattr(service_module.file_factory, "build_from_mapping", MagicMock(return_value=expected_file)) + + result = WebhookService._create_file_from_binary(b"abc", "text/plain", webhook_trigger) + + assert result is expected_file + manager.create_file_by_raw.assert_called_once() + + +class TestWebhookServiceValidationAndConversion: + @pytest.mark.parametrize( + ("raw_value", "param_type", "expected"), + [ + ("42", SegmentType.NUMBER, 42), + ("3.14", SegmentType.NUMBER, 3.14), + ("yes", SegmentType.BOOLEAN, True), + ("no", SegmentType.BOOLEAN, False), + ], + ) + def test_convert_form_value_should_convert_supported_types( + self, + raw_value: str, + param_type: str, + expected: Any, + ) -> None: + result = WebhookService._convert_form_value("param", raw_value, param_type) + assert result == expected + + def test_convert_form_value_should_raise_for_unsupported_type(self) -> None: + with pytest.raises(ValueError, match="Unsupported type"): + WebhookService._convert_form_value("p", "x", SegmentType.FILE) + + def test_validate_json_value_should_return_original_for_unmapped_supported_segment_type( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + warning_mock = MagicMock() + monkeypatch.setattr(service_module.logger, "warning", warning_mock) + + result = WebhookService._validate_json_value("param", {"x": 1}, "unsupported-type") + + assert result == {"x": 1} + warning_mock.assert_called_once() + + def test_validate_and_convert_value_should_wrap_conversion_errors(self) -> None: + with pytest.raises(ValueError, match="validation failed"): + WebhookService._validate_and_convert_value("param", "bad", SegmentType.NUMBER, is_form_data=True) + + def test_process_parameters_should_raise_when_required_parameter_missing(self) -> None: + raw_params = {"optional": "x"} + config = [WebhookParameter(name="required_param", type=SegmentType.STRING, required=True)] + + with pytest.raises(ValueError, match="Required parameter missing"): + WebhookService._process_parameters(raw_params, config, is_form_data=True) + + def test_process_parameters_should_include_unconfigured_parameters(self) -> None: + raw_params = {"known": "1", "unknown": "x"} + config = [WebhookParameter(name="known", type=SegmentType.NUMBER, required=False)] + + result = WebhookService._process_parameters(raw_params, config, is_form_data=True) + + assert result == {"known": 1, "unknown": "x"} + + def test_process_body_parameters_should_raise_when_required_text_raw_is_missing(self) -> None: + with pytest.raises(ValueError, match="Required body content missing"): + WebhookService._process_body_parameters( + raw_body={"raw": ""}, + body_configs=[WebhookBodyParameter(name="raw", required=True)], + content_type=ContentType.TEXT, + ) + + def test_process_body_parameters_should_skip_file_config_for_multipart_form_data(self) -> None: + raw_body = {"message": "hello", "extra": "x"} + body_configs = [ + WebhookBodyParameter(name="upload", type=SegmentType.FILE, required=True), + WebhookBodyParameter(name="message", type=SegmentType.STRING, required=True), + ] + + result = WebhookService._process_body_parameters(raw_body, body_configs, ContentType.FORM_DATA) + + assert result == {"message": "hello", "extra": "x"} + + def test_validate_required_headers_should_accept_sanitized_header_names(self) -> None: + headers = {"x_api_key": "123"} + configs = [WebhookParameter(name="x-api-key", required=True)] + + WebhookService._validate_required_headers(headers, configs) + + def test_validate_required_headers_should_raise_when_required_header_missing(self) -> None: + headers = {"x-other": "123"} + configs = [WebhookParameter(name="x-api-key", required=True)] + + with pytest.raises(ValueError, match="Required header missing"): + WebhookService._validate_required_headers(headers, configs) + + def test_validate_http_metadata_should_return_content_type_mismatch_error(self) -> None: + webhook_data = {"method": "POST", "headers": {"Content-Type": "application/json"}} + node_data = WebhookData(method="post", content_type=ContentType.TEXT) + + result = WebhookService._validate_http_metadata(webhook_data, node_data) + + assert result["valid"] is False + assert "Content-type mismatch" in result["error"] + + def test_extract_content_type_should_fallback_to_lowercase_header_key(self) -> None: + headers = {"content-type": "application/json; charset=utf-8"} + assert WebhookService._extract_content_type(headers) == "application/json" + + def test_build_workflow_inputs_should_include_expected_keys(self) -> None: + webhook_data = {"headers": {"h": "v"}, "query_params": {"q": 1}, "body": {"b": 2}} + + result = WebhookService.build_workflow_inputs(webhook_data) + + assert result["webhook_data"] == webhook_data + assert result["webhook_headers"] == {"h": "v"} + assert result["webhook_query_params"] == {"q": 1} + assert result["webhook_body"] == {"b": 2} + + +class TestWebhookServiceUtilities: + def test_generate_webhook_response_should_fallback_when_response_body_is_not_json(self) -> None: + node_config = {"data": {"status_code": 200, "response_body": "{bad-json"}} + + body, status = WebhookService.generate_webhook_response(node_config) + + assert status == 200 + assert "message" in body + + def test_generate_webhook_id_should_return_24_character_identifier(self) -> None: + webhook_id = WebhookService.generate_webhook_id() + + assert isinstance(webhook_id, str) + assert len(webhook_id) == 24 + + def test_sanitize_key_should_return_original_value_for_non_string_input(self) -> None: + result = WebhookService._sanitize_key(123) # type: ignore[arg-type] + assert result == 123 diff --git a/api/tests/unit_tests/services/test_website_service.py b/api/tests/unit_tests/services/test_website_service.py index b0ddc7388a..2024aec13a 100644 --- a/api/tests/unit_tests/services/test_website_service.py +++ b/api/tests/unit_tests/services/test_website_service.py @@ -89,7 +89,7 @@ def test_website_crawl_api_request_from_args_valid_and_to_crawl_request() -> Non ({"provider": "firecrawl", "url": "https://example.com"}, "Options are required"), ], ) -def test_website_crawl_api_request_from_args_requires_fields(args: dict, missing_msg: str) -> None: +def test_website_crawl_api_request_from_args_requires_fields(args: dict[str, Any], missing_msg: str) -> None: with pytest.raises(ValueError, match=missing_msg): WebsiteCrawlApiRequest.from_args(args) diff --git a/api/tests/unit_tests/services/test_workflow_collaboration_service.py b/api/tests/unit_tests/services/test_workflow_collaboration_service.py new file mode 100644 index 0000000000..8a6addfece --- /dev/null +++ b/api/tests/unit_tests/services/test_workflow_collaboration_service.py @@ -0,0 +1,608 @@ +from unittest.mock import Mock, patch + +import pytest + +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository +from services.workflow_collaboration_service import WorkflowCollaborationService + + +class TestWorkflowCollaborationService: + @pytest.fixture + def service(self) -> tuple[WorkflowCollaborationService, Mock, Mock]: + repository = Mock(spec=WorkflowCollaborationRepository) + socketio = Mock() + return WorkflowCollaborationService(repository, socketio), repository, socketio + + def test_authorize_and_join_workflow_room_returns_leader_status( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, socketio = service + socketio.get_session.return_value = { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "tenant_id": "t-1", + } + + with ( + patch.object(collaboration_service, "_can_access_workflow", return_value=True), + patch.object(collaboration_service, "get_or_set_leader", return_value="sid-1"), + patch.object(collaboration_service, "broadcast_online_users"), + ): + # Act + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + # Assert + assert result == ("u-1", True) + repository.set_session_info.assert_called_once() + socketio.enter_room.assert_called_once_with("sid-1", "wf-1") + socketio.emit.assert_called_once_with("status", {"isLeader": True}, room="sid-1") + + def test_authorize_and_join_workflow_room_returns_none_when_missing_user( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, _repository, socketio = service + socketio.get_session.return_value = {} + + # Act + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + # Assert + assert result is None + + def test_authorize_and_join_workflow_room_returns_none_when_missing_tenant( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + socketio.get_session.return_value = {"user_id": "u-1", "username": "Jane", "avatar": None} + + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + assert result is None + repository.set_session_info.assert_not_called() + socketio.enter_room.assert_not_called() + socketio.emit.assert_not_called() + + def test_authorize_and_join_workflow_room_returns_none_when_workflow_is_not_accessible( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + socketio.get_session.return_value = { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "tenant_id": "t-1", + } + + with patch.object(collaboration_service, "_can_access_workflow", return_value=False): + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + assert result is None + repository.set_session_info.assert_not_called() + socketio.enter_room.assert_not_called() + socketio.emit.assert_not_called() + + def test_repr_and_save_socket_identity(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + collaboration_service, _repository, socketio = service + user = Mock() + user.id = "u-1" + user.name = "Jane" + user.avatar = "avatar.png" + user.current_tenant_id = "t-1" + + assert "WorkflowCollaborationService" in repr(collaboration_service) + + collaboration_service.save_socket_identity("sid-1", user) + + socketio.save_session.assert_called_once_with( + "sid-1", + {"user_id": "u-1", "username": "Jane", "avatar": "avatar.png", "tenant_id": "t-1"}, + ) + + def test_can_access_workflow_uses_session_factory( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, _repository, _socketio = service + session = Mock() + session.scalar.return_value = "wf-1" + session_context = Mock() + session_context.__enter__ = Mock(return_value=session) + session_context.__exit__ = Mock(return_value=False) + + with patch( + "services.workflow_collaboration_service.session_factory.create_session", + return_value=session_context, + ): + result = collaboration_service._can_access_workflow("wf-1", "tenant-1") + + assert result is True + session.scalar.assert_called_once() + + def test_relay_collaboration_event_unauthorized( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = None + + # Act + result = collaboration_service.relay_collaboration_event("sid-1", {}) + + # Assert + assert result == ({"msg": "unauthorized"}, 401) + + def test_relay_collaboration_event_emits_update( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + payload = {"type": "mouse_move", "data": {"x": 1}, "timestamp": 123} + + # Act + result = collaboration_service.relay_collaboration_event("sid-1", payload) + + # Assert + assert result == ({"msg": "event_broadcasted"}, 200) + socketio.emit.assert_called_once_with( + "collaboration_update", + {"type": "mouse_move", "userId": "u-1", "data": {"x": 1}, "timestamp": 123}, + room="wf-1", + skip_sid="sid-1", + ) + + def test_relay_collaboration_event_requires_event_type( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + + result = collaboration_service.relay_collaboration_event("sid-1", {"data": {"x": 1}}) + + assert result == ({"msg": "invalid event type"}, 400) + + def test_relay_collaboration_event_sync_request_forwards_to_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + repository.get_current_leader.return_value = "sid-leader" + payload = {"type": "sync_request", "data": {"reason": "join"}, "timestamp": 123} + + with ( + patch.object(collaboration_service, "refresh_session_state"), + patch.object(collaboration_service, "is_session_active", return_value=True), + ): + result = collaboration_service.relay_collaboration_event("sid-1", payload) + + assert result == ({"msg": "sync_request_forwarded"}, 200) + socketio.emit.assert_called_once_with( + "collaboration_update", + {"type": "sync_request", "userId": "u-1", "data": {"reason": "join"}, "timestamp": 123}, + room="sid-leader", + ) + repository.set_leader.assert_not_called() + + def test_relay_collaboration_event_sync_request_reelects_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + repository.get_current_leader.return_value = "sid-old" + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + }, + { + "user_id": "u-3", + "username": "C", + "avatar": None, + "sid": "sid-3", + "connected_at": 2, + "graph_active": True, + }, + ] + payload = {"type": "sync_request", "data": {"reason": "join"}, "timestamp": 123} + + def _is_session_active(_workflow_id: str, session_sid: str) -> bool: + return session_sid != "sid-old" + + with ( + patch.object(collaboration_service, "refresh_session_state"), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + patch.object(collaboration_service, "is_session_active", side_effect=_is_session_active), + ): + result = collaboration_service.relay_collaboration_event("sid-2", payload) + + assert result == ({"msg": "sync_request_forwarded"}, 200) + repository.delete_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + socketio.emit.assert_called_once_with( + "collaboration_update", + {"type": "sync_request", "userId": "u-1", "data": {"reason": "join"}, "timestamp": 123}, + room="sid-2", + ) + + def test_relay_collaboration_event_sync_request_returns_when_no_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + repository.get_current_leader.return_value = "sid-old" + repository.list_sessions.return_value = [] + payload = {"type": "sync_request", "data": {"reason": "join"}, "timestamp": 123} + + with ( + patch.object(collaboration_service, "refresh_session_state"), + patch.object(collaboration_service, "is_session_active", return_value=False), + ): + result = collaboration_service.relay_collaboration_event("sid-2", payload) + + assert result == ({"msg": "no_active_leader"}, 200) + repository.delete_leader.assert_called_once_with("wf-1") + socketio.emit.assert_not_called() + + def test_relay_graph_event_unauthorized(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = None + + # Act + result = collaboration_service.relay_graph_event("sid-1", {"nodes": []}) + + # Assert + assert result == ({"msg": "unauthorized"}, 401) + + def test_disconnect_session_no_mapping(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = None + + # Act + collaboration_service.disconnect_session("sid-1") + + # Assert + repository.delete_session.assert_not_called() + + def test_disconnect_session_cleans_up(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + + with ( + patch.object(collaboration_service, "handle_leader_disconnect") as handle_leader_disconnect, + patch.object(collaboration_service, "broadcast_online_users") as broadcast_online_users, + ): + # Act + collaboration_service.disconnect_session("sid-1") + + # Assert + repository.delete_session.assert_called_once_with("wf-1", "sid-1") + handle_leader_disconnect.assert_called_once_with("wf-1", "sid-1") + broadcast_online_users.assert_called_once_with("wf-1") + + def test_get_or_set_leader_returns_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + + with patch.object(collaboration_service, "is_session_active", return_value=True): + # Act + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + # Assert + assert result == "sid-1" + repository.set_leader_if_absent.assert_not_called() + + def test_get_or_set_leader_replaces_dead_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + repository.set_leader_if_absent.return_value = True + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + with ( + patch.object(collaboration_service, "is_session_active", side_effect=lambda _wf, sid: sid != "sid-1"), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + # Act + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + # Assert + assert result == "sid-2" + repository.delete_session.assert_called_once_with("wf-1", "sid-1") + repository.delete_leader.assert_called_once_with("wf-1") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + + def test_get_or_set_leader_falls_back_to_existing( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.side_effect = [None, "sid-3"] + repository.set_leader_if_absent.return_value = False + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + # Act + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + # Assert + assert result == "sid-3" + + def test_get_or_set_leader_returns_sid_when_leader_still_missing( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.get_current_leader.side_effect = [None, None] + repository.set_leader_if_absent.return_value = False + + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + assert result == "sid-2" + + def test_handle_leader_disconnect_elects_new( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + with ( + patch.object(collaboration_service, "is_session_active", return_value=True), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + # Act + collaboration_service.handle_leader_disconnect("wf-1", "sid-1") + + # Assert + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + + def test_handle_leader_disconnect_clears_when_empty( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + repository.list_sessions.return_value = [] + + # Act + collaboration_service.handle_leader_disconnect("wf-1", "sid-1") + + # Assert + repository.delete_leader.assert_called_once_with("wf-1") + + def test_handle_leader_disconnect_ignores_non_leader_or_missing_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + + repository.get_current_leader.return_value = None + collaboration_service.handle_leader_disconnect("wf-1", "sid-1") + + repository.get_current_leader.return_value = "sid-leader" + collaboration_service.handle_leader_disconnect("wf-1", "sid-other") + + repository.set_leader.assert_not_called() + repository.delete_leader.assert_not_called() + + def test_broadcast_leader_change_logs_emit_errors( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_session_sids.return_value = ["sid-1", "sid-2"] + socketio.emit.side_effect = [RuntimeError("boom"), None] + + with patch("services.workflow_collaboration_service.logging.exception") as exception_mock: + collaboration_service.broadcast_leader_change("wf-1", "sid-2") + + assert exception_mock.call_count == 1 + + def test_broadcast_online_users_sorts_and_emits( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, socketio = service + repository.list_sessions.return_value = [ + {"user_id": "u-1", "username": "A", "avatar": None, "sid": "sid-1", "connected_at": 3}, + {"user_id": "u-2", "username": "B", "avatar": None, "sid": "sid-2", "connected_at": 1}, + ] + repository.get_current_leader.return_value = "sid-1" + + with patch.object(collaboration_service, "is_session_active", return_value=True): + # Act + collaboration_service.broadcast_online_users("wf-1") + + # Assert + socketio.emit.assert_called_once_with( + "online_users", + { + "workflow_id": "wf-1", + "users": [ + {"user_id": "u-2", "username": "B", "avatar": None, "sid": "sid-2", "connected_at": 1}, + {"user_id": "u-1", "username": "A", "avatar": None, "sid": "sid-1", "connected_at": 3}, + ], + "leader": "sid-1", + }, + room="wf-1", + ) + + def test_broadcast_online_users_reassigns_missing_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + users = [{"user_id": "u-2", "username": "B", "avatar": None, "sid": "sid-2", "connected_at": 1}] + repository.get_current_leader.return_value = "sid-old" + + with ( + patch.object(collaboration_service, "_prune_inactive_sessions", return_value=users), + patch.object(collaboration_service, "_select_graph_leader", return_value="sid-2"), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + collaboration_service.broadcast_online_users("wf-1") + + repository.delete_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + socketio.emit.assert_called_once_with( + "online_users", + {"workflow_id": "wf-1", "users": users, "leader": "sid-2"}, + room="wf-1", + ) + + def test_refresh_session_state_expires_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + + with patch.object(collaboration_service, "is_session_active", return_value=True): + # Act + collaboration_service.refresh_session_state("wf-1", "sid-1") + + # Assert + repository.refresh_session_state.assert_called_once_with("wf-1", "sid-1") + repository.expire_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_not_called() + + def test_refresh_session_state_sets_leader_when_missing( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = None + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + with ( + patch.object(collaboration_service, "is_session_active", return_value=True), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + # Act + collaboration_service.refresh_session_state("wf-1", "sid-2") + + # Assert + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + + def test_refresh_session_state_replaces_inactive_existing_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-old" + + with ( + patch.object(collaboration_service, "is_session_active", return_value=False), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + collaboration_service.refresh_session_state("wf-1", "sid-new") + + repository.delete_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_called_once_with("wf-1", "sid-new") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-new") + + def test_relay_graph_event_emits_update(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + + # Act + result = collaboration_service.relay_graph_event("sid-1", {"nodes": []}) + + # Assert + assert result == ({"msg": "graph_update_broadcasted"}, 200) + repository.refresh_session_state.assert_called_once_with("wf-1", "sid-1") + socketio.emit.assert_called_once_with("graph_update", {"nodes": []}, room="wf-1", skip_sid="sid-1") + + def test_prune_inactive_sessions_handles_empty_and_removes_stale( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.list_sessions.return_value = [] + assert collaboration_service._prune_inactive_sessions("wf-1") == [] + + active = {"sid": "sid-1", "user_id": "u-1", "connected_at": 1} + stale = {"sid": "sid-2", "user_id": "u-2", "connected_at": 2} + repository.list_sessions.return_value = [active, stale] + + with patch.object( + collaboration_service, + "is_session_active", + side_effect=lambda _workflow_id, sid: sid == "sid-1", + ): + users = collaboration_service._prune_inactive_sessions("wf-1") + + assert users == [active] + repository.delete_session.assert_called_with("wf-1", "sid-2") + + def test_is_session_active_guard_branches(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + collaboration_service, repository, socketio = service + socketio.manager.is_connected.return_value = True + repository.session_exists.return_value = True + repository.sid_mapping_exists.return_value = True + + assert collaboration_service.is_session_active("wf-1", "") is False + + socketio.manager.is_connected.return_value = False + assert collaboration_service.is_session_active("wf-1", "sid-1") is False + + socketio.manager.is_connected.side_effect = AttributeError("missing manager") + assert collaboration_service.is_session_active("wf-1", "sid-1") is False + socketio.manager.is_connected.side_effect = None + + socketio.manager.is_connected.return_value = True + repository.session_exists.return_value = False + assert collaboration_service.is_session_active("wf-1", "sid-1") is False + + repository.session_exists.return_value = True + repository.sid_mapping_exists.return_value = False + assert collaboration_service.is_session_active("wf-1", "sid-1") is False diff --git a/api/tests/unit_tests/services/test_workflow_comment_service.py b/api/tests/unit_tests/services/test_workflow_comment_service.py new file mode 100644 index 0000000000..e6db068e07 --- /dev/null +++ b/api/tests/unit_tests/services/test_workflow_comment_service.py @@ -0,0 +1,578 @@ +from unittest.mock import MagicMock, Mock, patch + +import pytest +from werkzeug.exceptions import Forbidden, NotFound + +from services import workflow_comment_service as service_module +from services.workflow_comment_service import WorkflowCommentService + + +@pytest.fixture +def mock_session(monkeypatch: pytest.MonkeyPatch) -> Mock: + session = Mock() + context_manager = MagicMock() + context_manager.__enter__.return_value = session + context_manager.__exit__.return_value = False + mock_db = MagicMock() + mock_db.engine = Mock() + empty_scalars = Mock() + empty_scalars.all.return_value = [] + session.scalars.return_value = empty_scalars + monkeypatch.setattr(service_module, "Session", Mock(return_value=context_manager)) + monkeypatch.setattr(service_module, "db", mock_db) + monkeypatch.setattr(service_module.send_workflow_comment_mention_email_task, "delay", Mock()) + return session + + +def _mock_scalars(result_list: list[object]) -> Mock: + scalars = Mock() + scalars.all.return_value = result_list + return scalars + + +class TestWorkflowCommentService: + def test_validate_content_rejects_empty(self) -> None: + with pytest.raises(ValueError): + WorkflowCommentService._validate_content(" ") + + def test_validate_content_rejects_too_long(self) -> None: + with pytest.raises(ValueError): + WorkflowCommentService._validate_content("a" * 1001) + + def test_filter_valid_mentioned_user_ids_filters_by_tenant_and_preserves_order(self, mock_session: Mock) -> None: + tenant_member_1 = "123e4567-e89b-12d3-a456-426614174000" + tenant_member_2 = "123e4567-e89b-12d3-a456-426614174002" + non_tenant_member = "123e4567-e89b-12d3-a456-426614174001" + mock_session.scalars.return_value = _mock_scalars([tenant_member_1, tenant_member_2]) + + result = WorkflowCommentService._filter_valid_mentioned_user_ids( + [ + tenant_member_1, + "", + 123, # type: ignore[list-item] + tenant_member_1, + non_tenant_member, + tenant_member_2, + ], + session=mock_session, + tenant_id="tenant-1", + ) + + assert result == [ + tenant_member_1, + tenant_member_2, + ] + + def test_format_comment_excerpt_handles_short_and_long_limits(self) -> None: + assert WorkflowCommentService._format_comment_excerpt(" hello ", max_length=10) == "hello" + assert WorkflowCommentService._format_comment_excerpt("abcdefghijk", max_length=3) == "abc" + assert WorkflowCommentService._format_comment_excerpt(" abcdefghijk ", max_length=8) == "abcde..." + + def test_build_mention_email_payloads_returns_empty_for_no_candidates(self, mock_session: Mock) -> None: + assert ( + WorkflowCommentService._build_mention_email_payloads( + session=mock_session, + tenant_id="tenant-1", + app_id="app-1", + mentioner_id="user-1", + mentioned_user_ids=[], + content="hello", + ) + == [] + ) + assert ( + WorkflowCommentService._build_mention_email_payloads( + session=mock_session, + tenant_id="tenant-1", + app_id="app-1", + mentioner_id="user-1", + mentioned_user_ids=["user-1"], + content="hello", + ) + == [] + ) + + def test_dispatch_mention_emails_enqueues_each_payload(self) -> None: + delay_mock = Mock() + with patch.object(service_module.send_workflow_comment_mention_email_task, "delay", delay_mock): + WorkflowCommentService._dispatch_mention_emails( + [ + {"to": "a@example.com"}, + {"to": "b@example.com"}, + ] + ) + + assert delay_mock.call_count == 2 + + def test_build_mention_email_payloads_skips_accounts_without_email(self, mock_session: Mock) -> None: + account_without_email = Mock() + account_without_email.email = None + account_without_email.name = "No Email" + account_without_email.interface_language = "en-US" + + account_with_email = Mock() + account_with_email.email = "user@example.com" + account_with_email.name = "" + account_with_email.interface_language = None + + mock_session.scalar.side_effect = ["My App", "Commenter"] + mock_session.scalars.return_value = _mock_scalars([account_without_email, account_with_email]) + + payloads = WorkflowCommentService._build_mention_email_payloads( + session=mock_session, + tenant_id="tenant-1", + app_id="app-1", + mentioner_id="user-1", + mentioned_user_ids=["user-2"], + content="hello", + ) + expected_app_url = f"{service_module.dify_config.CONSOLE_WEB_URL.rstrip('/')}/app/app-1/workflow" + + assert payloads == [ + { + "language": "en-US", + "to": "user@example.com", + "mentioned_name": "user@example.com", + "commenter_name": "Commenter", + "app_name": "My App", + "comment_content": "hello", + "app_url": expected_app_url, + } + ] + + def test_create_comment_creates_mentions(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_at = "ts" + + with ( + patch.object(service_module, "WorkflowComment", return_value=comment), + patch.object(service_module, "WorkflowCommentMention", return_value=Mock()), + patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]), + ): + result = WorkflowCommentService.create_comment( + tenant_id="tenant-1", + app_id="app-1", + created_by="user-1", + content="hello", + position_x=1.0, + position_y=2.0, + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "comment-1", "created_at": "ts"} + assert mock_session.add.call_args_list[0].args[0] is comment + assert mock_session.add.call_count == 2 + mock_session.commit.assert_called_once() + + def test_update_comment_raises_not_found(self, mock_session: Mock) -> None: + mock_session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="user-1", + content="hello", + ) + + def test_update_comment_raises_forbidden(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + with pytest.raises(Forbidden): + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="intruder", + content="hello", + ) + + def test_update_comment_replaces_mentions(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + existing_mentions = [Mock(), Mock()] + mock_session.scalars.return_value = _mock_scalars(existing_mentions) + + with patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]): + result = WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "comment-1", "updated_at": comment.updated_at} + assert mock_session.delete.call_count == 2 + assert mock_session.add.call_count == 1 + mock_session.commit.assert_called_once() + + def test_update_comment_preserves_mentions_when_mentioned_user_ids_omitted(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + with ( + patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids") as filter_mentions_mock, + patch.object(WorkflowCommentService, "_build_mention_email_payloads") as build_payloads_mock, + patch.object(WorkflowCommentService, "_dispatch_mention_emails") as dispatch_mock, + ): + result = WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + ) + + assert result == {"id": "comment-1", "updated_at": comment.updated_at} + mock_session.delete.assert_not_called() + mock_session.add.assert_not_called() + filter_mentions_mock.assert_not_called() + build_payloads_mock.assert_not_called() + dispatch_mock.assert_called_once_with([]) + mock_session.commit.assert_called_once() + + def test_update_comment_clears_mentions_when_empty_list_provided(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + existing_mentions = [Mock(), Mock()] + mock_session.scalars.return_value = _mock_scalars(existing_mentions) + + with patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=[]): + result = WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + mentioned_user_ids=[], + ) + + assert result == {"id": "comment-1", "updated_at": comment.updated_at} + assert mock_session.delete.call_count == 2 + mock_session.add.assert_not_called() + mock_session.commit.assert_called_once() + + def test_update_comment_notifies_only_new_mentions(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + existing_mention = Mock() + existing_mention.mentioned_user_id = "user-2" + mock_session.scalars.return_value = _mock_scalars([existing_mention]) + + with ( + patch.object( + WorkflowCommentService, + "_filter_valid_mentioned_user_ids", + return_value=["user-2", "user-3"], + ), + patch.object( + WorkflowCommentService, + "_build_mention_email_payloads", + return_value=[], + ) as build_payloads_mock, + patch.object(WorkflowCommentService, "_dispatch_mention_emails") as dispatch_mock, + ): + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + mentioned_user_ids=["user-2", "user-3"], + ) + + assert build_payloads_mock.call_args.kwargs["mentioned_user_ids"] == ["user-3"] + dispatch_mock.assert_called_once_with([]) + + def test_get_comments_preloads_related_accounts(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "user-1" + comment.resolved_by = "user-2" + reply = Mock() + reply.created_by = "user-3" + mention = Mock() + mention.mentioned_user_id = "user-4" + comment.replies = [reply] + comment.mentions = [mention] + comment.cache_created_by_account = Mock() + comment.cache_resolved_by_account = Mock() + reply.cache_created_by_account = Mock() + mention.cache_mentioned_user_account = Mock() + + account_1 = Mock() + account_1.id = "user-1" + account_2 = Mock() + account_2.id = "user-2" + account_3 = Mock() + account_3.id = "user-3" + account_4 = Mock() + account_4.id = "user-4" + + mock_session.scalars.side_effect = [ + _mock_scalars([comment]), + _mock_scalars([account_1, account_2, account_3, account_4]), + ] + + result = WorkflowCommentService.get_comments("tenant-1", "app-1") + + assert result == [comment] + comment.cache_created_by_account.assert_called_once_with(account_1) + comment.cache_resolved_by_account.assert_called_once_with(account_2) + reply.cache_created_by_account.assert_called_once_with(account_3) + mention.cache_mentioned_user_account.assert_called_once_with(account_4) + + def test_preload_accounts_returns_early_for_empty_comments(self, mock_session: Mock) -> None: + WorkflowCommentService._preload_accounts(mock_session, []) + + mock_session.scalars.assert_not_called() + + def test_get_comment_raises_not_found_with_provided_session(self) -> None: + session = Mock() + session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.get_comment("tenant-1", "app-1", "comment-1", session=session) + + def test_get_comment_uses_context_manager_when_session_not_provided(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "user-1" + comment.resolved_by = None + comment.replies = [] + comment.mentions = [] + comment.cache_created_by_account = Mock() + comment.cache_resolved_by_account = Mock() + mock_session.scalar.return_value = comment + mock_session.scalars.return_value = _mock_scalars([]) + + result = WorkflowCommentService.get_comment("tenant-1", "app-1", "comment-1") + + assert result is comment + comment.cache_created_by_account.assert_called_once() + comment.cache_resolved_by_account.assert_called_once_with(None) + + def test_delete_comment_raises_forbidden(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "owner" + + with patch.object(WorkflowCommentService, "get_comment", return_value=comment): + with pytest.raises(Forbidden): + WorkflowCommentService.delete_comment("tenant-1", "app-1", "comment-1", "intruder") + + def test_delete_comment_removes_related_entities(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "owner" + + mentions = [Mock(), Mock()] + replies = [Mock()] + mock_session.scalars.side_effect = [_mock_scalars(mentions), _mock_scalars(replies)] + + with patch.object(WorkflowCommentService, "get_comment", return_value=comment): + WorkflowCommentService.delete_comment("tenant-1", "app-1", "comment-1", "owner") + + assert mock_session.delete.call_count == 4 + mock_session.commit.assert_called_once() + + def test_resolve_comment_sets_fields(self, mock_session: Mock) -> None: + comment = Mock() + comment.resolved = False + comment.resolved_at = None + comment.resolved_by = None + + with ( + patch.object(WorkflowCommentService, "get_comment", return_value=comment), + patch.object(service_module, "naive_utc_now", return_value="now"), + ): + result = WorkflowCommentService.resolve_comment("tenant-1", "app-1", "comment-1", "user-1") + + assert result is comment + assert comment.resolved is True + assert comment.resolved_at == "now" + assert comment.resolved_by == "user-1" + mock_session.commit.assert_called_once() + + def test_resolve_comment_noop_when_already_resolved(self, mock_session: Mock) -> None: + comment = Mock() + comment.resolved = True + + with patch.object(WorkflowCommentService, "get_comment", return_value=comment): + result = WorkflowCommentService.resolve_comment("tenant-1", "app-1", "comment-1", "user-1") + + assert result is comment + mock_session.commit.assert_not_called() + + def test_create_reply_requires_comment(self, mock_session: Mock) -> None: + mock_session.get.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.create_reply("comment-1", "hello", "user-1") + + def test_create_reply_creates_mentions(self, mock_session: Mock) -> None: + mock_session.get.return_value = Mock() + reply = Mock() + reply.id = "reply-1" + reply.created_at = "ts" + + with ( + patch.object(service_module, "WorkflowCommentReply", return_value=reply), + patch.object(service_module, "WorkflowCommentMention", return_value=Mock()), + patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]), + ): + result = WorkflowCommentService.create_reply( + comment_id="comment-1", + content="hello", + created_by="user-1", + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "reply-1", "created_at": "ts"} + assert mock_session.add.call_count == 2 + mock_session.commit.assert_called_once() + + def test_update_reply_raises_not_found(self, mock_session: Mock) -> None: + mock_session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.update_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="user-1", + content="hello", + ) + + def test_update_reply_raises_forbidden(self, mock_session: Mock) -> None: + reply = Mock() + reply.created_by = "owner" + mock_session.scalar.return_value = reply + + with pytest.raises(Forbidden): + WorkflowCommentService.update_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="intruder", + content="hello", + ) + + def test_update_reply_replaces_mentions(self, mock_session: Mock) -> None: + reply = Mock() + reply.id = "reply-1" + reply.comment_id = "comment-1" + reply.created_by = "owner" + reply.updated_at = "updated" + mock_session.scalar.return_value = reply + mock_session.scalars.return_value = _mock_scalars([Mock()]) + comment = Mock() + comment.tenant_id = "tenant-1" + comment.app_id = "app-1" + mock_session.get.return_value = comment + + with patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]): + result = WorkflowCommentService.update_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="owner", + content="new", + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "reply-1", "updated_at": "updated"} + assert mock_session.delete.call_count == 1 + assert mock_session.add.call_count == 1 + mock_session.commit.assert_called_once() + mock_session.refresh.assert_called_once_with(reply) + + def test_update_comment_updates_position_coordinates_when_provided(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + comment.position_x = 1.0 + comment.position_y = 2.0 + mock_session.scalar.return_value = comment + mock_session.scalars.return_value = _mock_scalars([]) + + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + position_x=10.5, + position_y=20.5, + mentioned_user_ids=[], + ) + + assert comment.position_x == 10.5 + assert comment.position_y == 20.5 + + def test_delete_reply_raises_forbidden(self, mock_session: Mock) -> None: + reply = Mock() + reply.created_by = "owner" + mock_session.scalar.return_value = reply + + with pytest.raises(Forbidden): + WorkflowCommentService.delete_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="intruder", + ) + + def test_delete_reply_raises_not_found(self, mock_session: Mock) -> None: + mock_session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.delete_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="owner", + ) + + def test_delete_reply_removes_mentions(self, mock_session: Mock) -> None: + reply = Mock() + reply.created_by = "owner" + mock_session.scalar.return_value = reply + mock_session.scalars.return_value = _mock_scalars([Mock(), Mock()]) + + WorkflowCommentService.delete_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="owner", + ) + + assert mock_session.delete.call_count == 3 + mock_session.commit.assert_called_once() + + def test_validate_comment_access_delegates_to_get_comment(self) -> None: + comment = Mock() + with patch.object(WorkflowCommentService, "get_comment", return_value=comment) as get_comment_mock: + result = WorkflowCommentService.validate_comment_access("comment-1", "tenant-1", "app-1") + + assert result is comment + get_comment_mock.assert_called_once_with("tenant-1", "app-1", "comment-1") diff --git a/api/tests/unit_tests/services/test_workflow_run_service.py b/api/tests/unit_tests/services/test_workflow_run_service.py new file mode 100644 index 0000000000..03471389a6 --- /dev/null +++ b/api/tests/unit_tests/services/test_workflow_run_service.py @@ -0,0 +1,262 @@ +from types import SimpleNamespace +from typing import Any, cast +from unittest.mock import MagicMock + +import pytest +from sqlalchemy import Engine + +from models import Account, App, EndUser, WorkflowRunTriggeredFrom +from services import workflow_run_service as service_module +from services.workflow_run_service import WorkflowRunService + + +@pytest.fixture +def repository_factory_mocks(monkeypatch: pytest.MonkeyPatch) -> tuple[MagicMock, MagicMock, Any]: + node_repo = MagicMock() + workflow_run_repo = MagicMock() + factory = SimpleNamespace( + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + return node_repo, workflow_run_repo, factory + + +def _app_model(**kwargs: Any) -> App: + return cast(App, SimpleNamespace(**kwargs)) + + +def _account(**kwargs: Any) -> Account: + return cast(Account, SimpleNamespace(**kwargs)) + + +def _end_user(**kwargs: Any) -> EndUser: + return cast(EndUser, SimpleNamespace(**kwargs)) + + +class TestWorkflowRunServiceInitialization: + def test___init___should_create_sessionmaker_from_db_engine_when_session_factory_missing( + self, + monkeypatch: pytest.MonkeyPatch, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + session_factory = MagicMock(name="session_factory") + sessionmaker_mock = MagicMock(return_value=session_factory) + monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) + monkeypatch.setattr(service_module, "db", SimpleNamespace(engine="db-engine")) + + service = WorkflowRunService() + + sessionmaker_mock.assert_called_once_with(bind="db-engine", expire_on_commit=False) + assert service._session_factory is session_factory + + def test___init___should_create_sessionmaker_when_engine_is_provided( + self, + monkeypatch: pytest.MonkeyPatch, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + class FakeEngine: + pass + + session_factory = MagicMock(name="session_factory") + sessionmaker_mock = MagicMock(return_value=session_factory) + monkeypatch.setattr(service_module, "Engine", FakeEngine) + monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) + engine = cast(Engine, FakeEngine()) + + service = WorkflowRunService(session_factory=engine) + + sessionmaker_mock.assert_called_once_with(bind=engine, expire_on_commit=False) + assert service._session_factory is session_factory + + def test___init___should_keep_provided_sessionmaker_and_create_repositories( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + node_repo, workflow_run_repo, factory = repository_factory_mocks + session_factory = MagicMock(name="session_factory") + + service = WorkflowRunService(session_factory=session_factory) + + assert service._session_factory is session_factory + assert service._node_execution_service_repo is node_repo + assert service._workflow_run_repo is workflow_run_repo + factory.create_api_workflow_node_execution_repository.assert_called_once_with(session_factory) + factory.create_api_workflow_run_repository.assert_called_once_with(session_factory) + + +class TestWorkflowRunServiceQueries: + def test_get_paginate_workflow_runs_should_forward_filters_and_parse_limit( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + _, workflow_run_repo, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + expected = MagicMock(name="pagination") + workflow_run_repo.get_paginated_workflow_runs.return_value = expected + args = {"limit": "7", "last_id": "last-1", "status": "succeeded"} + + result = service.get_paginate_workflow_runs( + app_model=app_model, + args=args, + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + ) + + assert result is expected + workflow_run_repo.get_paginated_workflow_runs.assert_called_once_with( + tenant_id="tenant-1", + app_id="app-1", + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + limit=7, + last_id="last-1", + status="succeeded", + ) + + def test_get_paginate_advanced_chat_workflow_runs_should_attach_message_fields_when_message_exists( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + run_with_message = SimpleNamespace( + id="run-1", + status="running", + message=SimpleNamespace(id="msg-1", conversation_id="conv-1"), + ) + run_without_message = SimpleNamespace(id="run-2", status="succeeded", message=None) + pagination = SimpleNamespace(data=[run_with_message, run_without_message]) + monkeypatch.setattr(service, "get_paginate_workflow_runs", MagicMock(return_value=pagination)) + + result = service.get_paginate_advanced_chat_workflow_runs(app_model=app_model, args={"limit": "2"}) + + assert result is pagination + assert len(result.data) == 2 + assert result.data[0].message_id == "msg-1" + assert result.data[0].conversation_id == "conv-1" + assert result.data[0].status == "running" + assert not hasattr(result.data[1], "message_id") + assert result.data[1].id == "run-2" + + def test_get_workflow_run_should_delegate_to_repository_by_tenant_and_app( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + _, workflow_run_repo, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + expected = MagicMock(name="workflow_run") + workflow_run_repo.get_workflow_run_by_id.return_value = expected + + result = service.get_workflow_run(app_model=app_model, run_id="run-1") + + assert result is expected + workflow_run_repo.get_workflow_run_by_id.assert_called_once_with( + tenant_id="tenant-1", + app_id="app-1", + run_id="run-1", + ) + + def test_get_workflow_runs_count_should_forward_optional_filters( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + _, workflow_run_repo, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + expected = {"total": 3, "succeeded": 2} + workflow_run_repo.get_workflow_runs_count.return_value = expected + + result = service.get_workflow_runs_count( + app_model=app_model, + status="succeeded", + time_range="7d", + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + ) + + assert result == expected + workflow_run_repo.get_workflow_runs_count.assert_called_once_with( + tenant_id="tenant-1", + app_id="app-1", + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + status="succeeded", + time_range="7d", + ) + + def test_get_workflow_run_node_executions_should_return_empty_list_when_run_not_found( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=None)) + app_model = _app_model(id="app-1") + user = _account(current_tenant_id="tenant-1") + + result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) + + assert result == [] + + def test_get_workflow_run_node_executions_should_use_end_user_tenant_id( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + node_repo, _, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) + + class FakeEndUser: + def __init__(self, tenant_id: str) -> None: + self.tenant_id = tenant_id + + monkeypatch.setattr(service_module, "EndUser", FakeEndUser) + user = cast(EndUser, FakeEndUser(tenant_id="tenant-end-user")) + app_model = _app_model(id="app-1") + expected = [SimpleNamespace(id="exec-1")] + node_repo.get_executions_by_workflow_run.return_value = expected + + result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) + + assert result == expected + node_repo.get_executions_by_workflow_run.assert_called_once_with( + tenant_id="tenant-end-user", + app_id="app-1", + workflow_run_id="run-1", + ) + + def test_get_workflow_run_node_executions_should_use_account_current_tenant_id( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + node_repo, _, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) + app_model = _app_model(id="app-1") + user = _account(current_tenant_id="tenant-account") + expected = [SimpleNamespace(id="exec-1"), SimpleNamespace(id="exec-2")] + node_repo.get_executions_by_workflow_run.return_value = expected + + result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) + + assert result == expected + node_repo.get_executions_by_workflow_run.assert_called_once_with( + tenant_id="tenant-account", + app_id="app-1", + workflow_run_id="run-1", + ) + + def test_get_workflow_run_node_executions_should_raise_when_resolved_tenant_id_is_none( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) + app_model = _app_model(id="app-1") + user = _account(current_tenant_id=None) + + with pytest.raises(ValueError, match="tenant_id cannot be None"): + service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) diff --git a/api/tests/unit_tests/services/test_workflow_run_service_pause.py b/api/tests/unit_tests/services/test_workflow_run_service_pause.py index 64b21317ab..239cc83518 100644 --- a/api/tests/unit_tests/services/test_workflow_run_service_pause.py +++ b/api/tests/unit_tests/services/test_workflow_run_service_pause.py @@ -13,10 +13,10 @@ from datetime import datetime from unittest.mock import MagicMock, create_autospec, patch import pytest -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import Engine from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowExecutionStatus from models.workflow import WorkflowPause from repositories.api_workflow_run_repository import APIWorkflowRunRepository from repositories.sqlalchemy_api_workflow_run_repository import _PrivateWorkflowPauseEntity @@ -176,300 +176,3 @@ class TestWorkflowRunService: service = WorkflowRunService(session_factory) assert service._session_factory == session_factory - - -# === Merged from test_workflow_run_service.py === - - -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest - -from models import Account, App, EndUser, WorkflowRunTriggeredFrom -from services import workflow_run_service as service_module -from services.workflow_run_service import WorkflowRunService - - -@pytest.fixture -def repository_factory_mocks(monkeypatch: pytest.MonkeyPatch) -> tuple[MagicMock, MagicMock, Any]: - # Arrange - node_repo = MagicMock() - workflow_run_repo = MagicMock() - factory = SimpleNamespace( - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - - # Assert - return node_repo, workflow_run_repo, factory - - -def _app_model(**kwargs: Any) -> App: - return cast(App, SimpleNamespace(**kwargs)) - - -def _account(**kwargs: Any) -> Account: - return cast(Account, SimpleNamespace(**kwargs)) - - -def _end_user(**kwargs: Any) -> EndUser: - return cast(EndUser, SimpleNamespace(**kwargs)) - - -def test___init___should_create_sessionmaker_from_db_engine_when_session_factory_missing( - monkeypatch: pytest.MonkeyPatch, - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - session_factory = MagicMock(name="session_factory") - sessionmaker_mock = MagicMock(return_value=session_factory) - monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) - monkeypatch.setattr(service_module, "db", SimpleNamespace(engine="db-engine")) - - # Act - service = WorkflowRunService() - - # Assert - sessionmaker_mock.assert_called_once_with(bind="db-engine", expire_on_commit=False) - assert service._session_factory is session_factory - - -def test___init___should_create_sessionmaker_when_engine_is_provided( - monkeypatch: pytest.MonkeyPatch, - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - class FakeEngine: - pass - - session_factory = MagicMock(name="session_factory") - sessionmaker_mock = MagicMock(return_value=session_factory) - monkeypatch.setattr(service_module, "Engine", FakeEngine) - monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) - engine = cast(Engine, FakeEngine()) - - # Act - service = WorkflowRunService(session_factory=engine) - - # Assert - sessionmaker_mock.assert_called_once_with(bind=engine, expire_on_commit=False) - assert service._session_factory is session_factory - - -def test___init___should_keep_provided_sessionmaker_and_create_repositories( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - node_repo, workflow_run_repo, factory = repository_factory_mocks - session_factory = MagicMock(name="session_factory") - - # Act - service = WorkflowRunService(session_factory=session_factory) - - # Assert - assert service._session_factory is session_factory - assert service._node_execution_service_repo is node_repo - assert service._workflow_run_repo is workflow_run_repo - factory.create_api_workflow_node_execution_repository.assert_called_once_with(session_factory) - factory.create_api_workflow_run_repository.assert_called_once_with(session_factory) - - -def test_get_paginate_workflow_runs_should_forward_filters_and_parse_limit( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - _, workflow_run_repo, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - expected = MagicMock(name="pagination") - workflow_run_repo.get_paginated_workflow_runs.return_value = expected - args = {"limit": "7", "last_id": "last-1", "status": "succeeded"} - - # Act - result = service.get_paginate_workflow_runs( - app_model=app_model, - args=args, - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - ) - - # Assert - assert result is expected - workflow_run_repo.get_paginated_workflow_runs.assert_called_once_with( - tenant_id="tenant-1", - app_id="app-1", - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - limit=7, - last_id="last-1", - status="succeeded", - ) - - -def test_get_paginate_advanced_chat_workflow_runs_should_attach_message_fields_when_message_exists( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - run_with_message = SimpleNamespace( - id="run-1", - status="running", - message=SimpleNamespace(id="msg-1", conversation_id="conv-1"), - ) - run_without_message = SimpleNamespace(id="run-2", status="succeeded", message=None) - pagination = SimpleNamespace(data=[run_with_message, run_without_message]) - monkeypatch.setattr(service, "get_paginate_workflow_runs", MagicMock(return_value=pagination)) - - # Act - result = service.get_paginate_advanced_chat_workflow_runs(app_model=app_model, args={"limit": "2"}) - - # Assert - assert result is pagination - assert len(result.data) == 2 - assert result.data[0].message_id == "msg-1" - assert result.data[0].conversation_id == "conv-1" - assert result.data[0].status == "running" - assert not hasattr(result.data[1], "message_id") - assert result.data[1].id == "run-2" - - -def test_get_workflow_run_should_delegate_to_repository_by_tenant_and_app( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - _, workflow_run_repo, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - expected = MagicMock(name="workflow_run") - workflow_run_repo.get_workflow_run_by_id.return_value = expected - - # Act - result = service.get_workflow_run(app_model=app_model, run_id="run-1") - - # Assert - assert result is expected - workflow_run_repo.get_workflow_run_by_id.assert_called_once_with( - tenant_id="tenant-1", - app_id="app-1", - run_id="run-1", - ) - - -def test_get_workflow_runs_count_should_forward_optional_filters( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - _, workflow_run_repo, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - expected = {"total": 3, "succeeded": 2} - workflow_run_repo.get_workflow_runs_count.return_value = expected - - # Act - result = service.get_workflow_runs_count( - app_model=app_model, - status="succeeded", - time_range="7d", - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - ) - - # Assert - assert result == expected - workflow_run_repo.get_workflow_runs_count.assert_called_once_with( - tenant_id="tenant-1", - app_id="app-1", - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - status="succeeded", - time_range="7d", - ) - - -def test_get_workflow_run_node_executions_should_return_empty_list_when_run_not_found( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=None)) - app_model = _app_model(id="app-1") - user = _account(current_tenant_id="tenant-1") - - # Act - result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) - - # Assert - assert result == [] - - -def test_get_workflow_run_node_executions_should_use_end_user_tenant_id( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - node_repo, _, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) - - class FakeEndUser: - def __init__(self, tenant_id: str) -> None: - self.tenant_id = tenant_id - - monkeypatch.setattr(service_module, "EndUser", FakeEndUser) - user = cast(EndUser, FakeEndUser(tenant_id="tenant-end-user")) - app_model = _app_model(id="app-1") - expected = [SimpleNamespace(id="exec-1")] - node_repo.get_executions_by_workflow_run.return_value = expected - - # Act - result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) - - # Assert - assert result == expected - node_repo.get_executions_by_workflow_run.assert_called_once_with( - tenant_id="tenant-end-user", - app_id="app-1", - workflow_run_id="run-1", - ) - - -def test_get_workflow_run_node_executions_should_use_account_current_tenant_id( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - node_repo, _, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) - app_model = _app_model(id="app-1") - user = _account(current_tenant_id="tenant-account") - expected = [SimpleNamespace(id="exec-1"), SimpleNamespace(id="exec-2")] - node_repo.get_executions_by_workflow_run.return_value = expected - - # Act - result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) - - # Assert - assert result == expected - node_repo.get_executions_by_workflow_run.assert_called_once_with( - tenant_id="tenant-account", - app_id="app-1", - workflow_run_id="run-1", - ) - - -def test_get_workflow_run_node_executions_should_raise_when_resolved_tenant_id_is_none( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) - app_model = _app_model(id="app-1") - user = _account(current_tenant_id=None) - - # Act / Assert - with pytest.raises(ValueError, match="tenant_id cannot be None"): - service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) diff --git a/api/tests/unit_tests/services/test_workflow_service.py b/api/tests/unit_tests/services/test_workflow_service.py index 1b253eb2f1..287f5f2e5e 100644 --- a/api/tests/unit_tests/services/test_workflow_service.py +++ b/api/tests/unit_tests/services/test_workflow_service.py @@ -12,9 +12,10 @@ This test suite covers: import json import uuid from typing import Any, cast -from unittest.mock import ANY, MagicMock, patch +from unittest.mock import ANY, MagicMock, Mock, patch import pytest + from graphon.entities import WorkflowNodeExecution from graphon.enums import ( BuiltinNodeTypes, @@ -28,7 +29,6 @@ from graphon.model_runtime.entities.model_entities import ModelType from graphon.node_events import NodeRunResult from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, HttpRequestNode, HttpRequestNodeConfig from graphon.variables.input_entities import VariableEntityType - from libs.datetime_utils import naive_utc_now from models.human_input import RecipientType from models.model import App, AppMode @@ -94,8 +94,8 @@ class TestWorkflowAssociatedDataFactory: app_id: str = "app-123", version: str = Workflow.VERSION_DRAFT, workflow_type: str = WorkflowType.WORKFLOW.value, - graph: dict | None = None, - features: dict | None = None, + graph: dict[str, Any] | None = None, + features: dict[str, Any] | None = None, unique_hash: str | None = None, **kwargs, ) -> MagicMock: @@ -713,6 +713,79 @@ class TestWorkflowService: with pytest.raises(ValueError, match="Invalid app mode"): workflow_service.validate_features_structure(app, features) + # ==================== Draft Workflow Variable Update Tests ==================== + # These tests verify updating draft workflow environment/conversation variables + + def test_update_draft_workflow_environment_variables_updates_workflow(self, workflow_service, mock_db_session): + """Test update_draft_workflow_environment_variables updates draft fields.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + workflow = TestWorkflowAssociatedDataFactory.create_workflow_mock() + variables = [Mock()] + + with ( + patch.object(workflow_service, "get_draft_workflow", return_value=workflow), + patch("services.workflow_service.naive_utc_now", return_value="now"), + ): + workflow_service.update_draft_workflow_environment_variables( + app_model=app, + environment_variables=variables, + account=account, + ) + + assert workflow.environment_variables == variables + assert workflow.updated_by == account.id + assert workflow.updated_at == "now" + mock_db_session.session.commit.assert_called_once() + + def test_update_draft_workflow_environment_variables_raises_when_missing(self, workflow_service): + """Test update_draft_workflow_environment_variables raises when draft missing.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + + with patch.object(workflow_service, "get_draft_workflow", return_value=None): + with pytest.raises(ValueError, match="No draft workflow found."): + workflow_service.update_draft_workflow_environment_variables( + app_model=app, + environment_variables=[], + account=account, + ) + + def test_update_draft_workflow_conversation_variables_updates_workflow(self, workflow_service, mock_db_session): + """Test update_draft_workflow_conversation_variables updates draft fields.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + workflow = TestWorkflowAssociatedDataFactory.create_workflow_mock() + variables = [Mock()] + + with ( + patch.object(workflow_service, "get_draft_workflow", return_value=workflow), + patch("services.workflow_service.naive_utc_now", return_value="now"), + ): + workflow_service.update_draft_workflow_conversation_variables( + app_model=app, + conversation_variables=variables, + account=account, + ) + + assert workflow.conversation_variables == variables + assert workflow.updated_by == account.id + assert workflow.updated_at == "now" + mock_db_session.session.commit.assert_called_once() + + def test_update_draft_workflow_conversation_variables_raises_when_missing(self, workflow_service): + """Test update_draft_workflow_conversation_variables raises when draft missing.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + + with patch.object(workflow_service, "get_draft_workflow", return_value=None): + with pytest.raises(ValueError, match="No draft workflow found."): + workflow_service.update_draft_workflow_conversation_variables( + app_model=app, + conversation_variables=[], + account=account, + ) + # ==================== Publish Workflow Tests ==================== # These tests verify creating published versions from draft workflows @@ -969,8 +1042,7 @@ class TestWorkflowService: # 1. Workflow exists # 2. No app is currently using it # 3. Not published as a tool - mock_session.scalar.side_effect = [mock_workflow, None] # workflow exists, no app using it - mock_session.query.return_value.where.return_value.first.return_value = None # no tool provider + mock_session.scalar.side_effect = [mock_workflow, None, None] # workflow, no app using it, no tool provider with patch("services.workflow_service.select") as mock_select: mock_stmt = MagicMock() @@ -1045,8 +1117,7 @@ class TestWorkflowService: mock_tool_provider = MagicMock() mock_session = MagicMock() - mock_session.scalar.side_effect = [mock_workflow, None] # workflow exists, no app using it - mock_session.query.return_value.where.return_value.first.return_value = mock_tool_provider + mock_session.scalar.side_effect = [mock_workflow, None, mock_tool_provider] # workflow, no app, tool provider with patch("services.workflow_service.select") as mock_select: mock_stmt = MagicMock() @@ -1688,7 +1759,7 @@ class TestWorkflowServiceCredentialValidation: """Missing provider or model in node_data should be a no-op.""" # Arrange workflow = self._make_workflow([]) - node_data: dict = {} # no model key + node_data: dict[str, Any] = {} # no model key # Act + Assert (no error expected) service._validate_load_balancing_credentials(workflow, node_data, "node-1") @@ -2271,7 +2342,7 @@ class TestRebuildFileForUserInputsInStartNode: # Arrange file_var = self._make_variable("attachment", VariableEntityType.FILE) start_data = self._make_start_node_data([file_var]) - user_inputs: dict = {} # attachment not provided + user_inputs: dict[str, Any] = {} # attachment not provided # Act result = _rebuild_file_for_user_inputs_in_start_node( @@ -2753,9 +2824,9 @@ class TestWorkflowServiceFreeNodeExecution: variable_pool = MagicMock() with ( - patch("services.workflow_service.GraphInitParams") as mock_graph_init_params, + patch("services.workflow_service.DifyGraphInitContext") as mock_graph_init_context_cls, patch("services.workflow_service.GraphRuntimeState"), - patch("services.workflow_service.build_dify_run_context"), + patch("services.workflow_service.build_dify_run_context") as mock_build_dify_run_context, patch("services.workflow_service.DifyHumanInputNodeRuntime") as mock_runtime_cls, patch("services.workflow_service.HumanInputNode") as mock_node_cls, ): @@ -2764,4 +2835,17 @@ class TestWorkflowServiceFreeNodeExecution: ) assert node == mock_node_cls.return_value mock_node_cls.assert_called_once() - mock_runtime_cls.assert_called_once_with(mock_graph_init_params.return_value.run_context) + mock_graph_init_context_cls.assert_called_once_with( + workflow_id="wf-1", + graph_config=workflow.graph_dict, + run_context=mock_build_dify_run_context.return_value, + call_depth=0, + ) + mock_runtime_cls.assert_called_once_with(mock_build_dify_run_context.return_value) + mock_node_cls.assert_called_once_with( + id="n-1", + config=node_config, + graph_init_params=mock_graph_init_context_cls.return_value.to_graph_init_params.return_value, + graph_runtime_state=ANY, + runtime=mock_runtime_cls.return_value, + ) diff --git a/api/tests/unit_tests/services/tools/test_builtin_tools_manage_service.py b/api/tests/unit_tests/services/tools/test_builtin_tools_manage_service.py index 175900071b..79a2d30f57 100644 --- a/api/tests/unit_tests/services/tools/test_builtin_tools_manage_service.py +++ b/api/tests/unit_tests/services/tools/test_builtin_tools_manage_service.py @@ -15,17 +15,24 @@ def _mock_session(mock_session_cls): return session +def _mock_sessionmaker(mock_sm_cls): + """Helper: set up a sessionmaker().begin() context manager mock and return the inner session.""" + session = MagicMock() + mock_sm_cls.return_value.begin.return_value.__enter__ = MagicMock(return_value=session) + mock_sm_cls.return_value.begin.return_value.__exit__ = MagicMock(return_value=False) + return session + + class TestDeleteCustomOauthClientParams: - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_deletes_and_returns_success(self, mock_db, mock_session_cls): - session = _mock_session(mock_session_cls) + def test_deletes_and_returns_success(self, mock_db, mock_sm_cls): + session = _mock_sessionmaker(mock_sm_cls) result = BuiltinToolManageService.delete_custom_oauth_client_params("tenant-1", "google") assert result == {"result": "success"} - session.query.return_value.filter_by.return_value.delete.assert_called_once() - session.commit.assert_called_once() + session.execute.assert_called_once() class TestListBuiltinToolProviderTools: @@ -104,7 +111,7 @@ class TestIsOauthSystemClientExists: @patch(f"{MODULE}.db") def test_true_when_exists(self, mock_db, mock_session_cls): session = _mock_session(mock_session_cls) - session.query.return_value.filter_by.return_value.first.return_value = MagicMock() + session.scalar.return_value = MagicMock() assert BuiltinToolManageService.is_oauth_system_client_exists("google") is True @@ -112,7 +119,7 @@ class TestIsOauthSystemClientExists: @patch(f"{MODULE}.db") def test_false_when_missing(self, mock_db, mock_session_cls): session = _mock_session(mock_session_cls) - session.query.return_value.filter_by.return_value.first.return_value = None + session.scalar.return_value = None assert BuiltinToolManageService.is_oauth_system_client_exists("google") is False @@ -122,7 +129,7 @@ class TestIsOauthCustomClientEnabled: @patch(f"{MODULE}.db") def test_true_when_enabled(self, mock_db, mock_session_cls): session = _mock_session(mock_session_cls) - session.query.return_value.filter_by.return_value.first.return_value = MagicMock(enabled=True) + session.scalar.return_value = MagicMock(enabled=True) assert BuiltinToolManageService.is_oauth_custom_client_enabled("t", "g") is True @@ -130,7 +137,7 @@ class TestIsOauthCustomClientEnabled: @patch(f"{MODULE}.db") def test_false_when_none(self, mock_db, mock_session_cls): session = _mock_session(mock_session_cls) - session.query.return_value.filter_by.return_value.first.return_value = None + session.scalar.return_value = None assert BuiltinToolManageService.is_oauth_custom_client_enabled("t", "g") is False @@ -138,23 +145,23 @@ class TestIsOauthCustomClientEnabled: class TestDeleteBuiltinToolProvider: @patch(f"{MODULE}.BuiltinToolManageService.create_tool_encrypter") @patch(f"{MODULE}.ToolManager") - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_raises_when_not_found(self, mock_db, mock_session_cls, mock_tm, mock_enc): - session = _mock_session(mock_session_cls) - session.query.return_value.where.return_value.first.return_value = None + def test_raises_when_not_found(self, mock_db, mock_sm_cls, mock_tm, mock_enc): + session = _mock_sessionmaker(mock_sm_cls) + session.scalar.return_value = None with pytest.raises(ValueError, match="you have not added provider"): BuiltinToolManageService.delete_builtin_tool_provider("t", "p", "id") @patch(f"{MODULE}.BuiltinToolManageService.create_tool_encrypter") @patch(f"{MODULE}.ToolManager") - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_deletes_provider_and_clears_cache(self, mock_db, mock_session_cls, mock_tm, mock_enc): - session = _mock_session(mock_session_cls) + def test_deletes_provider_and_clears_cache(self, mock_db, mock_sm_cls, mock_tm, mock_enc): + session = _mock_sessionmaker(mock_sm_cls) db_provider = MagicMock() - session.query.return_value.where.return_value.first.return_value = db_provider + session.scalar.return_value = db_provider mock_cache = MagicMock() mock_enc.return_value = (MagicMock(), mock_cache) @@ -162,40 +169,38 @@ class TestDeleteBuiltinToolProvider: assert result == {"result": "success"} session.delete.assert_called_once_with(db_provider) - session.commit.assert_called_once() mock_cache.delete.assert_called_once() class TestSetDefaultProvider: - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_raises_when_not_found(self, mock_db, mock_session_cls): - session = _mock_session(mock_session_cls) - session.query.return_value.filter_by.return_value.first.return_value = None + def test_raises_when_not_found(self, mock_db, mock_sm_cls): + session = _mock_sessionmaker(mock_sm_cls) + session.scalar.return_value = None with pytest.raises(ValueError, match="provider not found"): BuiltinToolManageService.set_default_provider("t", "u", "p", "id") - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_sets_default_and_clears_old(self, mock_db, mock_session_cls): - session = _mock_session(mock_session_cls) + def test_sets_default_and_clears_old(self, mock_db, mock_sm_cls): + session = _mock_sessionmaker(mock_sm_cls) target = MagicMock() - session.query.return_value.filter_by.return_value.first.return_value = target + session.scalar.return_value = target result = BuiltinToolManageService.set_default_provider("t", "u", "p", "id") assert result == {"result": "success"} assert target.is_default is True - session.commit.assert_called_once() class TestUpdateBuiltinToolProvider: - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_raises_when_provider_not_exists(self, mock_db, mock_session_cls): - session = _mock_session(mock_session_cls) - session.query.return_value.where.return_value.first.return_value = None + def test_raises_when_provider_not_exists(self, mock_db, mock_sm_cls): + session = _mock_sessionmaker(mock_sm_cls) + session.scalar.return_value = None with pytest.raises(ValueError, match="you have not added provider"): BuiltinToolManageService.update_builtin_tool_provider("u", "t", "p", "c") @@ -203,12 +208,12 @@ class TestUpdateBuiltinToolProvider: @patch(f"{MODULE}.BuiltinToolManageService.create_tool_encrypter") @patch(f"{MODULE}.CredentialType") @patch(f"{MODULE}.ToolManager") - @patch(f"{MODULE}.Session") + @patch(f"{MODULE}.sessionmaker") @patch(f"{MODULE}.db") - def test_updates_credentials_and_commits(self, mock_db, mock_session_cls, mock_tm, mock_cred_type, mock_enc): - session = _mock_session(mock_session_cls) + def test_updates_credentials_and_commits(self, mock_db, mock_sm_cls, mock_tm, mock_cred_type, mock_enc): + session = _mock_sessionmaker(mock_sm_cls) db_provider = MagicMock(credential_type="api_key", credentials="{}") - session.query.return_value.where.return_value.first.return_value = db_provider + session.scalar.return_value = db_provider mock_cred_instance = MagicMock() mock_cred_instance.is_editable.return_value = True @@ -227,7 +232,6 @@ class TestUpdateBuiltinToolProvider: result = BuiltinToolManageService.update_builtin_tool_provider("u", "t", "p", "c", credentials={"key": "val"}) assert result == {"result": "success"} - session.commit.assert_called_once() mock_cache.delete.assert_called_once() @@ -270,7 +274,7 @@ class TestGetOauthClient: mock_create_enc.return_value = (mock_encrypter, MagicMock()) user_client = MagicMock(oauth_params='{"encrypted": "data"}') - session.query.return_value.filter_by.return_value.first.return_value = user_client + session.scalar.return_value = user_client result = BuiltinToolManageService.get_oauth_client("t", "google") @@ -293,10 +297,7 @@ class TestGetOauthClient: mock_create_enc.return_value = (MagicMock(), MagicMock()) system_client = MagicMock(encrypted_oauth_params="enc") - session.query.return_value.filter_by.return_value.first.side_effect = [ - None, # user client - system_client, # system client - ] + session.scalar.side_effect = [None, system_client] result = BuiltinToolManageService.get_oauth_client("t", "google") @@ -321,7 +322,7 @@ class TestGetCustomOauthClientParams: @patch(f"{MODULE}.db") def test_returns_empty_when_none(self, mock_db, mock_session_cls): session = _mock_session(mock_session_cls) - session.query.return_value.filter_by.return_value.first.return_value = None + session.scalar.return_value = None result = BuiltinToolManageService.get_custom_oauth_client_params("t", "p") @@ -387,7 +388,7 @@ class TestGetBuiltinProvider: session = _mock_session(mock_session_cls) mock_prov_id.return_value.provider_name = "google" mock_prov_id.return_value.organization = "langgenius" - session.query.return_value.where.return_value.order_by.return_value.first.return_value = None + session.scalar.return_value = None result = BuiltinToolManageService.get_builtin_provider("google", "t") @@ -413,7 +414,7 @@ class TestGetBuiltinProvider: return m mock_prov_id.side_effect = prov_id_side_effect - session.query.return_value.where.return_value.order_by.return_value.first.return_value = db_provider + session.scalar.return_value = db_provider result = BuiltinToolManageService.get_builtin_provider("google", "t") @@ -435,7 +436,7 @@ class TestGetBuiltinProvider: mock_prov_id.side_effect = prov_id_side_effect db_provider = MagicMock(provider="third-party/custom/custom-tool") - session.query.return_value.where.return_value.order_by.return_value.first.return_value = db_provider + session.scalar.return_value = db_provider result = BuiltinToolManageService.get_builtin_provider("third-party/custom/custom-tool", "t") @@ -448,7 +449,7 @@ class TestGetBuiltinProvider: session = _mock_session(mock_session_cls) mock_prov_id.side_effect = Exception("parse error") fallback = MagicMock() - session.query.return_value.where.return_value.order_by.return_value.first.return_value = fallback + session.scalar.return_value = fallback result = BuiltinToolManageService.get_builtin_provider("old-provider", "t") diff --git a/api/tests/unit_tests/services/vector_service.py b/api/tests/unit_tests/services/vector_service.py index ee9ba1c6d6..ad80beb4e3 100644 --- a/api/tests/unit_tests/services/vector_service.py +++ b/api/tests/unit_tests/services/vector_service.py @@ -114,6 +114,7 @@ This test suite follows a comprehensive testing strategy that covers: ================================================================================ """ +from typing import Any from unittest.mock import Mock, patch import pytest @@ -156,7 +157,7 @@ class VectorServiceTestDataFactory: indexing_technique: str = IndexTechniqueType.HIGH_QUALITY, embedding_model_provider: str = "openai", embedding_model: str = "text-embedding-ada-002", - index_struct_dict: dict | None = None, + index_struct_dict: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ diff --git a/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py b/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py index 8525672da8..60beec7964 100644 --- a/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py +++ b/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py @@ -4,12 +4,12 @@ import json from unittest.mock import Mock, patch import pytest -from graphon.file import File, FileTransferMethod, FileType -from graphon.variables.segments import ObjectSegment, StringSegment -from graphon.variables.types import SegmentType from sqlalchemy import Engine from core.workflow.file_reference import build_file_reference +from graphon.file import File, FileTransferMethod, FileType +from graphon.variables.segments import ObjectSegment, StringSegment +from graphon.variables.types import SegmentType from models.model import UploadFile from models.workflow import WorkflowDraftVariable, WorkflowDraftVariableFile from services.workflow_draft_variable_service import DraftVarLoader diff --git a/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py b/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py index e7e72793a3..f6bdb6a60e 100644 --- a/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py +++ b/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py @@ -4,10 +4,6 @@ import uuid from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.enums import BuiltinNodeTypes -from graphon.file import File, FileTransferMethod, FileType -from graphon.variables.segments import StringSegment -from graphon.variables.types import SegmentType from sqlalchemy import Engine from sqlalchemy.orm import Session @@ -17,6 +13,10 @@ from core.workflow.variable_prefixes import ( ENVIRONMENT_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID, ) +from graphon.enums import BuiltinNodeTypes +from graphon.file import File, FileTransferMethod, FileType +from graphon.variables.segments import StringSegment +from graphon.variables.types import SegmentType from libs.uuid_utils import uuidv7 from models.account import Account from models.enums import DraftVariableType diff --git a/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py index b8b073f75c..d570dce107 100644 --- a/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py +++ b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py @@ -3,17 +3,16 @@ import queue from collections.abc import Sequence from dataclasses import dataclass from datetime import UTC, datetime -from itertools import cycle from threading import Event import pytest -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.app_config.entities import WorkflowUIBasedAppConfig from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext, _WorkflowGenerateEntityWrapper +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool from models.enums import CreatorUserRole from models.model import AppMode from models.workflow import WorkflowRun @@ -223,577 +222,3 @@ def test_resolve_task_id_priority(context_task_id, buffered_task_id, expected) - buffer_state.task_id_ready.set() task_id = _resolve_task_id(resumption_context, buffer_state, "run-1", wait_timeout=0.0) assert task_id == expected - - -# === Merged from test_workflow_event_snapshot_service_additional.py === - - -import json -import queue -from collections.abc import Mapping -from dataclasses import dataclass -from datetime import UTC, datetime -from threading import Event -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest -from graphon.enums import WorkflowExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool -from sqlalchemy.orm import Session, sessionmaker - -from core.app.app_config.entities import WorkflowUIBasedAppConfig -from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity -from core.app.entities.task_entities import StreamEvent -from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext, _WorkflowGenerateEntityWrapper -from models.enums import CreatorUserRole -from models.model import AppMode -from models.workflow import WorkflowRun -from repositories.entities.workflow_pause import WorkflowPauseEntity -from services import workflow_event_snapshot_service as service_module -from services.workflow_event_snapshot_service import BufferState, MessageContext, build_workflow_event_stream - - -def _build_workflow_run_additional(status: WorkflowExecutionStatus = WorkflowExecutionStatus.RUNNING) -> WorkflowRun: - return WorkflowRun( - id="run-1", - tenant_id="tenant-1", - app_id="app-1", - workflow_id="workflow-1", - type="workflow", - triggered_from="app-run", - version="v1", - graph=None, - inputs=json.dumps({"query": "hello"}), - status=status, - outputs=json.dumps({}), - error=None, - elapsed_time=1.2, - total_tokens=5, - total_steps=2, - created_by_role=CreatorUserRole.END_USER, - created_by="user-1", - created_at=datetime(2024, 1, 1, tzinfo=UTC), - ) - - -def _build_resumption_context_additional(task_id: str) -> WorkflowResumptionContext: - app_config = WorkflowUIBasedAppConfig( - tenant_id="tenant-1", - app_id="app-1", - app_mode=AppMode.WORKFLOW, - workflow_id="workflow-1", - ) - generate_entity = WorkflowAppGenerateEntity( - task_id=task_id, - app_config=app_config, - inputs={}, - files=[], - user_id="user-1", - stream=True, - invoke_from=InvokeFrom.EXPLORE, - call_depth=0, - workflow_execution_id="run-1", - ) - runtime_state = GraphRuntimeState(variable_pool=VariablePool(), start_at=0.0) - runtime_state.outputs = {"answer": "ok"} - wrapper = _WorkflowGenerateEntityWrapper(entity=generate_entity) - return WorkflowResumptionContext( - generate_entity=wrapper, - serialized_graph_runtime_state=runtime_state.dumps(), - ) - - -class _SessionContext: - def __init__(self, session: Any) -> None: - self._session = session - - def __enter__(self) -> Any: - return self._session - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -class _SessionMaker: - def __init__(self, session: Any) -> None: - self._session = session - - def __call__(self) -> _SessionContext: - return _SessionContext(self._session) - - -class _SubscriptionContext: - def __init__(self, subscription: Any) -> None: - self._subscription = subscription - - def __enter__(self) -> Any: - return self._subscription - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -class _Topic: - def __init__(self, subscription: Any) -> None: - self._subscription = subscription - - def subscribe(self) -> _SubscriptionContext: - return _SubscriptionContext(self._subscription) - - -class _StaticSubscription: - def receive(self, timeout: int = 1) -> None: - return None - - -@dataclass(frozen=True) -class _PauseEntity(WorkflowPauseEntity): - state: bytes - - @property - def id(self) -> str: - return "pause-1" - - @property - def workflow_execution_id(self) -> str: - return "run-1" - - @property - def resumed_at(self) -> datetime | None: - return None - - @property - def paused_at(self) -> datetime: - return datetime(2024, 1, 1, tzinfo=UTC) - - def get_state(self) -> bytes: - return self.state - - def get_pause_reasons(self) -> list[Any]: - return [] - - -def test_get_message_context_should_return_none_when_no_message() -> None: - # Arrange - session = SimpleNamespace(scalar=MagicMock(return_value=None)) - session_maker = _SessionMaker(session) - - # Act - result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") - - # Assert - assert result is None - - -def test_get_message_context_should_default_created_at_to_zero_when_message_has_no_timestamp() -> None: - # Arrange - message = SimpleNamespace( - id="msg-1", - conversation_id="conv-1", - created_at=None, - answer="answer", - ) - session = SimpleNamespace(scalar=MagicMock(return_value=message)) - session_maker = _SessionMaker(session) - - # Act - result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") - - # Assert - assert result is not None - assert result.created_at == 0 - assert result.message_id == "msg-1" - assert result.conversation_id == "conv-1" - assert result.answer == "answer" - - -def test_load_resumption_context_should_return_none_when_pause_entity_missing() -> None: - # Arrange - - # Act - result = service_module._load_resumption_context(None) - - # Assert - assert result is None - - -def test_load_resumption_context_should_return_none_when_pause_entity_state_is_invalid() -> None: - # Arrange - pause_entity = _PauseEntity(state=b"not-a-valid-state") - - # Act - result = service_module._load_resumption_context(pause_entity) - - # Assert - assert result is None - - -def test_load_resumption_context_should_parse_valid_state_into_context() -> None: - # Arrange - context = _build_resumption_context_additional(task_id="task-ctx") - pause_entity = _PauseEntity(state=context.dumps().encode()) - - # Act - result = service_module._load_resumption_context(pause_entity) - - # Assert - assert result is not None - assert result.get_generate_entity().task_id == "task-ctx" - - -def test_resolve_task_id_should_return_workflow_run_id_when_buffer_state_is_missing() -> None: - # Arrange - - # Act - result = service_module._resolve_task_id( - resumption_context=None, - buffer_state=None, - workflow_run_id="run-1", - ) - - # Assert - assert result == "run-1" - - -@pytest.mark.parametrize( - ("payload", "expected"), - [ - (b'{"event":"node_started"}', {"event": "node_started"}), - (b"invalid-json", None), - (b"[]", None), - ], -) -def test_parse_event_message_should_parse_only_json_object( - payload: bytes, - expected: dict[str, Any] | None, -) -> None: - # Arrange - - # Act - result = service_module._parse_event_message(payload) - - # Assert - assert result == expected - - -def test_is_terminal_event_should_recognize_finished_and_optional_paused_events() -> None: - # Arrange - finished_event = {"event": StreamEvent.WORKFLOW_FINISHED.value} - paused_event = {"event": StreamEvent.WORKFLOW_PAUSED.value} - - # Act - is_finished = service_module._is_terminal_event(finished_event, include_paused=False) - paused_without_flag = service_module._is_terminal_event(paused_event, include_paused=False) - paused_with_flag = service_module._is_terminal_event(paused_event, include_paused=True) - - # Assert - assert is_finished is True - assert paused_without_flag is False - assert paused_with_flag is True - assert service_module._is_terminal_event(StreamEvent.PING.value, include_paused=True) is False - - -def test_apply_message_context_should_update_payload_when_context_exists() -> None: - # Arrange - payload: dict[str, Any] = {"event": "workflow_started"} - context = MessageContext(conversation_id="conv-1", message_id="msg-1", created_at=1700000000) - - # Act - service_module._apply_message_context(payload, context) - - # Assert - assert payload["conversation_id"] == "conv-1" - assert payload["message_id"] == "msg-1" - assert payload["created_at"] == 1700000000 - - -def test_start_buffering_should_capture_task_id_and_enqueue_event() -> None: - # Arrange - class Subscription: - def __init__(self) -> None: - self._calls = 0 - - def receive(self, timeout: int = 1) -> bytes | None: - self._calls += 1 - if self._calls == 1: - return b'{"event":"node_started","task_id":"task-1"}' - return None - - subscription = Subscription() - - # Act - buffer_state = service_module._start_buffering(subscription) - ready = buffer_state.task_id_ready.wait(timeout=1) - event = buffer_state.queue.get(timeout=1) - buffer_state.stop_event.set() - finished = buffer_state.done_event.wait(timeout=1) - - # Assert - assert ready is True - assert finished is True - assert buffer_state.task_id_hint == "task-1" - assert event["event"] == "node_started" - - -def test_start_buffering_should_drop_old_event_when_queue_is_full( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - class QueueWithSingleFull: - def __init__(self) -> None: - self._first_put = True - self.items: list[dict[str, Any]] = [{"event": "old"}] - - def put_nowait(self, item: dict[str, Any]) -> None: - if self._first_put: - self._first_put = False - raise queue.Full - self.items.append(item) - - def get_nowait(self) -> dict[str, Any]: - if not self.items: - raise queue.Empty - return self.items.pop(0) - - def empty(self) -> bool: - return len(self.items) == 0 - - fake_queue = QueueWithSingleFull() - monkeypatch.setattr(service_module.queue, "Queue", lambda maxsize=2048: fake_queue) - - class Subscription: - def __init__(self) -> None: - self._calls = 0 - - def receive(self, timeout: int = 1) -> bytes | None: - self._calls += 1 - if self._calls == 1: - return b'{"event":"node_started","task_id":"task-2"}' - return None - - subscription = Subscription() - - # Act - buffer_state = service_module._start_buffering(subscription) - ready = buffer_state.task_id_ready.wait(timeout=1) - buffer_state.stop_event.set() - finished = buffer_state.done_event.wait(timeout=1) - - # Assert - assert ready is True - assert finished is True - assert fake_queue.items[-1]["task_id"] == "task-2" - - -def test_start_buffering_should_set_done_event_when_subscription_raises() -> None: - # Arrange - class Subscription: - def receive(self, timeout: int = 1) -> bytes | None: - raise RuntimeError("subscription failure") - - subscription = Subscription() - - # Act - buffer_state = service_module._start_buffering(subscription) - finished = buffer_state.done_event.wait(timeout=1) - - # Assert - assert finished is True - - -def test_build_workflow_event_stream_should_emit_ping_and_terminal_snapshot_event( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.RUNNING) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr( - service_module, - "_get_message_context", - MagicMock(return_value=MessageContext("conv-1", "msg-1", 1700000000)), - ) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - buffer_state = BufferState( - queue=queue.Queue(), - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - monkeypatch.setattr( - service_module, - "_build_snapshot_events", - MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value, "task_id": "task-1"}]), - ) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.ADVANCED_CHAT, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - ) - ) - - # Assert - assert events[0] == StreamEvent.PING.value - finished_event = cast(Mapping[str, Any], events[1]) - assert finished_event["event"] == StreamEvent.WORKFLOW_FINISHED.value - assert buffer_state.stop_event.is_set() is True - node_repo.get_execution_snapshots_by_workflow_run.assert_called_once() - called_kwargs = node_repo.get_execution_snapshots_by_workflow_run.call_args.kwargs - assert called_kwargs["workflow_run_id"] == "run-1" - - -def test_build_workflow_event_stream_should_emit_periodic_ping_and_stop_after_idle_timeout( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.RUNNING) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - - class AlwaysEmptyQueue: - def empty(self) -> bool: - return False - - def get(self, timeout: int = 1) -> None: - raise queue.Empty - - buffer_state = BufferState( - queue=AlwaysEmptyQueue(), # type: ignore[arg-type] - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - time_values = cycle([0.0, 6.0, 21.0, 26.0]) - monkeypatch.setattr(service_module.time, "time", lambda: next(time_values)) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.WORKFLOW, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - idle_timeout=20.0, - ping_interval=5.0, - ) - ) - - # Assert - assert events == [StreamEvent.PING.value, StreamEvent.PING.value] - assert buffer_state.stop_event.is_set() is True - - -def test_build_workflow_event_stream_should_exit_when_buffer_done_and_empty( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.RUNNING) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - buffer_state = BufferState( - queue=queue.Queue(), - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - buffer_state.done_event.set() - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.WORKFLOW, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - ) - ) - - # Assert - assert events == [StreamEvent.PING.value] - assert buffer_state.stop_event.is_set() is True - - -def test_build_workflow_event_stream_should_continue_when_pause_loading_fails( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.PAUSED) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock(side_effect=RuntimeError("boom"))) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - snapshot_builder = MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value}]) - monkeypatch.setattr(service_module, "_build_snapshot_events", snapshot_builder) - buffer_state = BufferState( - queue=queue.Queue(), - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.WORKFLOW, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - ) - ) - - # Assert - assert events[0] == StreamEvent.PING.value - assert snapshot_builder.call_args.kwargs["pause_entity"] is None diff --git a/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service_additional.py b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service_additional.py new file mode 100644 index 0000000000..d2634d7d7b --- /dev/null +++ b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service_additional.py @@ -0,0 +1,505 @@ +import json +import queue +from collections.abc import Mapping +from dataclasses import dataclass +from datetime import UTC, datetime +from itertools import cycle +from threading import Event +from types import SimpleNamespace +from typing import Any, cast +from unittest.mock import MagicMock + +import pytest +from sqlalchemy.orm import Session, sessionmaker + +from core.app.app_config.entities import WorkflowUIBasedAppConfig +from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity +from core.app.entities.task_entities import StreamEvent +from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext, _WorkflowGenerateEntityWrapper +from graphon.enums import WorkflowExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool +from models.enums import CreatorUserRole +from models.model import AppMode +from models.workflow import WorkflowRun +from repositories.entities.workflow_pause import WorkflowPauseEntity +from services import workflow_event_snapshot_service as service_module +from services.workflow_event_snapshot_service import BufferState, MessageContext, build_workflow_event_stream + + +def _build_workflow_run(status: WorkflowExecutionStatus = WorkflowExecutionStatus.RUNNING) -> WorkflowRun: + return WorkflowRun( + id="run-1", + tenant_id="tenant-1", + app_id="app-1", + workflow_id="workflow-1", + type="workflow", + triggered_from="app-run", + version="v1", + graph=None, + inputs=json.dumps({"query": "hello"}), + status=status, + outputs=json.dumps({}), + error=None, + elapsed_time=1.2, + total_tokens=5, + total_steps=2, + created_by_role=CreatorUserRole.END_USER, + created_by="user-1", + created_at=datetime(2024, 1, 1, tzinfo=UTC), + ) + + +def _build_resumption_context(task_id: str) -> WorkflowResumptionContext: + app_config = WorkflowUIBasedAppConfig( + tenant_id="tenant-1", + app_id="app-1", + app_mode=AppMode.WORKFLOW, + workflow_id="workflow-1", + ) + generate_entity = WorkflowAppGenerateEntity( + task_id=task_id, + app_config=app_config, + inputs={}, + files=[], + user_id="user-1", + stream=True, + invoke_from=InvokeFrom.EXPLORE, + call_depth=0, + workflow_execution_id="run-1", + ) + runtime_state = GraphRuntimeState(variable_pool=VariablePool(), start_at=0.0) + runtime_state.outputs = {"answer": "ok"} + wrapper = _WorkflowGenerateEntityWrapper(entity=generate_entity) + return WorkflowResumptionContext( + generate_entity=wrapper, + serialized_graph_runtime_state=runtime_state.dumps(), + ) + + +class _SessionContext: + def __init__(self, session: Any) -> None: + self._session = session + + def __enter__(self) -> Any: + return self._session + + def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: + return False + + +class _SessionMaker: + def __init__(self, session: Any) -> None: + self._session = session + + def __call__(self) -> _SessionContext: + return _SessionContext(self._session) + + +class _SubscriptionContext: + def __init__(self, subscription: Any) -> None: + self._subscription = subscription + + def __enter__(self) -> Any: + return self._subscription + + def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: + return False + + +class _Topic: + def __init__(self, subscription: Any) -> None: + self._subscription = subscription + + def subscribe(self) -> _SubscriptionContext: + return _SubscriptionContext(self._subscription) + + +class _StaticSubscription: + def receive(self, timeout: int = 1) -> None: + return None + + +@dataclass(frozen=True) +class _PauseEntity(WorkflowPauseEntity): + state: bytes + + @property + def id(self) -> str: + return "pause-1" + + @property + def workflow_execution_id(self) -> str: + return "run-1" + + @property + def resumed_at(self) -> datetime | None: + return None + + @property + def paused_at(self) -> datetime: + return datetime(2024, 1, 1, tzinfo=UTC) + + def get_state(self) -> bytes: + return self.state + + def get_pause_reasons(self) -> list[Any]: + return [] + + +class TestWorkflowEventSnapshotHelpers: + def test_get_message_context_should_return_none_when_no_message(self) -> None: + session = SimpleNamespace(scalar=MagicMock(return_value=None)) + session_maker = _SessionMaker(session) + + result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") + + assert result is None + + def test_get_message_context_should_default_created_at_to_zero_when_message_has_no_timestamp(self) -> None: + message = SimpleNamespace( + id="msg-1", + conversation_id="conv-1", + created_at=None, + answer="answer", + ) + session = SimpleNamespace(scalar=MagicMock(return_value=message)) + session_maker = _SessionMaker(session) + + result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") + + assert result is not None + assert result.created_at == 0 + assert result.message_id == "msg-1" + assert result.conversation_id == "conv-1" + assert result.answer == "answer" + + def test_load_resumption_context_should_return_none_when_pause_entity_missing(self) -> None: + assert service_module._load_resumption_context(None) is None + + def test_load_resumption_context_should_return_none_when_pause_entity_state_is_invalid(self) -> None: + pause_entity = _PauseEntity(state=b"not-a-valid-state") + assert service_module._load_resumption_context(pause_entity) is None + + def test_load_resumption_context_should_parse_valid_state_into_context(self) -> None: + context = _build_resumption_context(task_id="task-ctx") + pause_entity = _PauseEntity(state=context.dumps().encode()) + + result = service_module._load_resumption_context(pause_entity) + + assert result is not None + assert result.get_generate_entity().task_id == "task-ctx" + + def test_resolve_task_id_should_return_workflow_run_id_when_buffer_state_is_missing(self) -> None: + result = service_module._resolve_task_id( + resumption_context=None, + buffer_state=None, + workflow_run_id="run-1", + ) + + assert result == "run-1" + + @pytest.mark.parametrize( + ("payload", "expected"), + [ + (b'{"event":"node_started"}', {"event": "node_started"}), + (b"invalid-json", None), + (b"[]", None), + ], + ) + def test_parse_event_message_should_parse_only_json_object( + self, + payload: bytes, + expected: dict[str, Any] | None, + ) -> None: + result = service_module._parse_event_message(payload) + assert result == expected + + def test_is_terminal_event_should_recognize_finished_and_optional_paused_events(self) -> None: + finished_event = {"event": StreamEvent.WORKFLOW_FINISHED.value} + paused_event = {"event": StreamEvent.WORKFLOW_PAUSED.value} + + is_finished = service_module._is_terminal_event(finished_event, include_paused=False) + paused_without_flag = service_module._is_terminal_event(paused_event, include_paused=False) + paused_with_flag = service_module._is_terminal_event(paused_event, include_paused=True) + + assert is_finished is True + assert paused_without_flag is False + assert paused_with_flag is True + assert service_module._is_terminal_event(StreamEvent.PING.value, include_paused=True) is False + + def test_apply_message_context_should_update_payload_when_context_exists(self) -> None: + payload: dict[str, Any] = {"event": "workflow_started"} + context = MessageContext(conversation_id="conv-1", message_id="msg-1", created_at=1700000000) + + service_module._apply_message_context(payload, context) + + assert payload["conversation_id"] == "conv-1" + assert payload["message_id"] == "msg-1" + assert payload["created_at"] == 1700000000 + + def test_start_buffering_should_capture_task_id_and_enqueue_event(self) -> None: + class Subscription: + def __init__(self) -> None: + self._calls = 0 + + def receive(self, timeout: int = 1) -> bytes | None: + self._calls += 1 + if self._calls == 1: + return b'{"event":"node_started","task_id":"task-1"}' + return None + + subscription = Subscription() + + buffer_state = service_module._start_buffering(subscription) + ready = buffer_state.task_id_ready.wait(timeout=1) + event = buffer_state.queue.get(timeout=1) + buffer_state.stop_event.set() + finished = buffer_state.done_event.wait(timeout=1) + + assert ready is True + assert finished is True + assert buffer_state.task_id_hint == "task-1" + assert event["event"] == "node_started" + + def test_start_buffering_should_drop_old_event_when_queue_is_full( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + class QueueWithSingleFull: + def __init__(self) -> None: + self._first_put = True + self.items: list[dict[str, Any]] = [{"event": "old"}] + + def put_nowait(self, item: dict[str, Any]) -> None: + if self._first_put: + self._first_put = False + raise queue.Full + self.items.append(item) + + def get_nowait(self) -> dict[str, Any]: + if not self.items: + raise queue.Empty + return self.items.pop(0) + + def empty(self) -> bool: + return len(self.items) == 0 + + fake_queue = QueueWithSingleFull() + monkeypatch.setattr(service_module.queue, "Queue", lambda maxsize=2048: fake_queue) + + class Subscription: + def __init__(self) -> None: + self._calls = 0 + + def receive(self, timeout: int = 1) -> bytes | None: + self._calls += 1 + if self._calls == 1: + return b'{"event":"node_started","task_id":"task-2"}' + return None + + subscription = Subscription() + + buffer_state = service_module._start_buffering(subscription) + ready = buffer_state.task_id_ready.wait(timeout=1) + buffer_state.stop_event.set() + finished = buffer_state.done_event.wait(timeout=1) + + assert ready is True + assert finished is True + assert fake_queue.items[-1]["task_id"] == "task-2" + + def test_start_buffering_should_set_done_event_when_subscription_raises(self) -> None: + class Subscription: + def receive(self, timeout: int = 1) -> bytes | None: + raise RuntimeError("subscription failure") + + subscription = Subscription() + buffer_state = service_module._start_buffering(subscription) + + assert buffer_state.done_event.wait(timeout=1) is True + + +class TestBuildWorkflowEventStream: + def test_build_workflow_event_stream_should_emit_ping_and_terminal_snapshot_event( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.RUNNING) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr( + service_module, + "_get_message_context", + MagicMock(return_value=MessageContext("conv-1", "msg-1", 1700000000)), + ) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + buffer_state = BufferState( + queue=queue.Queue(), + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + monkeypatch.setattr( + service_module, + "_build_snapshot_events", + MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value, "task_id": "task-1"}]), + ) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.ADVANCED_CHAT, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + ) + ) + + assert events[0] == StreamEvent.PING.value + finished_event = cast(Mapping[str, Any], events[1]) + assert finished_event["event"] == StreamEvent.WORKFLOW_FINISHED.value + assert buffer_state.stop_event.is_set() is True + node_repo.get_execution_snapshots_by_workflow_run.assert_called_once() + called_kwargs = node_repo.get_execution_snapshots_by_workflow_run.call_args.kwargs + assert called_kwargs["workflow_run_id"] == "run-1" + + def test_build_workflow_event_stream_should_emit_periodic_ping_and_stop_after_idle_timeout( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.RUNNING) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + + class AlwaysEmptyQueue: + def empty(self) -> bool: + return False + + def get(self, timeout: int = 1) -> None: + raise queue.Empty + + buffer_state = BufferState( + queue=AlwaysEmptyQueue(), # type: ignore[arg-type] + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + time_values = cycle([0.0, 6.0, 21.0, 26.0]) + monkeypatch.setattr(service_module.time, "time", lambda: next(time_values)) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.WORKFLOW, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + idle_timeout=20.0, + ping_interval=5.0, + ) + ) + + assert events == [StreamEvent.PING.value, StreamEvent.PING.value] + assert buffer_state.stop_event.is_set() is True + + def test_build_workflow_event_stream_should_exit_when_buffer_done_and_empty( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.RUNNING) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + buffer_state = BufferState( + queue=queue.Queue(), + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + buffer_state.done_event.set() + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.WORKFLOW, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + ) + ) + + assert events == [StreamEvent.PING.value] + assert buffer_state.stop_event.is_set() is True + + def test_build_workflow_event_stream_should_continue_when_pause_loading_fails( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.PAUSED) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock(side_effect=RuntimeError("boom"))) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + snapshot_builder = MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value}]) + monkeypatch.setattr(service_module, "_build_snapshot_events", snapshot_builder) + buffer_state = BufferState( + queue=queue.Queue(), + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.WORKFLOW, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + ) + ) + + assert events[0] == StreamEvent.PING.value + assert snapshot_builder.call_args.kwargs["pause_entity"] is None diff --git a/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py b/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py index 98d057e41f..d7192994b2 100644 --- a/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py +++ b/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py @@ -3,9 +3,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.enums import BuiltinNodeTypes -from graphon.nodes.human_input.entities import HumanInputNodeData from sqlalchemy.orm import sessionmaker from core.workflow.human_input_compat import ( @@ -15,6 +12,9 @@ from core.workflow.human_input_compat import ( ExternalRecipient, MemberRecipient, ) +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.enums import BuiltinNodeTypes +from graphon.nodes.human_input.entities import HumanInputNodeData from services import workflow_service as workflow_service_module from services.workflow_service import WorkflowService diff --git a/api/tests/unit_tests/tasks/test_clean_dataset_task.py b/api/tests/unit_tests/tasks/test_clean_dataset_task.py index 936a10d6c5..b4332334ab 100644 --- a/api/tests/unit_tests/tasks/test_clean_dataset_task.py +++ b/api/tests/unit_tests/tasks/test_clean_dataset_task.py @@ -60,12 +60,6 @@ def mock_db_session(): cm.__exit__.return_value = None mock_sf.create_session.return_value = cm - # Setup query chain - mock_query = MagicMock() - mock_session.query.return_value = mock_query - mock_query.where.return_value = mock_query - mock_query.delete.return_value = 0 - # Setup scalars for select queries mock_session.scalars.return_value.all.return_value = [] @@ -220,11 +214,6 @@ class TestPipelineAndWorkflowDeletion: - Pipeline record is deleted - Related workflow record is deleted """ - # Arrange - mock_query = mock_db_session.session.query.return_value - mock_query.where.return_value = mock_query - mock_query.delete.return_value = 1 - # Act clean_dataset_task( dataset_id=dataset_id, @@ -236,9 +225,9 @@ class TestPipelineAndWorkflowDeletion: pipeline_id=pipeline_id, ) - # Assert - verify delete was called for pipeline-related queries - # The actual count depends on total queries, but pipeline deletion should add 2 more - assert mock_query.delete.call_count >= 7 # 5 base + 2 pipeline/workflow + # Assert - verify execute was called for delete operations + # 1 attachment JOIN query + 5 base deletes + 2 pipeline/workflow deletes = 8 + assert mock_db_session.session.execute.call_count >= 8 def test_clean_dataset_task_without_pipeline_id( self, @@ -256,11 +245,6 @@ class TestPipelineAndWorkflowDeletion: Expected behavior: - Pipeline and workflow deletion queries are not executed """ - # Arrange - mock_query = mock_db_session.session.query.return_value - mock_query.where.return_value = mock_query - mock_query.delete.return_value = 1 - # Act clean_dataset_task( dataset_id=dataset_id, @@ -272,8 +256,9 @@ class TestPipelineAndWorkflowDeletion: pipeline_id=None, ) - # Assert - verify delete was called only for base queries (5 times) - assert mock_query.delete.call_count == 5 + # Assert - verify execute was called for delete operations + # 1 attachment JOIN query + 5 base deletes = 6 + assert mock_db_session.session.execute.call_count == 6 # ============================================================================ diff --git a/api/tests/unit_tests/tasks/test_dataset_indexing_task.py b/api/tests/unit_tests/tasks/test_dataset_indexing_task.py index 34e474c921..5dad58b8f1 100644 --- a/api/tests/unit_tests/tasks/test_dataset_indexing_task.py +++ b/api/tests/unit_tests/tasks/test_dataset_indexing_task.py @@ -82,8 +82,8 @@ def mock_db_session(): """Mock session_factory.create_session() to return a session whose queries use shared test data. Tests set session._shared_data = {"dataset": , "documents": [, ...]} - This fixture makes session.query(Dataset).first() return the shared dataset, - and session.query(Document).all()/first() return from the shared documents. + This fixture makes session.scalar(select(Dataset)...) return the shared dataset, + and session.scalars(select(Document)...).all() return the shared documents. """ with patch("tasks.document_indexing_task.session_factory") as mock_sf: session = MagicMock() @@ -92,93 +92,68 @@ def mock_db_session(): # Keep a pointer so repeated Document.first() calls iterate across provided docs session._doc_first_idx = 0 - def _query_side_effect(model): - q = MagicMock() + def _get_entity(stmt) -> type | None: + """Extract the mapped entity class from a SQLAlchemy select statement.""" + try: + descs = stmt.column_descriptions + if descs: + return descs[0].get("entity") + except (AttributeError, TypeError): + pass + return None - # Capture filters passed via where(...) so first()/all() can honor them. - q._filters = {} + def _extract_id_from_where(stmt) -> str | None: + """Return the value bound to the 'id' column in the WHERE clause, if present.""" + try: + where = stmt.whereclause + if where is None: + return None + # Both single-clause and AND-clause-list cases + clauses = list(getattr(where, "clauses", [where])) + for clause in clauses: + left = getattr(clause, "left", None) + right = getattr(clause, "right", None) + if left is not None and right is not None: + if getattr(left, "key", None) == "id": + return getattr(right, "value", None) + except Exception: + pass + return None - def _extract_filters(*conds, **kw): - # Support both SQLAlchemy expressions (BinaryExpression) and kwargs - # We only need the simple fields used by production code: id, dataset_id, and id.in_(...) - for cond in conds: - left = getattr(cond, "left", None) - right = getattr(cond, "right", None) - key = None - if left is not None: - key = getattr(left, "key", None) or getattr(left, "name", None) - if not key: - continue - # Right side might be a BindParameter with .value, or a raw value/sequence - val = getattr(right, "value", right) - q._filters[key] = val - # Also accept kwargs (e.g., where(id=...)) just in case - for k, v in kw.items(): - q._filters[k] = v - - def _where_side_effect(*conds, **kw): - _extract_filters(*conds, **kw) - return q - - q.where.side_effect = _where_side_effect - - # Dataset queries - if model.__name__ == "Dataset": - - def _dataset_first(): - ds = session._shared_data.get("dataset") - if not ds: - return None - if "id" in q._filters: - val = q._filters["id"] - if isinstance(val, (list, tuple, set)): - return ds if ds.id in val else None - return ds if ds.id == val else None - return ds - - def _dataset_all(): - ds = session._shared_data.get("dataset") - if not ds: - return [] - first = _dataset_first() - return [first] if first else [] - - q.first.side_effect = _dataset_first - q.all.side_effect = _dataset_all - return q - - # Document queries - if model.__name__ == "Document": - - def _apply_doc_filters(docs): - result = list(docs) - for key in ("id", "dataset_id"): - if key in q._filters: - val = q._filters[key] - if isinstance(val, (list, tuple, set)): - result = [d for d in result if getattr(d, key, None) in val] - else: - result = [d for d in result if getattr(d, key, None) == val] - return result - - def _docs_all(): + def _scalar_side_effect(stmt): + entity = _get_entity(stmt) + if entity is not None: + if entity.__name__ == "Dataset": + return session._shared_data.get("dataset") + elif entity.__name__ == "Document": docs = session._shared_data.get("documents", []) - return _apply_doc_filters(docs) + if not docs: + return None + # When the WHERE clause filters by id, return the matching document + queried_id = _extract_id_from_where(stmt) + if queried_id: + doc_map = {d.id: d for d in docs} + return doc_map.get(queried_id, docs[0]) + return docs[0] + return None - def _docs_first(): - docs = _docs_all() - return docs[0] if docs else None + def _scalars_side_effect(stmt): + entity = _get_entity(stmt) + result = MagicMock() + if entity is not None: + if entity.__name__ == "Document": + result.all.return_value = list(session._shared_data.get("documents", [])) + elif entity.__name__ == "Dataset": + ds = session._shared_data.get("dataset") + result.all.return_value = [ds] if ds else [] + else: + result.all.return_value = [] + else: + result.all.return_value = [] + return result - q.all.side_effect = _docs_all - q.first.side_effect = _docs_first - return q - - # Default fallback - q.first.return_value = None - q.all.return_value = [] - return q - - session.query.side_effect = _query_side_effect + session.scalar.side_effect = _scalar_side_effect + session.scalars.side_effect = _scalars_side_effect # Implement session.begin() context manager that commits on exit session.commit = MagicMock() @@ -638,8 +613,6 @@ class TestProgressTracking: wrapper = TaskWrapper(data=next_task_data) mock_redis.rpop.return_value = wrapper.serialize() - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset - with patch("tasks.document_indexing_task.FeatureService.get_features") as mock_features: mock_features.return_value.billing.enabled = False @@ -662,7 +635,6 @@ class TestProgressTracking: """ # Arrange mock_redis.rpop.return_value = None # No more tasks - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset with patch("tasks.document_indexing_task.FeatureService.get_features") as mock_features: mock_features.return_value.billing.enabled = False @@ -780,8 +752,7 @@ class TestErrorHandling: If the dataset doesn't exist, the task should exit gracefully. """ - # Arrange - mock_db_session.query.return_value.where.return_value.first.return_value = None + # Arrange - dataset is not in _shared_data (None by default), so scalar() returns None # Act _document_indexing(dataset_id, document_ids) @@ -806,8 +777,6 @@ class TestErrorHandling: # Set up rpop to return task once for concurrency check mock_redis.rpop.side_effect = [wrapper.serialize(), None] - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset - # Make _document_indexing raise an error with patch("tasks.document_indexing_task._document_indexing") as mock_indexing: mock_indexing.side_effect = Exception("Processing failed") @@ -844,7 +813,7 @@ class TestErrorHandling: # Mock rpop to return tasks one by one mock_redis.rpop.side_effect = tasks[:concurrency_limit] + [None] - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset + mock_db_session._shared_data["dataset"] = mock_dataset with patch("tasks.document_indexing_task.dify_config.TENANT_ISOLATED_TASK_CONCURRENCY", concurrency_limit): with patch("tasks.document_indexing_task.normal_document_indexing_task") as mock_task: @@ -977,7 +946,7 @@ class TestAdvancedScenarios: # Mock rpop to return tasks up to concurrency limit mock_redis.rpop.side_effect = waiting_tasks[:concurrency_limit] + [None] - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset + mock_db_session._shared_data["dataset"] = mock_dataset with patch("tasks.document_indexing_task.dify_config.TENANT_ISOLATED_TASK_CONCURRENCY", concurrency_limit): with patch("tasks.document_indexing_task.normal_document_indexing_task") as mock_task: @@ -1070,7 +1039,7 @@ class TestAdvancedScenarios: # Mock rpop to return tasks in FIFO order mock_redis.rpop.side_effect = tasks + [None] - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset + mock_db_session._shared_data["dataset"] = mock_dataset with patch("tasks.document_indexing_task.dify_config.TENANT_ISOLATED_TASK_CONCURRENCY", 3): with patch("tasks.document_indexing_task.normal_document_indexing_task") as mock_task: @@ -1108,7 +1077,7 @@ class TestAdvancedScenarios: """ # Arrange mock_redis.rpop.return_value = None # Empty queue - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset + mock_db_session._shared_data["dataset"] = mock_dataset with patch("tasks.document_indexing_task.normal_document_indexing_task") as mock_task: # Act @@ -1276,7 +1245,7 @@ class TestIntegration: # First call returns task 2, second call returns None mock_redis.rpop.side_effect = [wrapper.serialize(), None] - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset + mock_db_session._shared_data["dataset"] = mock_dataset with patch("tasks.document_indexing_task.FeatureService.get_features") as mock_features: mock_features.return_value.billing.enabled = False @@ -1433,7 +1402,7 @@ class TestPerformanceScenarios: # Mock rpop to return tasks up to concurrency limit mock_redis.rpop.side_effect = waiting_tasks[:concurrency_limit] + [None] - mock_db_session.query.return_value.where.return_value.first.return_value = mock_dataset + mock_db_session._shared_data["dataset"] = mock_dataset with patch("tasks.document_indexing_task.dify_config.TENANT_ISOLATED_TASK_CONCURRENCY", concurrency_limit): with patch("tasks.document_indexing_task.normal_document_indexing_task") as mock_task: @@ -1536,10 +1505,8 @@ class TestDocumentIndexingTaskSummaryFlow: """Test early return when dataset does not exist.""" # Arrange session = MagicMock() - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.return_value = None - session.query.side_effect = lambda model: dataset_query + session = MagicMock() + session.scalar.return_value = None # dataset not found create_session_mock = MagicMock(return_value=_SessionContext(session)) monkeypatch.setattr("tasks.document_indexing_task.session_factory.create_session", create_session_mock) @@ -1560,16 +1527,15 @@ class TestDocumentIndexingTaskSummaryFlow: dataset = SimpleNamespace(id="dataset-1", tenant_id="tenant-1") document = SimpleNamespace(id="doc-1", indexing_status=None, error=None, stopped_at=None) - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.return_value = dataset - - document_query = MagicMock() - document_query.where.return_value = document_query - document_query.first.return_value = document - session = MagicMock() - session.query.side_effect = lambda model: dataset_query if model is Dataset else document_query + + def _scalar_se(stmt): + entity = stmt.column_descriptions[0].get("entity") + if entity is Dataset: + return dataset + return document + + session.scalar.side_effect = _scalar_se monkeypatch.setattr( "tasks.document_indexing_task.session_factory.create_session", @@ -1643,9 +1609,12 @@ class TestDocumentIndexingTaskSummaryFlow: session2.begin.return_value = nullcontext() session3 = MagicMock() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: phase1_document_query - session3.query.side_effect = lambda model: summary_document_query if model is Document else dataset_query + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=phase1_docs)) + session3.scalar.return_value = dataset + session3.scalars.return_value = MagicMock( + all=MagicMock(return_value=[doc_eligible, doc_skip_form, doc_skip_status]) + ) create_session_mock = MagicMock( side_effect=[_SessionContext(session1), _SessionContext(session2), _SessionContext(session3)] @@ -1704,9 +1673,11 @@ class TestDocumentIndexingTaskSummaryFlow: session2 = MagicMock() session2.begin.return_value = nullcontext() session3 = MagicMock() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: phase1_query - session3.query.side_effect = lambda model: summary_query if model is Document else dataset_query + + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=[SimpleNamespace(id="doc-1")])) + session3.scalar.return_value = dataset + session3.scalars.return_value = MagicMock(all=MagicMock(return_value=[doc_eligible])) monkeypatch.setattr( "tasks.document_indexing_task.session_factory.create_session", @@ -1736,21 +1707,14 @@ class TestDocumentIndexingTaskSummaryFlow: """Test early return when dataset is missing after indexing.""" # Arrange dataset = SimpleNamespace(id="dataset-1", tenant_id="tenant-1") - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.side_effect = [dataset, None] - - document_query = MagicMock() - document_query.where.return_value = document_query - document_query.all.return_value = [SimpleNamespace(id="doc-1")] session1 = MagicMock() session2 = MagicMock() session2.begin.return_value = nullcontext() session3 = MagicMock() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: document_query - session3.query.side_effect = lambda model: dataset_query + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=[SimpleNamespace(id="doc-1")])) + session3.scalar.return_value = None # dataset not found on second query monkeypatch.setattr( "tasks.document_indexing_task.session_factory.create_session", @@ -1770,7 +1734,7 @@ class TestDocumentIndexingTaskSummaryFlow: _document_indexing("dataset-1", ["doc-1"]) # Assert - session3.query.assert_called() + session3.scalar.assert_called() def test_should_skip_summary_when_not_high_quality(self, monkeypatch: pytest.MonkeyPatch) -> None: """Test summary generation skipped when indexing_technique is not high_quality.""" @@ -1781,21 +1745,14 @@ class TestDocumentIndexingTaskSummaryFlow: indexing_technique="economy", summary_index_setting={"enable": True}, ) - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.return_value = dataset - - document_query = MagicMock() - document_query.where.return_value = document_query - document_query.all.return_value = [SimpleNamespace(id="doc-1")] - session1 = MagicMock() session2 = MagicMock() session2.begin.return_value = nullcontext() session3 = MagicMock() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: document_query - session3.query.side_effect = lambda model: dataset_query + + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=[SimpleNamespace(id="doc-1")])) + session3.scalar.return_value = dataset monkeypatch.setattr( "tasks.document_indexing_task.session_factory.create_session", @@ -1824,19 +1781,12 @@ class TestDocumentIndexingTaskSummaryFlow: """Test summary generation is skipped when indexing is paused.""" # Arrange dataset = SimpleNamespace(id="dataset-1", tenant_id="tenant-1") - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.return_value = dataset - - document_query = MagicMock() - document_query.where.return_value = document_query - document_query.all.return_value = [SimpleNamespace(id="doc-1")] session1 = MagicMock() session2 = MagicMock() session2.begin.return_value = nullcontext() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: document_query + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=[SimpleNamespace(id="doc-1")])) create_session_mock = MagicMock(side_effect=[_SessionContext(session1), _SessionContext(session2)]) monkeypatch.setattr("tasks.document_indexing_task.session_factory.create_session", create_session_mock) @@ -1865,19 +1815,12 @@ class TestDocumentIndexingTaskSummaryFlow: """Test generic indexing runner exception is handled.""" # Arrange dataset = SimpleNamespace(id="dataset-1", tenant_id="tenant-1") - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.return_value = dataset - - document_query = MagicMock() - document_query.where.return_value = document_query - document_query.all.return_value = [SimpleNamespace(id="doc-1")] session1 = MagicMock() session2 = MagicMock() session2.begin.return_value = nullcontext() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: document_query + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=[SimpleNamespace(id="doc-1")])) monkeypatch.setattr( "tasks.document_indexing_task.session_factory.create_session", @@ -1922,25 +1865,15 @@ class TestDocumentIndexingTaskSummaryFlow: indexing_technique="high_quality", summary_index_setting={"enable": True}, ) - dataset_query = MagicMock() - dataset_query.where.return_value = dataset_query - dataset_query.first.return_value = dataset - - phase1_query = MagicMock() - phase1_query.where.return_value = phase1_query - phase1_query.all.return_value = [SimpleNamespace(id="doc-1")] - - summary_query = MagicMock() - summary_query.where.return_value = summary_query - summary_query.all.return_value = [_FalseyDocument("missing-doc")] - session1 = MagicMock() session2 = MagicMock() session2.begin.return_value = nullcontext() session3 = MagicMock() - session1.query.side_effect = lambda model: dataset_query - session2.query.side_effect = lambda model: phase1_query - session3.query.side_effect = lambda model: summary_query if model is Document else dataset_query + + session1.scalar.return_value = dataset + session2.scalars.return_value = MagicMock(all=MagicMock(return_value=[SimpleNamespace(id="doc-1")])) + session3.scalar.return_value = dataset + session3.scalars.return_value = MagicMock(all=MagicMock(return_value=[_FalseyDocument("missing-doc")])) monkeypatch.setattr( "tasks.document_indexing_task.session_factory.create_session", diff --git a/api/tests/unit_tests/tasks/test_document_indexing_sync_task.py b/api/tests/unit_tests/tasks/test_document_indexing_sync_task.py index f49f4535af..41d3068a10 100644 --- a/api/tests/unit_tests/tasks/test_document_indexing_sync_task.py +++ b/api/tests/unit_tests/tasks/test_document_indexing_sync_task.py @@ -80,7 +80,7 @@ def mock_db_session(mock_document, mock_dataset): with patch("tasks.document_indexing_sync_task.session_factory", autospec=True) as mock_session_factory: session = MagicMock() session.scalars.return_value.all.return_value = [] - session.query.return_value.where.return_value.first.side_effect = [mock_document, mock_dataset] + session.scalar.side_effect = [mock_document, mock_dataset] begin_cm = MagicMock() begin_cm.__enter__.return_value = session @@ -242,14 +242,13 @@ class TestDataSourceInfoSerialization: # DB session mock — shared across all ``session_factory.create_session()`` calls session = MagicMock() session.scalars.return_value.all.return_value = [] - # .where() path: session 1 reads document + dataset, session 2 reads dataset - session.query.return_value.where.return_value.first.side_effect = [ + # All .first() calls are now session.scalar() — ordered by call sequence: + # session 1: document + dataset, session 2: dataset (clean), session 3: document (update), + # session 4: document (indexing) + session.scalar.side_effect = [ mock_document, mock_dataset, mock_dataset, - ] - # .filter_by() path: session 3 (update), session 4 (indexing) - session.query.return_value.filter_by.return_value.first.side_effect = [ mock_document, mock_document, ] diff --git a/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py b/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py index 7119217e94..591da56f49 100644 --- a/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py +++ b/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py @@ -5,8 +5,8 @@ from types import SimpleNamespace from typing import Any import pytest -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from tasks import human_input_timeout_tasks as task_module diff --git a/api/tests/unit_tests/tasks/test_remove_app_and_related_data_task.py b/api/tests/unit_tests/tasks/test_remove_app_and_related_data_task.py index 0ed4ca05fa..626d1ee0a8 100644 --- a/api/tests/unit_tests/tasks/test_remove_app_and_related_data_task.py +++ b/api/tests/unit_tests/tasks/test_remove_app_and_related_data_task.py @@ -3,7 +3,6 @@ from unittest.mock import MagicMock, call, patch import pytest from libs.archive_storage import ArchiveStorageNotConfiguredError -from models.workflow import WorkflowArchiveLog from tasks.remove_app_and_related_data_task import ( _delete_app_workflow_archive_logs, _delete_archived_workflow_run_files, @@ -83,16 +82,11 @@ class TestDeleteWorkflowArchiveLogs: assert params == {"tenant_id": tenant_id, "app_id": app_id} assert name == "workflow archive log" - mock_query = MagicMock() - mock_delete_query = MagicMock() - mock_query.where.return_value = mock_delete_query - mock_db.session.query.return_value = mock_query + mock_session = MagicMock() - delete_func(mock_db.session, "log-1") + delete_func(mock_session, "log-1") - mock_db.session.query.assert_called_once_with(WorkflowArchiveLog) - mock_query.where.assert_called_once() - mock_delete_query.delete.assert_called_once_with(synchronize_session=False) + mock_session.execute.assert_called_once() class TestDeleteArchivedWorkflowRunFiles: diff --git a/api/tests/unit_tests/tools/test_mcp_tool.py b/api/tests/unit_tests/tools/test_mcp_tool.py index 68359ba078..689b973097 100644 --- a/api/tests/unit_tests/tools/test_mcp_tool.py +++ b/api/tests/unit_tests/tools/test_mcp_tool.py @@ -1,9 +1,9 @@ import base64 from decimal import Decimal +from typing import Any from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMUsage from core.mcp.types import ( AudioContent, @@ -18,9 +18,10 @@ from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvokeMessage from core.tools.mcp_tool.tool import MCPTool +from graphon.model_runtime.entities.llm_entities import LLMUsage -def _make_mcp_tool(output_schema: dict | None = None) -> MCPTool: +def _make_mcp_tool(output_schema: dict[str, Any] | None = None) -> MCPTool: identity = ToolIdentity( author="test", name="test_mcp_tool", diff --git a/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py b/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py index ffa6833524..c166a946d9 100644 --- a/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py +++ b/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py @@ -2,6 +2,9 @@ from decimal import Decimal from unittest.mock import MagicMock, patch import pytest + +from core.llm_generator.output_parser.errors import OutputParserError +from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output from graphon.model_runtime.entities.llm_entities import ( LLMResult, LLMResultChunk, @@ -18,9 +21,6 @@ from graphon.model_runtime.entities.message_entities import ( ) from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType -from core.llm_generator.output_parser.errors import OutputParserError -from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output - def create_mock_usage(prompt_tokens: int = 10, completion_tokens: int = 5) -> LLMUsage: """Create a mock LLMUsage with all required fields""" diff --git a/api/tests/unit_tests/utils/test_text_processing.py b/api/tests/unit_tests/utils/test_text_processing.py index bf61162a66..5f6ccbcdff 100644 --- a/api/tests/unit_tests/utils/test_text_processing.py +++ b/api/tests/unit_tests/utils/test_text_processing.py @@ -19,7 +19,57 @@ from core.tools.utils.text_processing_utils import remove_leading_symbols ("[Google](https://google.com) is a search engine", "[Google](https://google.com) is a search engine"), ("[Example](http://example.com) some text", "[Example](http://example.com) some text"), # Leading symbols before markdown link are removed, including the opening bracket [ - ("@[Test](https://example.com)", "Test](https://example.com)"), + ("@[Test](https://example.com)", "[Test](https://example.com)"), + ("~~标题~~", "标题~~"), + ('""quoted', "quoted"), + ("''test", "test"), + ("##话题", "话题"), + ("$$价格", "价格"), + ("%%百分比", "百分比"), + ("&&与逻辑", "与逻辑"), + ("((括号))", "括号))"), + ("**强调**", "强调**"), + ("++自增", "自增"), + (",,逗号", "逗号"), + ("..省略", "省略"), + ("//注释", "注释"), + ("::范围", "范围"), + (";;分号", "分号"), + ("<<左移", "左移"), + ("==等于", "等于"), + (">>右移", "右移"), + ("??疑问", "疑问"), + ("@@提及", "提及"), + ("^^上标", "上标"), + ("__下划线", "下划线"), + ("``代码", "代码"), + ("~~删除线", "删除线"), + (" 全角空格开头", "全角空格开头"), + ("、顿号开头", "顿号开头"), + ("。句号开头", "句号开头"), + ("「引号」测试", "引号」测试"), + ("『书名号』", "书名号』"), + ("【保留】测试", "【保留】测试"), + ("〖括号〗测试", "括号〗测试"), + ("〔括号〕测试", "括号〕测试"), + ("~~【保留】~~", "【保留】~~"), + ('"[公告]"', '[公告]"'), + ("[公告] 更新", "[公告] 更新"), + ("【通知】重要", "【通知】重要"), + ("[[嵌套]]", "[[嵌套]]"), + ("【【嵌套】】", "【【嵌套】】"), + ("[【混合】]", "[【混合】]"), + ("normal text", "normal text"), + ("123数字", "123数字"), + ("中文开头", "中文开头"), + ("alpha", "alpha"), + ("~", ""), + ("【", "【"), + ("[", "["), + ("~~~", ""), + ("【【【", "【【【"), + ("\t制表符", "\t制表符"), + ("\n换行", "\n换行"), ], ) def test_remove_leading_symbols(input_text, expected_output): diff --git a/api/tests/workflow_test_utils.py b/api/tests/workflow_test_utils.py index d33ac2c710..1415bb1d52 100644 --- a/api/tests/workflow_test_utils.py +++ b/api/tests/workflow_test_utils.py @@ -1,13 +1,12 @@ from collections.abc import Mapping from typing import Any +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context +from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool from graphon.entities import GraphInitParams from graphon.runtime import VariablePool from graphon.variables.variables import Variable -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context -from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool - def build_test_run_context( *, diff --git a/api/uv.lock b/api/uv.lock index 51424fc502..77ba905a67 100644 --- a/api/uv.lock +++ b/api/uv.lock @@ -8,6 +8,42 @@ resolution-markers = [ "sys_platform != 'emscripten' and sys_platform != 'linux' and sys_platform != 'win32'", ] +[manifest] +members = [ + "dify-api", + "dify-vdb-alibabacloud-mysql", + "dify-vdb-analyticdb", + "dify-vdb-baidu", + "dify-vdb-chroma", + "dify-vdb-clickzetta", + "dify-vdb-couchbase", + "dify-vdb-elasticsearch", + "dify-vdb-hologres", + "dify-vdb-huawei-cloud", + "dify-vdb-iris", + "dify-vdb-lindorm", + "dify-vdb-matrixone", + "dify-vdb-milvus", + "dify-vdb-myscale", + "dify-vdb-oceanbase", + "dify-vdb-opengauss", + "dify-vdb-opensearch", + "dify-vdb-oracle", + "dify-vdb-pgvecto-rs", + "dify-vdb-pgvector", + "dify-vdb-qdrant", + "dify-vdb-relyt", + "dify-vdb-tablestore", + "dify-vdb-tencent", + "dify-vdb-tidb-on-qdrant", + "dify-vdb-tidb-vector", + "dify-vdb-upstash", + "dify-vdb-vastbase", + "dify-vdb-vikingdb", + "dify-vdb-weaviate", +] +overrides = [{ name = "pyarrow", specifier = ">=18.0.0" }] + [[package]] name = "abnf" version = "2.2.0" @@ -195,7 +231,7 @@ wheels = [ [[package]] name = "aliyun-log-python-sdk" -version = "0.9.37" +version = "0.9.44" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "dateparser" }, @@ -207,7 +243,7 @@ dependencies = [ { name = "requests" }, { name = "six" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/90/70/291d494619bb7b0cbcc00689ad995945737c2c9e0bff2733e0aa7dbaee14/aliyun_log_python_sdk-0.9.37.tar.gz", hash = "sha256:ea65c9cca3a7377cef87d568e897820338328a53a7acb1b02f1383910e103f68", size = 152549, upload-time = "2025-11-27T07:56:06.098Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2d/5c/f4076b129fe9168f5424f9d89afc587baf36a844f4ae7b619a951a97c76c/aliyun_log_python_sdk-0.9.44.tar.gz", hash = "sha256:891d0ba91cdce8e5e6b430a50512e092751621680bc9f0b7c7325aaa7c1944f1", size = 154147, upload-time = "2026-03-30T08:40:59.04Z" } [[package]] name = "aliyun-python-sdk-core" @@ -286,14 +322,14 @@ wheels = [ [[package]] name = "apscheduler" -version = "3.11.1" +version = "3.11.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "tzlocal" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/d0/81/192db4f8471de5bc1f0d098783decffb1e6e69c4f8b4bc6711094691950b/apscheduler-3.11.1.tar.gz", hash = "sha256:0db77af6400c84d1747fe98a04b8b58f0080c77d11d338c4f507a9752880f221", size = 108044, upload-time = "2025-10-31T18:55:42.819Z" } +sdist = { url = "https://files.pythonhosted.org/packages/07/12/3e4389e5920b4c1763390c6d371162f3784f86f85cd6d6c1bfe68eef14e2/apscheduler-3.11.2.tar.gz", hash = "sha256:2a9966b052ec805f020c8c4c3ae6e6a06e24b1bf19f2e11d91d8cca0473eef41", size = 108683, upload-time = "2025-12-22T00:39:34.884Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/58/9f/d3c76f76c73fcc959d28e9def45b8b1cc3d7722660c5003b19c1022fd7f4/apscheduler-3.11.1-py3-none-any.whl", hash = "sha256:6162cb5683cb09923654fa9bdd3130c4be4bfda6ad8990971c9597ecd52965d2", size = 64278, upload-time = "2025-10-31T18:55:41.186Z" }, + { url = "https://files.pythonhosted.org/packages/9f/64/2e54428beba8d9992aa478bb8f6de9e4ecaa5f8f513bcfd567ed7fb0262d/apscheduler-3.11.2-py3-none-any.whl", hash = "sha256:ce005177f741409db4e4dd40a7431b76feb856b9dd69d57e0da49d6715bfd26d", size = 64439, upload-time = "2025-12-22T00:39:33.303Z" }, ] [[package]] @@ -335,14 +371,14 @@ wheels = [ [[package]] name = "authlib" -version = "1.6.9" +version = "1.6.11" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cryptography" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/af/98/00d3dd826d46959ad8e32af2dbb2398868fd9fd0683c26e56d0789bd0e68/authlib-1.6.9.tar.gz", hash = "sha256:d8f2421e7e5980cc1ddb4e32d3f5fa659cfaf60d8eaf3281ebed192e4ab74f04", size = 165134, upload-time = "2026-03-02T07:44:01.998Z" } +sdist = { url = "https://files.pythonhosted.org/packages/28/10/b325d58ffe86815b399334a101e63bc6fa4e1953921cb23703b48a0a0220/authlib-1.6.11.tar.gz", hash = "sha256:64db35b9b01aeccb4715a6c9a6613a06f2bd7be2ab9d2eb89edd1dfc7580a38f", size = 165359, upload-time = "2026-04-16T07:22:50.279Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/53/23/b65f568ed0c22f1efacb744d2db1a33c8068f384b8c9b482b52ebdbc3ef6/authlib-1.6.9-py2.py3-none-any.whl", hash = "sha256:f08b4c14e08f0861dc18a32357b33fbcfd2ea86cfe3fe149484b4d764c4a0ac3", size = 244197, upload-time = "2026-03-02T07:44:00.307Z" }, + { url = "https://files.pythonhosted.org/packages/57/2f/55fca558f925a51db046e5b929deb317ddb05afed74b22d89f4eca578980/authlib-1.6.11-py2.py3-none-any.whl", hash = "sha256:c8687a9a26451c51a34a06fa17bb97cb15bba46a6a626755e2d7f50da8bff3e3", size = 244469, upload-time = "2026-04-16T07:22:48.413Z" }, ] [[package]] @@ -437,7 +473,7 @@ wheels = [ [[package]] name = "bce-python-sdk" -version = "0.9.68" +version = "0.9.69" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "crc32c" }, @@ -445,9 +481,9 @@ dependencies = [ { name = "pycryptodome" }, { name = "six" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ca/7c/8b4d9128e571f898f9f177dc9f41e31692d8ddb08a963b0c576f219d1304/bce_python_sdk-0.9.68.tar.gz", hash = "sha256:adf182868ed25e53cc3c1573dad9a2b1e9b72ed1ffd0d3ef326f5fa93da7cfa6", size = 296349, upload-time = "2026-03-30T02:57:32.948Z" } +sdist = { url = "https://files.pythonhosted.org/packages/07/9c/8fdaf7f9259002b5aa9101bb88252f6d05f65c4535bbca567457da84d765/bce_python_sdk-0.9.69.tar.gz", hash = "sha256:2aaa9f4fc118b3efb720a66d7a541789b7d838a1ddacb9f3c6faa6b75e1c7d23", size = 300008, upload-time = "2026-04-10T08:13:29.769Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/fa/4e/eaaba9264667d675c3de76485dc511f0f233c31bada8752411f7fc5170be/bce_python_sdk-0.9.68-py3-none-any.whl", hash = "sha256:fcb484db4a54aa2c4675834c10bc6c37d42929fd138faaf6c01f933d8fa927ed", size = 411932, upload-time = "2026-03-30T02:57:27.847Z" }, + { url = "https://files.pythonhosted.org/packages/ca/3b/41c2985d1b3b3bd5cdf103b4156b08320268ee7a0617f2a40c34fdd377e9/bce_python_sdk-0.9.69-py3-none-any.whl", hash = "sha256:50fb94833b5f4931255296396081b85143101bd9a7a894efbf20d1f759779de5", size = 415659, upload-time = "2026-04-10T08:13:27.958Z" }, ] [[package]] @@ -501,6 +537,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/1a/39/47f9197bdd44df24d67ac8893641e16f386c984a0619ef2ee4c51fbbc019/beautifulsoup4-4.14.3-py3-none-any.whl", hash = "sha256:0918bfe44902e6ad8d57732ba310582e98da931428d231a5ecb9e7c703a735bb", size = 107721, upload-time = "2025-11-30T15:08:24.087Z" }, ] +[[package]] +name = "bidict" +version = "0.23.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/9a/6e/026678aa5a830e07cd9498a05d3e7e650a4f56a42f267a53d22bcda1bdc9/bidict-0.23.1.tar.gz", hash = "sha256:03069d763bc387bbd20e7d49914e75fc4132a41937fa3405417e1a5a2d006d71", size = 29093, upload-time = "2024-02-18T19:09:05.748Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/99/37/e8730c3587a65eb5645d4aba2d27aae48e8003614d6aaf15dda67f702f1f/bidict-0.23.1-py3-none-any.whl", hash = "sha256:5dae8d4d79b552a71cbabc7deb25dfe8ce710b17ff41711e13010ead2abfc3e5", size = 32764, upload-time = "2024-02-18T19:09:04.156Z" }, +] + [[package]] name = "billiard" version = "4.2.3" @@ -551,29 +596,29 @@ wheels = [ [[package]] name = "boto3" -version = "1.42.83" +version = "1.42.88" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "botocore" }, { name = "jmespath" }, { name = "s3transfer" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9f/87/1ed88eaa1e814841a37e71fee74c2b74341d14b791c0c6038b7ba914bea1/boto3-1.42.83.tar.gz", hash = "sha256:cc5621e603982cb3145b7f6c9970e02e297a1a0eb94637cc7f7b69d3017640ee", size = 112719, upload-time = "2026-04-03T19:34:21.254Z" } +sdist = { url = "https://files.pythonhosted.org/packages/da/bb/7d4435cca6fccf235dd40c891c731bcb9078e815917b57ebadd1e0ffabaf/boto3-1.42.88.tar.gz", hash = "sha256:2d22c70de5726918676a06f1a03acfb4d5d9ea92fc759354800b67b22aaeef19", size = 113238, upload-time = "2026-04-10T19:41:06.912Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/c1/b1/8a066bc8f02937d49783c0b3948ab951d8284e6fde436cab9f359dbd4d93/boto3-1.42.83-py3-none-any.whl", hash = "sha256:544846fdb10585bb7837e409868e8e04c6b372fa04479ba1597ce82cf1242076", size = 140555, upload-time = "2026-04-03T19:34:17.935Z" }, + { url = "https://files.pythonhosted.org/packages/0a/2b/8bfddb39a19f5fbc16a869f1a394771e6223f07160dbc0ff6b38e05ea0ae/boto3-1.42.88-py3-none-any.whl", hash = "sha256:2d0f52c971503377e4370d2a83edee6f077ddb8e684366ff38df4f13581d9cfc", size = 140557, upload-time = "2026-04-10T19:41:05.309Z" }, ] [[package]] name = "boto3-stubs" -version = "1.42.83" +version = "1.42.88" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "botocore-stubs" }, { name = "types-s3transfer" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/2d/fe/6c43a048074d8567db38befe51bf0b770e8456aa2b91ce8fe6758f29ec3d/boto3_stubs-1.42.83.tar.gz", hash = "sha256:1ecbd88f4ae35764b9ea3579ca1e851b67ea0a73a442cb406de277fc1478daeb", size = 102188, upload-time = "2026-04-03T19:54:20.613Z" } +sdist = { url = "https://files.pythonhosted.org/packages/c2/c7/d4dfbb4757cd72fd350ba666902ec3ac19e04d6be639e96cdad4543d4726/boto3_stubs-1.42.88.tar.gz", hash = "sha256:85215fb4938a94d1cf83cd8632f46ae7728b5ec88187d83468f393bbe64236d6", size = 102495, upload-time = "2026-04-10T19:55:57.526Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/9c/4d/eee0444fd466ebe69fdb61cc1f24b97d8e21e9e545865f7c1d846294a413/boto3_stubs-1.42.83-py3-none-any.whl", hash = "sha256:06185ca5f11a1edc880286f5f33779a2b08be356bf270bf1ec128d0819782a20", size = 70448, upload-time = "2026-04-03T19:54:16.315Z" }, + { url = "https://files.pythonhosted.org/packages/b4/6f/3befd72080aedbb4ad26b353a6e364645668664930ce49668fd0bab8f2b5/boto3_stubs-1.42.88-py3-none-any.whl", hash = "sha256:9e74350715ca8ccd63fc250f8eca9fa3161b3d1704339554344d72e4e21c5ed1", size = 70603, upload-time = "2026-04-10T19:55:49.921Z" }, ] [package.optional-dependencies] @@ -583,16 +628,16 @@ bedrock-runtime = [ [[package]] name = "botocore" -version = "1.42.83" +version = "1.42.88" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "jmespath" }, { name = "python-dateutil" }, { name = "urllib3" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/4e/01/b46a3f8b6e9362258f78f1890db1a96d4ed73214d6a36420dc158dcfd221/botocore-1.42.83.tar.gz", hash = "sha256:34bc8cb64b17ac17f8901f073fe4fc9572a5cac9393a37b2b3ea372a83b87f4a", size = 15140337, upload-time = "2026-04-03T19:34:08.779Z" } +sdist = { url = "https://files.pythonhosted.org/packages/93/50/87966238f7aa3f7e5f87081185d5a407a95ede8b551e11bbe134ca3306dc/botocore-1.42.88.tar.gz", hash = "sha256:cbb59ee464662039b0c2c95a520cdf85b1e8ce00b72375ab9cd9f842cc001301", size = 15195331, upload-time = "2026-04-10T19:40:57.012Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a3/97/0d6f50822dc8c1df7f3eadb0bc6822fc0f98f02287c4efc7c7c88fde129a/botocore-1.42.83-py3-none-any.whl", hash = "sha256:ec0c3ecb3772936ed22a3bdda09883b34858933f71004686d460d829bab39d8e", size = 14818388, upload-time = "2026-04-03T19:34:03.333Z" }, + { url = "https://files.pythonhosted.org/packages/2a/46/ad14e41245adb8b0c83663ba13e822b68a0df08999dd250e75b0750fdf6c/botocore-1.42.88-py3-none-any.whl", hash = "sha256:032375b213305b6b81eedb269eaeefdf96f674620799bbf96117dca86052cc1a", size = 14876640, upload-time = "2026-04-10T19:40:53.663Z" }, ] [[package]] @@ -607,24 +652,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/57/b7/f4a051cefaf76930c77558b31646bcce7e9b3fbdcbc89e4073783e961519/botocore_stubs-1.41.3-py3-none-any.whl", hash = "sha256:6ab911bd9f7256f1dcea2e24a4af7ae0f9f07e83d0a760bba37f028f4a2e5589", size = 66749, upload-time = "2025-11-24T20:29:26.142Z" }, ] -[[package]] -name = "bottleneck" -version = "1.6.0" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "numpy" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/14/d8/6d641573e210768816023a64966d66463f2ce9fc9945fa03290c8a18f87c/bottleneck-1.6.0.tar.gz", hash = "sha256:028d46ee4b025ad9ab4d79924113816f825f62b17b87c9e1d0d8ce144a4a0e31", size = 104311, upload-time = "2025-09-08T16:30:38.617Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/8d/72/7e3593a2a3dd69ec831a9981a7b1443647acb66a5aec34c1620a5f7f8498/bottleneck-1.6.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3bb16a16a86a655fdbb34df672109a8a227bb5f9c9cf5bb8ae400a639bc52fa3", size = 100515, upload-time = "2025-09-08T16:29:55.141Z" }, - { url = "https://files.pythonhosted.org/packages/b5/d4/e7bbea08f4c0f0bab819d38c1a613da5f194fba7b19aae3e2b3a27e78886/bottleneck-1.6.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0fbf5d0787af9aee6cef4db9cdd14975ce24bd02e0cc30155a51411ebe2ff35f", size = 377451, upload-time = "2025-09-08T16:29:56.718Z" }, - { url = "https://files.pythonhosted.org/packages/fe/80/a6da430e3b1a12fd85f9fe90d3ad8fe9a527ecb046644c37b4b3f4baacfc/bottleneck-1.6.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d08966f4a22384862258940346a72087a6f7cebb19038fbf3a3f6690ee7fd39f", size = 368303, upload-time = "2025-09-08T16:29:57.834Z" }, - { url = "https://files.pythonhosted.org/packages/30/11/abd30a49f3251f4538430e5f876df96f2b39dabf49e05c5836820d2c31fe/bottleneck-1.6.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:604f0b898b43b7bc631c564630e936a8759d2d952641c8b02f71e31dbcd9deaa", size = 361232, upload-time = "2025-09-08T16:29:59.104Z" }, - { url = "https://files.pythonhosted.org/packages/1d/ac/1c0e09d8d92b9951f675bd42463ce76c3c3657b31c5bf53ca1f6dd9eccff/bottleneck-1.6.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:d33720bad761e642abc18eda5f188ff2841191c9f63f9d0c052245decc0faeb9", size = 373234, upload-time = "2025-09-08T16:30:00.488Z" }, - { url = "https://files.pythonhosted.org/packages/fb/ea/382c572ae3057ba885d484726bb63629d1f63abedf91c6cd23974eb35a9b/bottleneck-1.6.0-cp312-cp312-win32.whl", hash = "sha256:a1e5907ec2714efbe7075d9207b58c22ab6984a59102e4ecd78dced80dab8374", size = 108020, upload-time = "2025-09-08T16:30:01.773Z" }, - { url = "https://files.pythonhosted.org/packages/48/ad/d71da675eef85ac153eef5111ca0caa924548c9591da00939bcabba8de8e/bottleneck-1.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:81e3822499f057a917b7d3972ebc631ac63c6bbcc79ad3542a66c4c40634e3a6", size = 113493, upload-time = "2025-09-08T16:30:02.872Z" }, -] - [[package]] name = "brotli" version = "1.2.0" @@ -659,18 +686,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/07/6b/6e92009df3b8b7272f85a0992b306b61c34b7ea1c4776643746e61c380ac/brotlicffi-1.2.0.0-cp38-abi3-win_amd64.whl", hash = "sha256:f139a7cdfe4ae7859513067b736eb44d19fae1186f9e99370092f6915216451b", size = 378586, upload-time = "2025-11-21T18:17:50.531Z" }, ] -[[package]] -name = "bs4" -version = "0.0.2" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "beautifulsoup4" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/c9/aa/4acaf814ff901145da37332e05bb510452ebed97bc9602695059dd46ef39/bs4-0.0.2.tar.gz", hash = "sha256:a48685c58f50fe127722417bae83fe6badf500d54b55f7e39ffe43b798653925", size = 698, upload-time = "2024-01-17T18:15:47.371Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/51/bb/bf7aab772a159614954d84aa832c129624ba6c32faa559dfb200a534e50b/bs4-0.0.2-py2.py3-none-any.whl", hash = "sha256:abf8742c0805ef7f662dce4b51cca104cffe52b835238afc169142ab9b3fbccc", size = 1189, upload-time = "2024-01-17T18:15:48.613Z" }, -] - [[package]] name = "build" version = "1.3.0" @@ -687,11 +702,11 @@ wheels = [ [[package]] name = "cachetools" -version = "5.3.3" +version = "7.0.5" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/b3/4d/27a3e6dd09011649ad5210bdf963765bc8fa81a0827a4fc01bafd2705c5b/cachetools-5.3.3.tar.gz", hash = "sha256:ba29e2dfa0b8b556606f097407ed1aa62080ee108ab0dc5ec9d6a723a007d105", size = 26522, upload-time = "2024-02-26T20:33:23.386Z" } +sdist = { url = "https://files.pythonhosted.org/packages/af/dd/57fe3fdb6e65b25a5987fd2cdc7e22db0aef508b91634d2e57d22928d41b/cachetools-7.0.5.tar.gz", hash = "sha256:0cd042c24377200c1dcd225f8b7b12b0ca53cc2c961b43757e774ebe190fd990", size = 37367, upload-time = "2026-03-09T20:51:29.451Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/fb/2b/a64c2d25a37aeb921fddb929111413049fc5f8b9a4c1aefaffaafe768d54/cachetools-5.3.3-py3-none-any.whl", hash = "sha256:0abad1021d3f8325b2fc1d2e9c8b9c9d57b04c3932657a72465447332c24d945", size = 9325, upload-time = "2024-02-26T20:33:20.308Z" }, + { url = "https://files.pythonhosted.org/packages/06/f3/39cf3367b8107baa44f861dc802cbf16263c945b62d8265d36034fc07bea/cachetools-7.0.5-py3-none-any.whl", hash = "sha256:46bc8ebefbe485407621d0a4264b23c080cedd913921bad7ac3ed2f26c183114", size = 13918, upload-time = "2026-03-09T20:51:27.33Z" }, ] [[package]] @@ -705,7 +720,7 @@ wheels = [ [[package]] name = "celery" -version = "5.6.2" +version = "5.6.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "billiard" }, @@ -718,9 +733,9 @@ dependencies = [ { name = "tzlocal" }, { name = "vine" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/8f/9d/3d13596519cfa7207a6f9834f4b082554845eb3cd2684b5f8535d50c7c44/celery-5.6.2.tar.gz", hash = "sha256:4a8921c3fcf2ad76317d3b29020772103581ed2454c4c042cc55dcc43585009b", size = 1718802, upload-time = "2026-01-04T12:35:58.012Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e8/b4/a1233943ab5c8ea05fb877a88a0a0622bf47444b99e4991a8045ac37ea1d/celery-5.6.3.tar.gz", hash = "sha256:177006bd2054b882e9f01be59abd8529e88879ef50d7918a7050c5a9f4e12912", size = 1742243, upload-time = "2026-03-26T12:14:51.76Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/dd/bd/9ecd619e456ae4ba73b6583cc313f26152afae13e9a82ac4fe7f8856bfd1/celery-5.6.2-py3-none-any.whl", hash = "sha256:3ffafacbe056951b629c7abcf9064c4a2366de0bdfc9fdba421b97ebb68619a5", size = 445502, upload-time = "2026-01-04T12:35:55.894Z" }, + { url = "https://files.pythonhosted.org/packages/cf/c9/6eccdda96e098f7ae843162db2d3c149c6931a24fda69fe4ab84d0027eb5/celery-5.6.3-py3-none-any.whl", hash = "sha256:0808f42f80909c4d5833202360ffafb2a4f83f4d8e23e1285d926610e9a7afa6", size = 451235, upload-time = "2026-03-26T12:14:49.491Z" }, ] [[package]] @@ -778,27 +793,27 @@ wheels = [ [[package]] name = "charset-normalizer" -version = "3.4.4" +version = "3.4.7" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e7/a1/67fe25fac3c7642725500a3f6cfe5821ad557c3abb11c9d20d12c7008d3e/charset_normalizer-3.4.7.tar.gz", hash = "sha256:ae89db9e5f98a11a4bf50407d4363e7b09b31e55bc117b4f7d80aab97ba009e5", size = 144271, upload-time = "2026-04-02T09:28:39.342Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/f3/85/1637cd4af66fa687396e757dec650f28025f2a2f5a5531a3208dc0ec43f2/charset_normalizer-3.4.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0a98e6759f854bd25a58a73fa88833fba3b7c491169f86ce1180c948ab3fd394", size = 208425, upload-time = "2025-10-14T04:40:53.353Z" }, - { url = "https://files.pythonhosted.org/packages/9d/6a/04130023fef2a0d9c62d0bae2649b69f7b7d8d24ea5536feef50551029df/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b5b290ccc2a263e8d185130284f8501e3e36c5e02750fc6b6bdeb2e9e96f1e25", size = 148162, upload-time = "2025-10-14T04:40:54.558Z" }, - { url = "https://files.pythonhosted.org/packages/78/29/62328d79aa60da22c9e0b9a66539feae06ca0f5a4171ac4f7dc285b83688/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74bb723680f9f7a6234dcf67aea57e708ec1fbdf5699fb91dfd6f511b0a320ef", size = 144558, upload-time = "2025-10-14T04:40:55.677Z" }, - { url = "https://files.pythonhosted.org/packages/86/bb/b32194a4bf15b88403537c2e120b817c61cd4ecffa9b6876e941c3ee38fe/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f1e34719c6ed0b92f418c7c780480b26b5d9c50349e9a9af7d76bf757530350d", size = 161497, upload-time = "2025-10-14T04:40:57.217Z" }, - { url = "https://files.pythonhosted.org/packages/19/89/a54c82b253d5b9b111dc74aca196ba5ccfcca8242d0fb64146d4d3183ff1/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2437418e20515acec67d86e12bf70056a33abdacb5cb1655042f6538d6b085a8", size = 159240, upload-time = "2025-10-14T04:40:58.358Z" }, - { url = "https://files.pythonhosted.org/packages/c0/10/d20b513afe03acc89ec33948320a5544d31f21b05368436d580dec4e234d/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:11d694519d7f29d6cd09f6ac70028dba10f92f6cdd059096db198c283794ac86", size = 153471, upload-time = "2025-10-14T04:40:59.468Z" }, - { url = "https://files.pythonhosted.org/packages/61/fa/fbf177b55bdd727010f9c0a3c49eefa1d10f960e5f09d1d887bf93c2e698/charset_normalizer-3.4.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ac1c4a689edcc530fc9d9aa11f5774b9e2f33f9a0c6a57864e90908f5208d30a", size = 150864, upload-time = "2025-10-14T04:41:00.623Z" }, - { url = "https://files.pythonhosted.org/packages/05/12/9fbc6a4d39c0198adeebbde20b619790e9236557ca59fc40e0e3cebe6f40/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:21d142cc6c0ec30d2efee5068ca36c128a30b0f2c53c1c07bd78cb6bc1d3be5f", size = 150647, upload-time = "2025-10-14T04:41:01.754Z" }, - { url = "https://files.pythonhosted.org/packages/ad/1f/6a9a593d52e3e8c5d2b167daf8c6b968808efb57ef4c210acb907c365bc4/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:5dbe56a36425d26d6cfb40ce79c314a2e4dd6211d51d6d2191c00bed34f354cc", size = 145110, upload-time = "2025-10-14T04:41:03.231Z" }, - { url = "https://files.pythonhosted.org/packages/30/42/9a52c609e72471b0fc54386dc63c3781a387bb4fe61c20231a4ebcd58bdd/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5bfbb1b9acf3334612667b61bd3002196fe2a1eb4dd74d247e0f2a4d50ec9bbf", size = 162839, upload-time = "2025-10-14T04:41:04.715Z" }, - { url = "https://files.pythonhosted.org/packages/c4/5b/c0682bbf9f11597073052628ddd38344a3d673fda35a36773f7d19344b23/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:d055ec1e26e441f6187acf818b73564e6e6282709e9bcb5b63f5b23068356a15", size = 150667, upload-time = "2025-10-14T04:41:05.827Z" }, - { url = "https://files.pythonhosted.org/packages/e4/24/a41afeab6f990cf2daf6cb8c67419b63b48cf518e4f56022230840c9bfb2/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:af2d8c67d8e573d6de5bc30cdb27e9b95e49115cd9baad5ddbd1a6207aaa82a9", size = 160535, upload-time = "2025-10-14T04:41:06.938Z" }, - { url = "https://files.pythonhosted.org/packages/2a/e5/6a4ce77ed243c4a50a1fecca6aaaab419628c818a49434be428fe24c9957/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:780236ac706e66881f3b7f2f32dfe90507a09e67d1d454c762cf642e6e1586e0", size = 154816, upload-time = "2025-10-14T04:41:08.101Z" }, - { url = "https://files.pythonhosted.org/packages/a8/ef/89297262b8092b312d29cdb2517cb1237e51db8ecef2e9af5edbe7b683b1/charset_normalizer-3.4.4-cp312-cp312-win32.whl", hash = "sha256:5833d2c39d8896e4e19b689ffc198f08ea58116bee26dea51e362ecc7cd3ed26", size = 99694, upload-time = "2025-10-14T04:41:09.23Z" }, - { url = "https://files.pythonhosted.org/packages/3d/2d/1e5ed9dd3b3803994c155cd9aacb60c82c331bad84daf75bcb9c91b3295e/charset_normalizer-3.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:a79cfe37875f822425b89a82333404539ae63dbdddf97f84dcbc3d339aae9525", size = 107131, upload-time = "2025-10-14T04:41:10.467Z" }, - { url = "https://files.pythonhosted.org/packages/d0/d9/0ed4c7098a861482a7b6a95603edce4c0d9db2311af23da1fb2b75ec26fc/charset_normalizer-3.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:376bec83a63b8021bb5c8ea75e21c4ccb86e7e45ca4eb81146091b56599b80c3", size = 100390, upload-time = "2025-10-14T04:41:11.915Z" }, - { url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" }, + { url = "https://files.pythonhosted.org/packages/0c/eb/4fc8d0a7110eb5fc9cc161723a34a8a6c200ce3b4fbf681bc86feee22308/charset_normalizer-3.4.7-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:eca9705049ad3c7345d574e3510665cb2cf844c2f2dcfe675332677f081cbd46", size = 311328, upload-time = "2026-04-02T09:26:24.331Z" }, + { url = "https://files.pythonhosted.org/packages/f8/e3/0fadc706008ac9d7b9b5be6dc767c05f9d3e5df51744ce4cc9605de7b9f4/charset_normalizer-3.4.7-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6178f72c5508bfc5fd446a5905e698c6212932f25bcdd4b47a757a50605a90e2", size = 208061, upload-time = "2026-04-02T09:26:25.568Z" }, + { url = "https://files.pythonhosted.org/packages/42/f0/3dd1045c47f4a4604df85ec18ad093912ae1344ac706993aff91d38773a2/charset_normalizer-3.4.7-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:e1421b502d83040e6d7fb2fb18dff63957f720da3d77b2fbd3187ceb63755d7b", size = 229031, upload-time = "2026-04-02T09:26:26.865Z" }, + { url = "https://files.pythonhosted.org/packages/dc/67/675a46eb016118a2fbde5a277a5d15f4f69d5f3f5f338e5ee2f8948fcf43/charset_normalizer-3.4.7-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:edac0f1ab77644605be2cbba52e6b7f630731fc42b34cb0f634be1a6eface56a", size = 225239, upload-time = "2026-04-02T09:26:28.044Z" }, + { url = "https://files.pythonhosted.org/packages/4b/f8/d0118a2f5f23b02cd166fa385c60f9b0d4f9194f574e2b31cef350ad7223/charset_normalizer-3.4.7-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5649fd1c7bade02f320a462fdefd0b4bd3ce036065836d4f42e0de958038e116", size = 216589, upload-time = "2026-04-02T09:26:29.239Z" }, + { url = "https://files.pythonhosted.org/packages/b1/f1/6d2b0b261b6c4ceef0fcb0d17a01cc5bc53586c2d4796fa04b5c540bc13d/charset_normalizer-3.4.7-cp312-cp312-manylinux_2_31_armv7l.whl", hash = "sha256:203104ed3e428044fd943bc4bf45fa73c0730391f9621e37fe39ecf477b128cb", size = 202733, upload-time = "2026-04-02T09:26:30.5Z" }, + { url = "https://files.pythonhosted.org/packages/6f/c0/7b1f943f7e87cc3db9626ba17807d042c38645f0a1d4415c7a14afb5591f/charset_normalizer-3.4.7-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:298930cec56029e05497a76988377cbd7457ba864beeea92ad7e844fe74cd1f1", size = 212652, upload-time = "2026-04-02T09:26:31.709Z" }, + { url = "https://files.pythonhosted.org/packages/38/dd/5a9ab159fe45c6e72079398f277b7d2b523e7f716acc489726115a910097/charset_normalizer-3.4.7-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:708838739abf24b2ceb208d0e22403dd018faeef86ddac04319a62ae884c4f15", size = 211229, upload-time = "2026-04-02T09:26:33.282Z" }, + { url = "https://files.pythonhosted.org/packages/d5/ff/531a1cad5ca855d1c1a8b69cb71abfd6d85c0291580146fda7c82857caa1/charset_normalizer-3.4.7-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:0f7eb884681e3938906ed0434f20c63046eacd0111c4ba96f27b76084cd679f5", size = 203552, upload-time = "2026-04-02T09:26:34.845Z" }, + { url = "https://files.pythonhosted.org/packages/c1/4c/a5fb52d528a8ca41f7598cb619409ece30a169fbdf9cdce592e53b46c3a6/charset_normalizer-3.4.7-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:4dc1e73c36828f982bfe79fadf5919923f8a6f4df2860804db9a98c48824ce8d", size = 230806, upload-time = "2026-04-02T09:26:36.152Z" }, + { url = "https://files.pythonhosted.org/packages/59/7a/071feed8124111a32b316b33ae4de83d36923039ef8cf48120266844285b/charset_normalizer-3.4.7-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:aed52fea0513bac0ccde438c188c8a471c4e0f457c2dd20cdbf6ea7a450046c7", size = 212316, upload-time = "2026-04-02T09:26:37.672Z" }, + { url = "https://files.pythonhosted.org/packages/fd/35/f7dba3994312d7ba508e041eaac39a36b120f32d4c8662b8814dab876431/charset_normalizer-3.4.7-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:fea24543955a6a729c45a73fe90e08c743f0b3334bbf3201e6c4bc1b0c7fa464", size = 227274, upload-time = "2026-04-02T09:26:38.93Z" }, + { url = "https://files.pythonhosted.org/packages/8a/2d/a572df5c9204ab7688ec1edc895a73ebded3b023bb07364710b05dd1c9be/charset_normalizer-3.4.7-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:bb6d88045545b26da47aa879dd4a89a71d1dce0f0e549b1abcb31dfe4a8eac49", size = 218468, upload-time = "2026-04-02T09:26:40.17Z" }, + { url = "https://files.pythonhosted.org/packages/86/eb/890922a8b03a568ca2f336c36585a4713c55d4d67bf0f0c78924be6315ca/charset_normalizer-3.4.7-cp312-cp312-win32.whl", hash = "sha256:2257141f39fe65a3fdf38aeccae4b953e5f3b3324f4ff0daf9f15b8518666a2c", size = 148460, upload-time = "2026-04-02T09:26:41.416Z" }, + { url = "https://files.pythonhosted.org/packages/35/d9/0e7dffa06c5ab081f75b1b786f0aefc88365825dfcd0ac544bdb7b2b6853/charset_normalizer-3.4.7-cp312-cp312-win_amd64.whl", hash = "sha256:5ed6ab538499c8644b8a3e18debabcd7ce684f3fa91cf867521a7a0279cab2d6", size = 159330, upload-time = "2026-04-02T09:26:42.554Z" }, + { url = "https://files.pythonhosted.org/packages/9e/5d/481bcc2a7c88ea6b0878c299547843b2521ccbc40980cb406267088bc701/charset_normalizer-3.4.7-cp312-cp312-win_arm64.whl", hash = "sha256:56be790f86bfb2c98fb742ce566dfb4816e5a83384616ab59c49e0604d49c51d", size = 147828, upload-time = "2026-04-02T09:26:44.075Z" }, + { url = "https://files.pythonhosted.org/packages/db/8f/61959034484a4a7c527811f4721e75d02d653a35afb0b6054474d8185d4c/charset_normalizer-3.4.7-py3-none-any.whl", hash = "sha256:3dce51d0f5e7951f8bb4900c257dad282f49190fdbebecd4ba99bcc41fef404d", size = 61958, upload-time = "2026-04-02T09:28:37.794Z" }, ] [[package]] @@ -950,7 +965,7 @@ wheels = [ [[package]] name = "clickzetta-connector-python" -version = "0.8.106" +version = "0.8.104" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "future" }, @@ -964,7 +979,7 @@ dependencies = [ { name = "urllib3" }, ] wheels = [ - { url = "https://files.pythonhosted.org/packages/23/38/749c708619f402d4d582dfa73fbeb64ade77b1f250a93bd064d2a1aa3776/clickzetta_connector_python-0.8.106-py3-none-any.whl", hash = "sha256:120d6700051d97609dbd6655c002ab3bc260b7c8e67d39dfc7191e749563f7b4", size = 78121, upload-time = "2025-10-29T02:38:15.014Z" }, + { url = "https://files.pythonhosted.org/packages/8f/94/c7eee2224bdab39d16dfe5bb7687f5525c7ed345b7fe8812e18a2d9a6335/clickzetta_connector_python-0.8.104-py3-none-any.whl", hash = "sha256:ae3e466d990677f96c769ec1c29318237df80c80fe9c1e21ba1eaf42bdef0207", size = 79382, upload-time = "2025-09-10T08:46:39.731Z" }, ] [[package]] @@ -1115,54 +1130,53 @@ sdist = { url = "https://files.pythonhosted.org/packages/6b/b0/e595ce2a2527e169c [[package]] name = "croniter" -version = "6.0.0" +version = "6.2.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "python-dateutil" }, - { name = "pytz" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ad/2f/44d1ae153a0e27be56be43465e5cb39b9650c781e001e7864389deb25090/croniter-6.0.0.tar.gz", hash = "sha256:37c504b313956114a983ece2c2b07790b1f1094fe9d81cc94739214748255577", size = 64481, upload-time = "2024-12-17T17:17:47.32Z" } +sdist = { url = "https://files.pythonhosted.org/packages/df/de/5832661ed55107b8a09af3f0a2e71e0957226a59eb1dcf0a445cce6daf20/croniter-6.2.2.tar.gz", hash = "sha256:ba60832a5ec8e12e51b8691c3309a113d1cf6526bdf1a48150ce8ec7a532d0ab", size = 113762, upload-time = "2026-03-15T08:43:48.112Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/07/4b/290b4c3efd6417a8b0c284896de19b1d5855e6dbdb97d2a35e68fa42de85/croniter-6.0.0-py2.py3-none-any.whl", hash = "sha256:2f878c3856f17896979b2a4379ba1f09c83e374931ea15cc835c5dd2eee9b368", size = 25468, upload-time = "2024-12-17T17:17:45.359Z" }, + { url = "https://files.pythonhosted.org/packages/d0/39/783980e78cb92c2d7bdb1fc7dbc86e94ccc6d58224d76a7f1f51b6c51e30/croniter-6.2.2-py3-none-any.whl", hash = "sha256:a5d17b1060974d36251ea4faf388233eca8acf0d09cbd92d35f4c4ac8f279960", size = 45422, upload-time = "2026-03-15T08:43:46.626Z" }, ] [[package]] name = "cryptography" -version = "46.0.6" +version = "46.0.7" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cffi", marker = "platform_python_implementation != 'PyPy'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a4/ba/04b1bd4218cbc58dc90ce967106d51582371b898690f3ae0402876cc4f34/cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759", size = 750542, upload-time = "2026-03-25T23:34:53.396Z" } +sdist = { url = "https://files.pythonhosted.org/packages/47/93/ac8f3d5ff04d54bc814e961a43ae5b0b146154c89c61b47bb07557679b18/cryptography-46.0.7.tar.gz", hash = "sha256:e4cfd68c5f3e0bfdad0d38e023239b96a2fe84146481852dffbcca442c245aa5", size = 750652, upload-time = "2026-04-08T01:57:54.692Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/47/23/9285e15e3bc57325b0a72e592921983a701efc1ee8f91c06c5f0235d86d9/cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8", size = 7176401, upload-time = "2026-03-25T23:33:22.096Z" }, - { url = "https://files.pythonhosted.org/packages/60/f8/e61f8f13950ab6195b31913b42d39f0f9afc7d93f76710f299b5ec286ae6/cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30", size = 4275275, upload-time = "2026-03-25T23:33:23.844Z" }, - { url = "https://files.pythonhosted.org/packages/19/69/732a736d12c2631e140be2348b4ad3d226302df63ef64d30dfdb8db7ad1c/cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a", size = 4425320, upload-time = "2026-03-25T23:33:25.703Z" }, - { url = "https://files.pythonhosted.org/packages/d4/12/123be7292674abf76b21ac1fc0e1af50661f0e5b8f0ec8285faac18eb99e/cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175", size = 4278082, upload-time = "2026-03-25T23:33:27.423Z" }, - { url = "https://files.pythonhosted.org/packages/5b/ba/d5e27f8d68c24951b0a484924a84c7cdaed7502bac9f18601cd357f8b1d2/cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463", size = 4926514, upload-time = "2026-03-25T23:33:29.206Z" }, - { url = "https://files.pythonhosted.org/packages/34/71/1ea5a7352ae516d5512d17babe7e1b87d9db5150b21f794b1377eac1edc0/cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97", size = 4457766, upload-time = "2026-03-25T23:33:30.834Z" }, - { url = "https://files.pythonhosted.org/packages/01/59/562be1e653accee4fdad92c7a2e88fced26b3fdfce144047519bbebc299e/cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c", size = 3986535, upload-time = "2026-03-25T23:33:33.02Z" }, - { url = "https://files.pythonhosted.org/packages/d6/8b/b1ebfeb788bf4624d36e45ed2662b8bd43a05ff62157093c1539c1288a18/cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507", size = 4277618, upload-time = "2026-03-25T23:33:34.567Z" }, - { url = "https://files.pythonhosted.org/packages/dd/52/a005f8eabdb28df57c20f84c44d397a755782d6ff6d455f05baa2785bd91/cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19", size = 4890802, upload-time = "2026-03-25T23:33:37.034Z" }, - { url = "https://files.pythonhosted.org/packages/ec/4d/8e7d7245c79c617d08724e2efa397737715ca0ec830ecb3c91e547302555/cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738", size = 4457425, upload-time = "2026-03-25T23:33:38.904Z" }, - { url = "https://files.pythonhosted.org/packages/1d/5c/f6c3596a1430cec6f949085f0e1a970638d76f81c3ea56d93d564d04c340/cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c", size = 4405530, upload-time = "2026-03-25T23:33:40.842Z" }, - { url = "https://files.pythonhosted.org/packages/7e/c9/9f9cea13ee2dbde070424e0c4f621c091a91ffcc504ffea5e74f0e1daeff/cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f", size = 4667896, upload-time = "2026-03-25T23:33:42.781Z" }, - { url = "https://files.pythonhosted.org/packages/ad/b5/1895bc0821226f129bc74d00eccfc6a5969e2028f8617c09790bf89c185e/cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2", size = 3026348, upload-time = "2026-03-25T23:33:45.021Z" }, - { url = "https://files.pythonhosted.org/packages/c3/f8/c9bcbf0d3e6ad288b9d9aa0b1dee04b063d19e8c4f871855a03ab3a297ab/cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124", size = 3483896, upload-time = "2026-03-25T23:33:46.649Z" }, - { url = "https://files.pythonhosted.org/packages/c4/cc/f330e982852403da79008552de9906804568ae9230da8432f7496ce02b71/cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a", size = 7162776, upload-time = "2026-03-25T23:34:13.308Z" }, - { url = "https://files.pythonhosted.org/packages/49/b3/dc27efd8dcc4bff583b3f01d4a3943cd8b5821777a58b3a6a5f054d61b79/cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8", size = 4270529, upload-time = "2026-03-25T23:34:15.019Z" }, - { url = "https://files.pythonhosted.org/packages/e6/05/e8d0e6eb4f0d83365b3cb0e00eb3c484f7348db0266652ccd84632a3d58d/cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77", size = 4414827, upload-time = "2026-03-25T23:34:16.604Z" }, - { url = "https://files.pythonhosted.org/packages/2f/97/daba0f5d2dc6d855e2dcb70733c812558a7977a55dd4a6722756628c44d1/cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290", size = 4271265, upload-time = "2026-03-25T23:34:18.586Z" }, - { url = "https://files.pythonhosted.org/packages/89/06/fe1fce39a37ac452e58d04b43b0855261dac320a2ebf8f5260dd55b201a9/cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410", size = 4916800, upload-time = "2026-03-25T23:34:20.561Z" }, - { url = "https://files.pythonhosted.org/packages/ff/8a/b14f3101fe9c3592603339eb5d94046c3ce5f7fc76d6512a2d40efd9724e/cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d", size = 4448771, upload-time = "2026-03-25T23:34:22.406Z" }, - { url = "https://files.pythonhosted.org/packages/01/b3/0796998056a66d1973fd52ee89dc1bb3b6581960a91ad4ac705f182d398f/cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70", size = 3978333, upload-time = "2026-03-25T23:34:24.281Z" }, - { url = "https://files.pythonhosted.org/packages/c5/3d/db200af5a4ffd08918cd55c08399dc6c9c50b0bc72c00a3246e099d3a849/cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d", size = 4271069, upload-time = "2026-03-25T23:34:25.895Z" }, - { url = "https://files.pythonhosted.org/packages/d7/18/61acfd5b414309d74ee838be321c636fe71815436f53c9f0334bf19064fa/cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa", size = 4878358, upload-time = "2026-03-25T23:34:27.67Z" }, - { url = "https://files.pythonhosted.org/packages/8b/65/5bf43286d566f8171917cae23ac6add941654ccf085d739195a4eacf1674/cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58", size = 4448061, upload-time = "2026-03-25T23:34:29.375Z" }, - { url = "https://files.pythonhosted.org/packages/e0/25/7e49c0fa7205cf3597e525d156a6bce5b5c9de1fd7e8cb01120e459f205a/cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb", size = 4399103, upload-time = "2026-03-25T23:34:32.036Z" }, - { url = "https://files.pythonhosted.org/packages/44/46/466269e833f1c4718d6cd496ffe20c56c9c8d013486ff66b4f69c302a68d/cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72", size = 4659255, upload-time = "2026-03-25T23:34:33.679Z" }, - { url = "https://files.pythonhosted.org/packages/0a/09/ddc5f630cc32287d2c953fc5d32705e63ec73e37308e5120955316f53827/cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c", size = 3010660, upload-time = "2026-03-25T23:34:35.418Z" }, - { url = "https://files.pythonhosted.org/packages/1b/82/ca4893968aeb2709aacfb57a30dec6fa2ab25b10fa9f064b8882ce33f599/cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f", size = 3471160, upload-time = "2026-03-25T23:34:37.191Z" }, + { url = "https://files.pythonhosted.org/packages/0b/5d/4a8f770695d73be252331e60e526291e3df0c9b27556a90a6b47bccca4c2/cryptography-46.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:ea42cbe97209df307fdc3b155f1b6fa2577c0defa8f1f7d3be7d31d189108ad4", size = 7179869, upload-time = "2026-04-08T01:56:17.157Z" }, + { url = "https://files.pythonhosted.org/packages/5f/45/6d80dc379b0bbc1f9d1e429f42e4cb9e1d319c7a8201beffd967c516ea01/cryptography-46.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b36a4695e29fe69215d75960b22577197aca3f7a25b9cf9d165dcfe9d80bc325", size = 4275492, upload-time = "2026-04-08T01:56:19.36Z" }, + { url = "https://files.pythonhosted.org/packages/4a/9a/1765afe9f572e239c3469f2cb429f3ba7b31878c893b246b4b2994ffe2fe/cryptography-46.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ad9ef796328c5e3c4ceed237a183f5d41d21150f972455a9d926593a1dcb308", size = 4426670, upload-time = "2026-04-08T01:56:21.415Z" }, + { url = "https://files.pythonhosted.org/packages/8f/3e/af9246aaf23cd4ee060699adab1e47ced3f5f7e7a8ffdd339f817b446462/cryptography-46.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:73510b83623e080a2c35c62c15298096e2a5dc8d51c3b4e1740211839d0dea77", size = 4280275, upload-time = "2026-04-08T01:56:23.539Z" }, + { url = "https://files.pythonhosted.org/packages/0f/54/6bbbfc5efe86f9d71041827b793c24811a017c6ac0fd12883e4caa86b8ed/cryptography-46.0.7-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cbd5fb06b62bd0721e1170273d3f4d5a277044c47ca27ee257025146c34cbdd1", size = 4928402, upload-time = "2026-04-08T01:56:25.624Z" }, + { url = "https://files.pythonhosted.org/packages/2d/cf/054b9d8220f81509939599c8bdbc0c408dbd2bdd41688616a20731371fe0/cryptography-46.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:420b1e4109cc95f0e5700eed79908cef9268265c773d3a66f7af1eef53d409ef", size = 4459985, upload-time = "2026-04-08T01:56:27.309Z" }, + { url = "https://files.pythonhosted.org/packages/f9/46/4e4e9c6040fb01c7467d47217d2f882daddeb8828f7df800cb806d8a2288/cryptography-46.0.7-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:24402210aa54baae71d99441d15bb5a1919c195398a87b563df84468160a65de", size = 3990652, upload-time = "2026-04-08T01:56:29.095Z" }, + { url = "https://files.pythonhosted.org/packages/36/5f/313586c3be5a2fbe87e4c9a254207b860155a8e1f3cca99f9910008e7d08/cryptography-46.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8a469028a86f12eb7d2fe97162d0634026d92a21f3ae0ac87ed1c4a447886c83", size = 4279805, upload-time = "2026-04-08T01:56:30.928Z" }, + { url = "https://files.pythonhosted.org/packages/69/33/60dfc4595f334a2082749673386a4d05e4f0cf4df8248e63b2c3437585f2/cryptography-46.0.7-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:9694078c5d44c157ef3162e3bf3946510b857df5a3955458381d1c7cfc143ddb", size = 4892883, upload-time = "2026-04-08T01:56:32.614Z" }, + { url = "https://files.pythonhosted.org/packages/c7/0b/333ddab4270c4f5b972f980adef4faa66951a4aaf646ca067af597f15563/cryptography-46.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:42a1e5f98abb6391717978baf9f90dc28a743b7d9be7f0751a6f56a75d14065b", size = 4459756, upload-time = "2026-04-08T01:56:34.306Z" }, + { url = "https://files.pythonhosted.org/packages/d2/14/633913398b43b75f1234834170947957c6b623d1701ffc7a9600da907e89/cryptography-46.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91bbcb08347344f810cbe49065914fe048949648f6bd5c2519f34619142bbe85", size = 4410244, upload-time = "2026-04-08T01:56:35.977Z" }, + { url = "https://files.pythonhosted.org/packages/10/f2/19ceb3b3dc14009373432af0c13f46aa08e3ce334ec6eff13492e1812ccd/cryptography-46.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5d1c02a14ceb9148cc7816249f64f623fbfee39e8c03b3650d842ad3f34d637e", size = 4674868, upload-time = "2026-04-08T01:56:38.034Z" }, + { url = "https://files.pythonhosted.org/packages/1a/bb/a5c213c19ee94b15dfccc48f363738633a493812687f5567addbcbba9f6f/cryptography-46.0.7-cp311-abi3-win32.whl", hash = "sha256:d23c8ca48e44ee015cd0a54aeccdf9f09004eba9fc96f38c911011d9ff1bd457", size = 3026504, upload-time = "2026-04-08T01:56:39.666Z" }, + { url = "https://files.pythonhosted.org/packages/2b/02/7788f9fefa1d060ca68717c3901ae7fffa21ee087a90b7f23c7a603c32ae/cryptography-46.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:397655da831414d165029da9bc483bed2fe0e75dde6a1523ec2fe63f3c46046b", size = 3488363, upload-time = "2026-04-08T01:56:41.893Z" }, + { url = "https://files.pythonhosted.org/packages/a7/7f/cd42fc3614386bc0c12f0cb3c4ae1fc2bbca5c9662dfed031514911d513d/cryptography-46.0.7-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:462ad5cb1c148a22b2e3bcc5ad52504dff325d17daf5df8d88c17dda1f75f2a4", size = 7165618, upload-time = "2026-04-08T01:57:10.645Z" }, + { url = "https://files.pythonhosted.org/packages/a5/d0/36a49f0262d2319139d2829f773f1b97ef8aef7f97e6e5bd21455e5a8fb5/cryptography-46.0.7-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:84d4cced91f0f159a7ddacad249cc077e63195c36aac40b4150e7a57e84fffe7", size = 4270628, upload-time = "2026-04-08T01:57:12.885Z" }, + { url = "https://files.pythonhosted.org/packages/8a/6c/1a42450f464dda6ffbe578a911f773e54dd48c10f9895a23a7e88b3e7db5/cryptography-46.0.7-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:128c5edfe5e5938b86b03941e94fac9ee793a94452ad1365c9fc3f4f62216832", size = 4415405, upload-time = "2026-04-08T01:57:14.923Z" }, + { url = "https://files.pythonhosted.org/packages/9a/92/4ed714dbe93a066dc1f4b4581a464d2d7dbec9046f7c8b7016f5286329e2/cryptography-46.0.7-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5e51be372b26ef4ba3de3c167cd3d1022934bc838ae9eaad7e644986d2a3d163", size = 4272715, upload-time = "2026-04-08T01:57:16.638Z" }, + { url = "https://files.pythonhosted.org/packages/b7/e6/a26b84096eddd51494bba19111f8fffe976f6a09f132706f8f1bf03f51f7/cryptography-46.0.7-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cdf1a610ef82abb396451862739e3fc93b071c844399e15b90726ef7470eeaf2", size = 4918400, upload-time = "2026-04-08T01:57:19.021Z" }, + { url = "https://files.pythonhosted.org/packages/c7/08/ffd537b605568a148543ac3c2b239708ae0bd635064bab41359252ef88ed/cryptography-46.0.7-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1d25aee46d0c6f1a501adcddb2d2fee4b979381346a78558ed13e50aa8a59067", size = 4450634, upload-time = "2026-04-08T01:57:21.185Z" }, + { url = "https://files.pythonhosted.org/packages/16/01/0cd51dd86ab5b9befe0d031e276510491976c3a80e9f6e31810cce46c4ad/cryptography-46.0.7-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:cdfbe22376065ffcf8be74dc9a909f032df19bc58a699456a21712d6e5eabfd0", size = 3985233, upload-time = "2026-04-08T01:57:22.862Z" }, + { url = "https://files.pythonhosted.org/packages/92/49/819d6ed3a7d9349c2939f81b500a738cb733ab62fbecdbc1e38e83d45e12/cryptography-46.0.7-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:abad9dac36cbf55de6eb49badd4016806b3165d396f64925bf2999bcb67837ba", size = 4271955, upload-time = "2026-04-08T01:57:24.814Z" }, + { url = "https://files.pythonhosted.org/packages/80/07/ad9b3c56ebb95ed2473d46df0847357e01583f4c52a85754d1a55e29e4d0/cryptography-46.0.7-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:935ce7e3cfdb53e3536119a542b839bb94ec1ad081013e9ab9b7cfd478b05006", size = 4879888, upload-time = "2026-04-08T01:57:26.88Z" }, + { url = "https://files.pythonhosted.org/packages/b8/c7/201d3d58f30c4c2bdbe9b03844c291feb77c20511cc3586daf7edc12a47b/cryptography-46.0.7-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:35719dc79d4730d30f1c2b6474bd6acda36ae2dfae1e3c16f2051f215df33ce0", size = 4449961, upload-time = "2026-04-08T01:57:29.068Z" }, + { url = "https://files.pythonhosted.org/packages/a5/ef/649750cbf96f3033c3c976e112265c33906f8e462291a33d77f90356548c/cryptography-46.0.7-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:7bbc6ccf49d05ac8f7d7b5e2e2c33830d4fe2061def88210a126d130d7f71a85", size = 4401696, upload-time = "2026-04-08T01:57:31.029Z" }, + { url = "https://files.pythonhosted.org/packages/41/52/a8908dcb1a389a459a29008c29966c1d552588d4ae6d43f3a1a4512e0ebe/cryptography-46.0.7-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a1529d614f44b863a7b480c6d000fe93b59acee9c82ffa027cfadc77521a9f5e", size = 4664256, upload-time = "2026-04-08T01:57:33.144Z" }, + { url = "https://files.pythonhosted.org/packages/4b/fa/f0ab06238e899cc3fb332623f337a7364f36f4bb3f2534c2bb95a35b132c/cryptography-46.0.7-cp38-abi3-win32.whl", hash = "sha256:f247c8c1a1fb45e12586afbb436ef21ff1e80670b2861a90353d9b025583d246", size = 3013001, upload-time = "2026-04-08T01:57:34.933Z" }, + { url = "https://files.pythonhosted.org/packages/d2/f1/00ce3bde3ca542d1acd8f8cfa38e446840945aa6363f9b74746394b14127/cryptography-46.0.7-cp38-abi3-win_amd64.whl", hash = "sha256:506c4ff91eff4f82bdac7633318a526b1d1309fc07ca76a3ad182cb5b686d6d3", size = 3472985, upload-time = "2026-04-08T01:57:36.714Z" }, ] [[package]] @@ -1271,92 +1285,49 @@ version = "1.13.3" source = { virtual = "." } dependencies = [ { name = "aliyun-log-python-sdk" }, - { name = "apscheduler" }, { name = "arize-phoenix-otel" }, { name = "azure-identity" }, - { name = "beautifulsoup4" }, { name = "bleach" }, { name = "boto3" }, - { name = "bs4" }, - { name = "cachetools" }, { name = "celery" }, - { name = "charset-normalizer" }, { name = "croniter" }, { name = "fastopenapi", extra = ["flask"] }, - { name = "flask" }, { name = "flask-compress" }, { name = "flask-cors" }, { name = "flask-login" }, { name = "flask-migrate" }, { name = "flask-orjson" }, { name = "flask-restx" }, - { name = "flask-sqlalchemy" }, { name = "gevent" }, + { name = "gevent-websocket" }, { name = "gmpy2" }, - { name = "google-api-core" }, { name = "google-api-python-client" }, - { name = "google-auth" }, - { name = "google-auth-httplib2" }, { name = "google-cloud-aiplatform" }, - { name = "googleapis-common-protos" }, { name = "graphon" }, { name = "gunicorn" }, { name = "httpx", extra = ["socks"] }, { name = "httpx-sse" }, - { name = "jieba" }, { name = "json-repair" }, { name = "langfuse" }, { name = "langsmith" }, - { name = "litellm" }, - { name = "markdown" }, { name = "mlflow-skinny" }, - { name = "numpy" }, - { name = "openpyxl" }, - { name = "opentelemetry-api" }, { name = "opentelemetry-distro" }, - { name = "opentelemetry-exporter-otlp" }, - { name = "opentelemetry-exporter-otlp-proto-common" }, - { name = "opentelemetry-exporter-otlp-proto-grpc" }, - { name = "opentelemetry-exporter-otlp-proto-http" }, - { name = "opentelemetry-instrumentation" }, { name = "opentelemetry-instrumentation-celery" }, { name = "opentelemetry-instrumentation-flask" }, { name = "opentelemetry-instrumentation-httpx" }, { name = "opentelemetry-instrumentation-redis" }, { name = "opentelemetry-instrumentation-sqlalchemy" }, { name = "opentelemetry-propagator-b3" }, - { name = "opentelemetry-proto" }, - { name = "opentelemetry-sdk" }, - { name = "opentelemetry-semantic-conventions" }, - { name = "opentelemetry-util-http" }, { name = "opik" }, - { name = "packaging" }, - { name = "pandas", extra = ["excel", "output-formatting", "performance"] }, { name = "psycogreen" }, { name = "psycopg2-binary" }, - { name = "pycryptodome" }, - { name = "pydantic" }, - { name = "pydantic-settings" }, - { name = "pyjwt" }, - { name = "pypandoc" }, - { name = "pypdfium2" }, - { name = "python-docx" }, - { name = "python-dotenv" }, - { name = "pyyaml" }, + { name = "python-socketio" }, { name = "readabilipy" }, { name = "redis", extra = ["hiredis"] }, { name = "resend" }, { name = "sendgrid" }, - { name = "sentry-sdk", extra = ["flask"] }, - { name = "sqlalchemy" }, { name = "sseclient-py" }, - { name = "starlette" }, - { name = "tiktoken" }, - { name = "transformers" }, - { name = "unstructured", extra = ["docx", "epub", "md", "ppt", "pptx"] }, { name = "weave" }, - { name = "weaviate-client" }, - { name = "yarl" }, ] [package.dev-dependencies] @@ -1382,7 +1353,6 @@ dev = [ { name = "pytest-xdist" }, { name = "ruff" }, { name = "scipy-stubs" }, - { name = "sseclient-py" }, { name = "testcontainers" }, { name = "types-aiofiles" }, { name = "types-beautifulsoup4" }, @@ -1423,6 +1393,7 @@ dev = [ { name = "types-tensorflow" }, { name = "types-tqdm" }, { name = "types-ujson" }, + { name = "xinference-client" }, ] storage = [ { name = "azure-storage-blob" }, @@ -1439,242 +1410,663 @@ tools = [ { name = "cloudscraper" }, { name = "nltk" }, ] -vdb = [ - { name = "alibabacloud-gpdb20160503" }, - { name = "alibabacloud-tea-openapi" }, - { name = "chromadb" }, - { name = "clickhouse-connect" }, - { name = "clickzetta-connector-python" }, - { name = "couchbase" }, - { name = "elasticsearch" }, - { name = "holo-search-sdk" }, - { name = "intersystems-irispython" }, - { name = "mo-vector" }, - { name = "mysql-connector-python" }, - { name = "opensearch-py" }, - { name = "oracledb" }, - { name = "pgvecto-rs", extra = ["sqlalchemy"] }, - { name = "pgvector" }, - { name = "pymilvus" }, - { name = "pymochow" }, - { name = "pyobvector" }, - { name = "qdrant-client" }, - { name = "tablestore" }, - { name = "tcvectordb" }, - { name = "tidb-vector" }, - { name = "upstash-vector" }, - { name = "volcengine-compat" }, - { name = "weaviate-client" }, +vdb-alibabacloud-mysql = [ + { name = "dify-vdb-alibabacloud-mysql" }, +] +vdb-all = [ + { name = "dify-vdb-alibabacloud-mysql" }, + { name = "dify-vdb-analyticdb" }, + { name = "dify-vdb-baidu" }, + { name = "dify-vdb-chroma" }, + { name = "dify-vdb-clickzetta" }, + { name = "dify-vdb-couchbase" }, + { name = "dify-vdb-elasticsearch" }, + { name = "dify-vdb-hologres" }, + { name = "dify-vdb-huawei-cloud" }, + { name = "dify-vdb-iris" }, + { name = "dify-vdb-lindorm" }, + { name = "dify-vdb-matrixone" }, + { name = "dify-vdb-milvus" }, + { name = "dify-vdb-myscale" }, + { name = "dify-vdb-oceanbase" }, + { name = "dify-vdb-opengauss" }, + { name = "dify-vdb-opensearch" }, + { name = "dify-vdb-oracle" }, + { name = "dify-vdb-pgvecto-rs" }, + { name = "dify-vdb-pgvector" }, + { name = "dify-vdb-qdrant" }, + { name = "dify-vdb-relyt" }, + { name = "dify-vdb-tablestore" }, + { name = "dify-vdb-tencent" }, + { name = "dify-vdb-tidb-on-qdrant" }, + { name = "dify-vdb-tidb-vector" }, + { name = "dify-vdb-upstash" }, + { name = "dify-vdb-vastbase" }, + { name = "dify-vdb-vikingdb" }, + { name = "dify-vdb-weaviate" }, +] +vdb-analyticdb = [ + { name = "dify-vdb-analyticdb" }, +] +vdb-baidu = [ + { name = "dify-vdb-baidu" }, +] +vdb-chroma = [ + { name = "dify-vdb-chroma" }, +] +vdb-clickzetta = [ + { name = "dify-vdb-clickzetta" }, +] +vdb-couchbase = [ + { name = "dify-vdb-couchbase" }, +] +vdb-elasticsearch = [ + { name = "dify-vdb-elasticsearch" }, +] +vdb-hologres = [ + { name = "dify-vdb-hologres" }, +] +vdb-huawei-cloud = [ + { name = "dify-vdb-huawei-cloud" }, +] +vdb-iris = [ + { name = "dify-vdb-iris" }, +] +vdb-lindorm = [ + { name = "dify-vdb-lindorm" }, +] +vdb-matrixone = [ + { name = "dify-vdb-matrixone" }, +] +vdb-milvus = [ + { name = "dify-vdb-milvus" }, +] +vdb-myscale = [ + { name = "dify-vdb-myscale" }, +] +vdb-oceanbase = [ + { name = "dify-vdb-oceanbase" }, +] +vdb-opengauss = [ + { name = "dify-vdb-opengauss" }, +] +vdb-opensearch = [ + { name = "dify-vdb-opensearch" }, +] +vdb-oracle = [ + { name = "dify-vdb-oracle" }, +] +vdb-pgvecto-rs = [ + { name = "dify-vdb-pgvecto-rs" }, +] +vdb-pgvector = [ + { name = "dify-vdb-pgvector" }, +] +vdb-qdrant = [ + { name = "dify-vdb-qdrant" }, +] +vdb-relyt = [ + { name = "dify-vdb-relyt" }, +] +vdb-tablestore = [ + { name = "dify-vdb-tablestore" }, +] +vdb-tencent = [ + { name = "dify-vdb-tencent" }, +] +vdb-tidb-on-qdrant = [ + { name = "dify-vdb-tidb-on-qdrant" }, +] +vdb-tidb-vector = [ + { name = "dify-vdb-tidb-vector" }, +] +vdb-upstash = [ + { name = "dify-vdb-upstash" }, +] +vdb-vastbase = [ + { name = "dify-vdb-vastbase" }, +] +vdb-vikingdb = [ + { name = "dify-vdb-vikingdb" }, +] +vdb-weaviate = [ + { name = "dify-vdb-weaviate" }, +] +vdb-xinference = [ { name = "xinference-client" }, ] [package.metadata] requires-dist = [ - { name = "aliyun-log-python-sdk", specifier = "~=0.9.37" }, - { name = "apscheduler", specifier = ">=3.11.0" }, + { name = "aliyun-log-python-sdk", specifier = ">=0.9.44,<1.0.0" }, { name = "arize-phoenix-otel", specifier = "~=0.15.0" }, - { name = "azure-identity", specifier = "==1.25.3" }, - { name = "beautifulsoup4", specifier = "==4.14.3" }, - { name = "bleach", specifier = "~=6.3.0" }, - { name = "boto3", specifier = "==1.42.83" }, - { name = "bs4", specifier = "~=0.0.1" }, - { name = "cachetools", specifier = "~=5.3.0" }, - { name = "celery", specifier = "~=5.6.2" }, - { name = "charset-normalizer", specifier = ">=3.4.4" }, - { name = "croniter", specifier = ">=6.0.0" }, - { name = "fastopenapi", extras = ["flask"], specifier = ">=0.7.0" }, - { name = "flask", specifier = "~=3.1.2" }, - { name = "flask-compress", specifier = ">=1.17,<1.25" }, - { name = "flask-cors", specifier = "~=6.0.0" }, - { name = "flask-login", specifier = "~=0.6.3" }, - { name = "flask-migrate", specifier = "~=4.1.0" }, - { name = "flask-orjson", specifier = "~=2.0.0" }, - { name = "flask-restx", specifier = "~=1.3.2" }, - { name = "flask-sqlalchemy", specifier = "~=3.1.1" }, - { name = "gevent", specifier = "~=25.9.1" }, - { name = "gmpy2", specifier = "~=2.3.0" }, - { name = "google-api-core", specifier = ">=2.19.1" }, - { name = "google-api-python-client", specifier = "==2.193.0" }, - { name = "google-auth", specifier = ">=2.47.0" }, - { name = "google-auth-httplib2", specifier = "==0.3.1" }, - { name = "google-cloud-aiplatform", specifier = ">=1.123.0" }, - { name = "googleapis-common-protos", specifier = ">=1.65.0" }, - { name = "graphon", specifier = ">=0.1.2" }, - { name = "gunicorn", specifier = "~=25.3.0" }, - { name = "httpx", extras = ["socks"], specifier = "~=0.28.0" }, + { name = "azure-identity", specifier = ">=1.25.3,<2.0.0" }, + { name = "bleach", specifier = ">=6.3.0" }, + { name = "boto3", specifier = ">=1.42.88" }, + { name = "celery", specifier = ">=5.6.3" }, + { name = "croniter", specifier = ">=6.2.2" }, + { name = "fastopenapi", extras = ["flask"], specifier = "~=0.7.0" }, + { name = "flask-compress", specifier = ">=1.24,<2.0.0" }, + { name = "flask-cors", specifier = ">=6.0.2" }, + { name = "flask-login", specifier = ">=0.6.3,<1.0.0" }, + { name = "flask-migrate", specifier = ">=4.1.0,<5.0.0" }, + { name = "flask-orjson", specifier = ">=2.0.0,<3.0.0" }, + { name = "flask-restx", specifier = ">=1.3.2,<2.0.0" }, + { name = "gevent", specifier = ">=26.4.0" }, + { name = "gevent-websocket", specifier = ">=0.10.1" }, + { name = "gmpy2", specifier = ">=2.3.0" }, + { name = "google-api-python-client", specifier = ">=2.194.0" }, + { name = "google-cloud-aiplatform", specifier = ">=1.147.0,<2.0.0" }, + { name = "graphon", specifier = "~=0.1.2" }, + { name = "gunicorn", specifier = ">=25.3.0" }, + { name = "httpx", extras = ["socks"], specifier = ">=0.28.1,<1.0.0" }, { name = "httpx-sse", specifier = "~=0.4.0" }, - { name = "jieba", specifier = "==0.42.1" }, - { name = "json-repair", specifier = ">=0.55.1" }, - { name = "langfuse", specifier = ">=3.0.0,<5.0.0" }, - { name = "langsmith", specifier = "~=0.7.16" }, - { name = "litellm", specifier = "==1.82.6" }, - { name = "markdown", specifier = "~=3.10.2" }, - { name = "mlflow-skinny", specifier = ">=3.0.0" }, - { name = "numpy", specifier = "~=1.26.4" }, - { name = "openpyxl", specifier = "~=3.1.5" }, - { name = "opentelemetry-api", specifier = "==1.40.0" }, - { name = "opentelemetry-distro", specifier = "==0.61b0" }, - { name = "opentelemetry-exporter-otlp", specifier = "==1.40.0" }, - { name = "opentelemetry-exporter-otlp-proto-common", specifier = "==1.40.0" }, - { name = "opentelemetry-exporter-otlp-proto-grpc", specifier = "==1.40.0" }, - { name = "opentelemetry-exporter-otlp-proto-http", specifier = "==1.40.0" }, - { name = "opentelemetry-instrumentation", specifier = "==0.61b0" }, - { name = "opentelemetry-instrumentation-celery", specifier = "==0.61b0" }, - { name = "opentelemetry-instrumentation-flask", specifier = "==0.61b0" }, - { name = "opentelemetry-instrumentation-httpx", specifier = "==0.61b0" }, - { name = "opentelemetry-instrumentation-redis", specifier = "==0.61b0" }, - { name = "opentelemetry-instrumentation-sqlalchemy", specifier = "==0.61b0" }, - { name = "opentelemetry-propagator-b3", specifier = "==1.40.0" }, - { name = "opentelemetry-proto", specifier = "==1.40.0" }, - { name = "opentelemetry-sdk", specifier = "==1.40.0" }, - { name = "opentelemetry-semantic-conventions", specifier = "==0.61b0" }, - { name = "opentelemetry-util-http", specifier = "==0.61b0" }, - { name = "opik", specifier = "~=1.10.37" }, - { name = "packaging", specifier = "~=23.2" }, - { name = "pandas", extras = ["excel", "output-formatting", "performance"], specifier = "~=3.0.1" }, - { name = "psycogreen", specifier = "~=1.0.2" }, - { name = "psycopg2-binary", specifier = "~=2.9.6" }, - { name = "pycryptodome", specifier = "==3.23.0" }, - { name = "pydantic", specifier = "~=2.12.5" }, - { name = "pydantic-settings", specifier = "~=2.13.1" }, - { name = "pyjwt", specifier = "~=2.12.0" }, - { name = "pypandoc", specifier = "~=1.13" }, - { name = "pypdfium2", specifier = "==5.6.0" }, - { name = "python-docx", specifier = "~=1.2.0" }, - { name = "python-dotenv", specifier = "==1.2.2" }, - { name = "pyyaml", specifier = "~=6.0.1" }, - { name = "readabilipy", specifier = "~=0.3.0" }, - { name = "redis", extras = ["hiredis"], specifier = "~=7.4.0" }, - { name = "resend", specifier = "~=2.26.0" }, - { name = "sendgrid", specifier = "~=6.12.3" }, - { name = "sentry-sdk", extras = ["flask"], specifier = "~=2.55.0" }, - { name = "sqlalchemy", specifier = "~=2.0.29" }, - { name = "sseclient-py", specifier = "~=1.9.0" }, - { name = "starlette", specifier = "==1.0.0" }, - { name = "tiktoken", specifier = "~=0.12.0" }, - { name = "transformers", specifier = "~=5.3.0" }, - { name = "unstructured", extras = ["docx", "epub", "md", "ppt", "pptx"], specifier = "~=0.21.5" }, - { name = "weave", specifier = ">=0.52.16" }, - { name = "weaviate-client", specifier = "==4.20.4" }, - { name = "yarl", specifier = "~=1.23.0" }, + { name = "json-repair", specifier = "~=0.59.2" }, + { name = "langfuse", specifier = ">=4.2.0,<5.0.0" }, + { name = "langsmith", specifier = ">=0.7.31,<1.0.0" }, + { name = "mlflow-skinny", specifier = ">=3.11.1,<4.0.0" }, + { name = "opentelemetry-distro", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-celery", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-flask", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-httpx", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-redis", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-sqlalchemy", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-propagator-b3", specifier = ">=1.41.0,<2.0.0" }, + { name = "opik", specifier = "~=1.11.2" }, + { name = "psycogreen", specifier = ">=1.0.2" }, + { name = "psycopg2-binary", specifier = ">=2.9.11" }, + { name = "python-socketio", specifier = ">=5.13.0" }, + { name = "readabilipy", specifier = ">=0.3.0,<1.0.0" }, + { name = "redis", extras = ["hiredis"], specifier = ">=7.4.0" }, + { name = "resend", specifier = ">=2.27.0,<3.0.0" }, + { name = "sendgrid", specifier = ">=6.12.5" }, + { name = "sseclient-py", specifier = ">=1.8.0" }, + { name = "weave", specifier = ">=0.52.36,<1.0.0" }, ] [package.metadata.requires-dev] dev = [ - { name = "basedpyright", specifier = "~=1.39.0" }, - { name = "boto3-stubs", specifier = ">=1.38.20" }, + { name = "basedpyright", specifier = ">=1.39.0" }, + { name = "boto3-stubs", specifier = ">=1.42.88" }, { name = "celery-types", specifier = ">=0.23.0" }, - { name = "coverage", specifier = "~=7.13.4" }, - { name = "dotenv-linter", specifier = "~=0.7.0" }, - { name = "faker", specifier = "~=40.12.0" }, - { name = "hypothesis", specifier = ">=6.131.15" }, + { name = "coverage", specifier = ">=7.13.4" }, + { name = "dotenv-linter", specifier = ">=0.7.0" }, + { name = "faker", specifier = ">=20.1.0" }, + { name = "hypothesis", specifier = ">=6.151.12" }, { name = "import-linter", specifier = ">=2.3" }, - { name = "lxml-stubs", specifier = "~=0.5.1" }, - { name = "mypy", specifier = "~=1.20.0" }, - { name = "pandas-stubs", specifier = "~=3.0.0" }, - { name = "pyrefly", specifier = ">=0.59.1" }, - { name = "pytest", specifier = "~=9.0.2" }, - { name = "pytest-benchmark", specifier = "~=5.2.3" }, - { name = "pytest-cov", specifier = "~=7.1.0" }, - { name = "pytest-env", specifier = "~=1.6.0" }, - { name = "pytest-mock", specifier = "~=3.15.1" }, + { name = "lxml-stubs", specifier = ">=0.5.1" }, + { name = "mypy", specifier = ">=1.20.1" }, + { name = "pandas-stubs", specifier = ">=3.0.0" }, + { name = "pyrefly", specifier = ">=0.60.0" }, + { name = "pytest", specifier = ">=9.0.3" }, + { name = "pytest-benchmark", specifier = ">=5.2.3" }, + { name = "pytest-cov", specifier = ">=7.1.0" }, + { name = "pytest-env", specifier = ">=1.6.0" }, + { name = "pytest-mock", specifier = ">=3.15.1" }, { name = "pytest-timeout", specifier = ">=2.4.0" }, { name = "pytest-xdist", specifier = ">=3.8.0" }, - { name = "ruff", specifier = "~=0.15.5" }, + { name = "ruff", specifier = ">=0.15.10" }, { name = "scipy-stubs", specifier = ">=1.15.3.0" }, - { name = "sseclient-py", specifier = ">=1.8.0" }, - { name = "testcontainers", specifier = "~=4.14.1" }, - { name = "types-aiofiles", specifier = "~=25.1.0" }, - { name = "types-beautifulsoup4", specifier = "~=4.12.0" }, - { name = "types-cachetools", specifier = "~=6.2.0" }, - { name = "types-cffi", specifier = ">=1.17.0" }, - { name = "types-colorama", specifier = "~=0.4.15" }, - { name = "types-defusedxml", specifier = "~=0.7.0" }, - { name = "types-deprecated", specifier = "~=1.3.1" }, - { name = "types-docutils", specifier = "~=0.22.3" }, - { name = "types-flask-cors", specifier = "~=6.0.0" }, - { name = "types-flask-migrate", specifier = "~=4.1.0" }, - { name = "types-gevent", specifier = "~=25.9.0" }, - { name = "types-greenlet", specifier = "~=3.3.0" }, - { name = "types-html5lib", specifier = "~=1.1.11" }, - { name = "types-jmespath", specifier = ">=1.0.2.20240106" }, - { name = "types-markdown", specifier = "~=3.10.2" }, - { name = "types-oauthlib", specifier = "~=3.3.0" }, - { name = "types-objgraph", specifier = "~=3.6.0" }, - { name = "types-olefile", specifier = "~=0.47.0" }, - { name = "types-openpyxl", specifier = "~=3.1.5" }, - { name = "types-pexpect", specifier = "~=4.9.0" }, - { name = "types-protobuf", specifier = "~=7.34.1" }, - { name = "types-psutil", specifier = "~=7.2.2" }, - { name = "types-psycopg2", specifier = "~=2.9.21" }, - { name = "types-pygments", specifier = "~=2.20.0" }, - { name = "types-pymysql", specifier = "~=1.1.0" }, + { name = "testcontainers", specifier = ">=4.14.2" }, + { name = "types-aiofiles", specifier = ">=25.1.0" }, + { name = "types-beautifulsoup4", specifier = ">=4.12.0" }, + { name = "types-cachetools", specifier = ">=6.2.0" }, + { name = "types-cffi", specifier = ">=2.0.0.20260408" }, + { name = "types-colorama", specifier = ">=0.4.15" }, + { name = "types-defusedxml", specifier = ">=0.7.0" }, + { name = "types-deprecated", specifier = ">=1.3.1" }, + { name = "types-docutils", specifier = ">=0.22.3" }, + { name = "types-flask-cors", specifier = ">=6.0.0" }, + { name = "types-flask-migrate", specifier = ">=4.1.0" }, + { name = "types-gevent", specifier = ">=26.4.0" }, + { name = "types-greenlet", specifier = ">=3.4.0" }, + { name = "types-html5lib", specifier = ">=1.1.11" }, + { name = "types-jmespath", specifier = ">=1.1.0.20260408" }, + { name = "types-markdown", specifier = ">=3.10.2" }, + { name = "types-oauthlib", specifier = ">=3.3.0" }, + { name = "types-objgraph", specifier = ">=3.6.0" }, + { name = "types-olefile", specifier = ">=0.47.0" }, + { name = "types-openpyxl", specifier = ">=3.1.5" }, + { name = "types-pexpect", specifier = ">=4.9.0" }, + { name = "types-protobuf", specifier = ">=7.34.1" }, + { name = "types-psutil", specifier = ">=7.2.2" }, + { name = "types-psycopg2", specifier = ">=2.9.21" }, + { name = "types-pygments", specifier = ">=2.20.0" }, + { name = "types-pymysql", specifier = ">=1.1.0" }, { name = "types-pyopenssl", specifier = ">=24.1.0" }, - { name = "types-python-dateutil", specifier = "~=2.9.0" }, - { name = "types-python-http-client", specifier = ">=3.3.7.20240910" }, - { name = "types-pywin32", specifier = "~=311.0.0" }, - { name = "types-pyyaml", specifier = "~=6.0.12" }, + { name = "types-python-dateutil", specifier = ">=2.9.0" }, + { name = "types-python-http-client", specifier = ">=3.3.7.20260408" }, + { name = "types-pywin32", specifier = ">=311.0.0" }, + { name = "types-pyyaml", specifier = ">=6.0.12" }, { name = "types-redis", specifier = ">=4.6.0.20241004" }, - { name = "types-regex", specifier = "~=2026.4.4" }, - { name = "types-setuptools", specifier = ">=80.9.0" }, - { name = "types-shapely", specifier = "~=2.1.0" }, - { name = "types-simplejson", specifier = ">=3.20.0" }, - { name = "types-six", specifier = ">=1.17.0" }, - { name = "types-tensorflow", specifier = ">=2.18.0" }, - { name = "types-tqdm", specifier = ">=4.67.0" }, + { name = "types-regex", specifier = ">=2026.4.4" }, + { name = "types-setuptools", specifier = ">=82.0.0.20260408" }, + { name = "types-shapely", specifier = ">=2.1.0" }, + { name = "types-simplejson", specifier = ">=3.20.0.20260408" }, + { name = "types-six", specifier = ">=1.17.0.20260408" }, + { name = "types-tensorflow", specifier = ">=2.18.0.20260408" }, + { name = "types-tqdm", specifier = ">=4.67.3.20260408" }, { name = "types-ujson", specifier = ">=5.10.0" }, + { name = "xinference-client", specifier = ">=2.4.0" }, ] storage = [ - { name = "azure-storage-blob", specifier = "==12.28.0" }, - { name = "bce-python-sdk", specifier = "~=0.9.23" }, - { name = "cos-python-sdk-v5", specifier = "==1.9.41" }, - { name = "esdk-obs-python", specifier = "==3.26.2" }, - { name = "google-cloud-storage", specifier = ">=3.0.0" }, - { name = "opendal", specifier = "~=0.46.0" }, - { name = "oss2", specifier = "==2.19.1" }, - { name = "supabase", specifier = "~=2.18.1" }, - { name = "tos", specifier = "~=2.9.0" }, + { name = "azure-storage-blob", specifier = ">=12.28.0" }, + { name = "bce-python-sdk", specifier = ">=0.9.69" }, + { name = "cos-python-sdk-v5", specifier = ">=1.9.41" }, + { name = "esdk-obs-python", specifier = ">=3.22.2" }, + { name = "google-cloud-storage", specifier = ">=3.10.1" }, + { name = "opendal", specifier = ">=0.46.0" }, + { name = "oss2", specifier = ">=2.19.1" }, + { name = "supabase", specifier = ">=2.18.1" }, + { name = "tos", specifier = ">=2.9.0" }, ] tools = [ - { name = "cloudscraper", specifier = "~=1.2.71" }, - { name = "nltk", specifier = "~=3.9.1" }, + { name = "cloudscraper", specifier = ">=1.2.71" }, + { name = "nltk", specifier = ">=3.9.1" }, ] -vdb = [ +vdb-alibabacloud-mysql = [{ name = "dify-vdb-alibabacloud-mysql", editable = "providers/vdb/vdb-alibabacloud-mysql" }] +vdb-all = [ + { name = "dify-vdb-alibabacloud-mysql", editable = "providers/vdb/vdb-alibabacloud-mysql" }, + { name = "dify-vdb-analyticdb", editable = "providers/vdb/vdb-analyticdb" }, + { name = "dify-vdb-baidu", editable = "providers/vdb/vdb-baidu" }, + { name = "dify-vdb-chroma", editable = "providers/vdb/vdb-chroma" }, + { name = "dify-vdb-clickzetta", editable = "providers/vdb/vdb-clickzetta" }, + { name = "dify-vdb-couchbase", editable = "providers/vdb/vdb-couchbase" }, + { name = "dify-vdb-elasticsearch", editable = "providers/vdb/vdb-elasticsearch" }, + { name = "dify-vdb-hologres", editable = "providers/vdb/vdb-hologres" }, + { name = "dify-vdb-huawei-cloud", editable = "providers/vdb/vdb-huawei-cloud" }, + { name = "dify-vdb-iris", editable = "providers/vdb/vdb-iris" }, + { name = "dify-vdb-lindorm", editable = "providers/vdb/vdb-lindorm" }, + { name = "dify-vdb-matrixone", editable = "providers/vdb/vdb-matrixone" }, + { name = "dify-vdb-milvus", editable = "providers/vdb/vdb-milvus" }, + { name = "dify-vdb-myscale", editable = "providers/vdb/vdb-myscale" }, + { name = "dify-vdb-oceanbase", editable = "providers/vdb/vdb-oceanbase" }, + { name = "dify-vdb-opengauss", editable = "providers/vdb/vdb-opengauss" }, + { name = "dify-vdb-opensearch", editable = "providers/vdb/vdb-opensearch" }, + { name = "dify-vdb-oracle", editable = "providers/vdb/vdb-oracle" }, + { name = "dify-vdb-pgvecto-rs", editable = "providers/vdb/vdb-pgvecto-rs" }, + { name = "dify-vdb-pgvector", editable = "providers/vdb/vdb-pgvector" }, + { name = "dify-vdb-qdrant", editable = "providers/vdb/vdb-qdrant" }, + { name = "dify-vdb-relyt", editable = "providers/vdb/vdb-relyt" }, + { name = "dify-vdb-tablestore", editable = "providers/vdb/vdb-tablestore" }, + { name = "dify-vdb-tencent", editable = "providers/vdb/vdb-tencent" }, + { name = "dify-vdb-tidb-on-qdrant", editable = "providers/vdb/vdb-tidb-on-qdrant" }, + { name = "dify-vdb-tidb-vector", editable = "providers/vdb/vdb-tidb-vector" }, + { name = "dify-vdb-upstash", editable = "providers/vdb/vdb-upstash" }, + { name = "dify-vdb-vastbase", editable = "providers/vdb/vdb-vastbase" }, + { name = "dify-vdb-vikingdb", editable = "providers/vdb/vdb-vikingdb" }, + { name = "dify-vdb-weaviate", editable = "providers/vdb/vdb-weaviate" }, +] +vdb-analyticdb = [{ name = "dify-vdb-analyticdb", editable = "providers/vdb/vdb-analyticdb" }] +vdb-baidu = [{ name = "dify-vdb-baidu", editable = "providers/vdb/vdb-baidu" }] +vdb-chroma = [{ name = "dify-vdb-chroma", editable = "providers/vdb/vdb-chroma" }] +vdb-clickzetta = [{ name = "dify-vdb-clickzetta", editable = "providers/vdb/vdb-clickzetta" }] +vdb-couchbase = [{ name = "dify-vdb-couchbase", editable = "providers/vdb/vdb-couchbase" }] +vdb-elasticsearch = [{ name = "dify-vdb-elasticsearch", editable = "providers/vdb/vdb-elasticsearch" }] +vdb-hologres = [{ name = "dify-vdb-hologres", editable = "providers/vdb/vdb-hologres" }] +vdb-huawei-cloud = [{ name = "dify-vdb-huawei-cloud", editable = "providers/vdb/vdb-huawei-cloud" }] +vdb-iris = [{ name = "dify-vdb-iris", editable = "providers/vdb/vdb-iris" }] +vdb-lindorm = [{ name = "dify-vdb-lindorm", editable = "providers/vdb/vdb-lindorm" }] +vdb-matrixone = [{ name = "dify-vdb-matrixone", editable = "providers/vdb/vdb-matrixone" }] +vdb-milvus = [{ name = "dify-vdb-milvus", editable = "providers/vdb/vdb-milvus" }] +vdb-myscale = [{ name = "dify-vdb-myscale", editable = "providers/vdb/vdb-myscale" }] +vdb-oceanbase = [{ name = "dify-vdb-oceanbase", editable = "providers/vdb/vdb-oceanbase" }] +vdb-opengauss = [{ name = "dify-vdb-opengauss", editable = "providers/vdb/vdb-opengauss" }] +vdb-opensearch = [{ name = "dify-vdb-opensearch", editable = "providers/vdb/vdb-opensearch" }] +vdb-oracle = [{ name = "dify-vdb-oracle", editable = "providers/vdb/vdb-oracle" }] +vdb-pgvecto-rs = [{ name = "dify-vdb-pgvecto-rs", editable = "providers/vdb/vdb-pgvecto-rs" }] +vdb-pgvector = [{ name = "dify-vdb-pgvector", editable = "providers/vdb/vdb-pgvector" }] +vdb-qdrant = [{ name = "dify-vdb-qdrant", editable = "providers/vdb/vdb-qdrant" }] +vdb-relyt = [{ name = "dify-vdb-relyt", editable = "providers/vdb/vdb-relyt" }] +vdb-tablestore = [{ name = "dify-vdb-tablestore", editable = "providers/vdb/vdb-tablestore" }] +vdb-tencent = [{ name = "dify-vdb-tencent", editable = "providers/vdb/vdb-tencent" }] +vdb-tidb-on-qdrant = [{ name = "dify-vdb-tidb-on-qdrant", editable = "providers/vdb/vdb-tidb-on-qdrant" }] +vdb-tidb-vector = [{ name = "dify-vdb-tidb-vector", editable = "providers/vdb/vdb-tidb-vector" }] +vdb-upstash = [{ name = "dify-vdb-upstash", editable = "providers/vdb/vdb-upstash" }] +vdb-vastbase = [{ name = "dify-vdb-vastbase", editable = "providers/vdb/vdb-vastbase" }] +vdb-vikingdb = [{ name = "dify-vdb-vikingdb", editable = "providers/vdb/vdb-vikingdb" }] +vdb-weaviate = [{ name = "dify-vdb-weaviate", editable = "providers/vdb/vdb-weaviate" }] +vdb-xinference = [{ name = "xinference-client", specifier = ">=2.4.0" }] + +[[package]] +name = "dify-vdb-alibabacloud-mysql" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-alibabacloud-mysql" } +dependencies = [ + { name = "mysql-connector-python" }, +] + +[package.metadata] +requires-dist = [{ name = "mysql-connector-python", specifier = ">=9.3.0" }] + +[[package]] +name = "dify-vdb-analyticdb" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-analyticdb" } +dependencies = [ + { name = "alibabacloud-gpdb20160503" }, + { name = "alibabacloud-tea-openapi" }, + { name = "clickhouse-connect" }, +] + +[package.metadata] +requires-dist = [ { name = "alibabacloud-gpdb20160503", specifier = "~=5.2.0" }, { name = "alibabacloud-tea-openapi", specifier = "~=0.4.3" }, - { name = "chromadb", specifier = "==0.5.20" }, { name = "clickhouse-connect", specifier = "~=0.15.0" }, - { name = "clickzetta-connector-python", specifier = ">=0.8.102" }, - { name = "couchbase", specifier = "~=4.6.0" }, - { name = "elasticsearch", specifier = "==8.14.0" }, - { name = "holo-search-sdk", specifier = ">=0.4.1" }, - { name = "intersystems-irispython", specifier = ">=5.1.0" }, - { name = "mo-vector", specifier = "~=0.1.13" }, - { name = "mysql-connector-python", specifier = ">=9.3.0" }, - { name = "opensearch-py", specifier = "==3.1.0" }, - { name = "oracledb", specifier = "==3.4.2" }, - { name = "pgvecto-rs", extras = ["sqlalchemy"], specifier = "~=0.2.1" }, - { name = "pgvector", specifier = "==0.4.2" }, - { name = "pymilvus", specifier = "~=2.6.10" }, - { name = "pymochow", specifier = "==2.4.0" }, - { name = "pyobvector", specifier = "~=0.2.17" }, - { name = "qdrant-client", specifier = "==1.9.0" }, - { name = "tablestore", specifier = "==6.4.3" }, - { name = "tcvectordb", specifier = "~=2.1.0" }, - { name = "tidb-vector", specifier = "==0.0.15" }, - { name = "upstash-vector", specifier = "==0.8.0" }, - { name = "volcengine-compat", specifier = "~=1.0.0" }, - { name = "weaviate-client", specifier = "==4.20.4" }, - { name = "xinference-client", specifier = "~=2.4.0" }, ] [[package]] -name = "diskcache" -version = "5.6.3" +name = "dify-vdb-baidu" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-baidu" } +dependencies = [ + { name = "pymochow" }, +] + +[package.metadata] +requires-dist = [{ name = "pymochow", specifier = "==2.4.0" }] + +[[package]] +name = "dify-vdb-chroma" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-chroma" } +dependencies = [ + { name = "chromadb" }, +] + +[package.metadata] +requires-dist = [{ name = "chromadb", specifier = "==0.5.20" }] + +[[package]] +name = "dify-vdb-clickzetta" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-clickzetta" } +dependencies = [ + { name = "clickzetta-connector-python" }, +] + +[package.metadata] +requires-dist = [{ name = "clickzetta-connector-python", specifier = ">=0.8.102" }] + +[[package]] +name = "dify-vdb-couchbase" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-couchbase" } +dependencies = [ + { name = "couchbase" }, +] + +[package.metadata] +requires-dist = [{ name = "couchbase", specifier = "~=4.6.0" }] + +[[package]] +name = "dify-vdb-elasticsearch" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-elasticsearch" } +dependencies = [ + { name = "elasticsearch" }, +] + +[package.metadata] +requires-dist = [{ name = "elasticsearch", specifier = "==8.14.0" }] + +[[package]] +name = "dify-vdb-hologres" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-hologres" } +dependencies = [ + { name = "holo-search-sdk" }, +] + +[package.metadata] +requires-dist = [{ name = "holo-search-sdk", specifier = ">=0.4.2" }] + +[[package]] +name = "dify-vdb-huawei-cloud" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-huawei-cloud" } +dependencies = [ + { name = "elasticsearch" }, +] + +[package.metadata] +requires-dist = [{ name = "elasticsearch", specifier = "==8.14.0" }] + +[[package]] +name = "dify-vdb-iris" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-iris" } +dependencies = [ + { name = "intersystems-irispython" }, +] + +[package.metadata] +requires-dist = [{ name = "intersystems-irispython", specifier = ">=5.1.0" }] + +[[package]] +name = "dify-vdb-lindorm" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-lindorm" } +dependencies = [ + { name = "opensearch-py" }, + { name = "tenacity" }, +] + +[package.metadata] +requires-dist = [ + { name = "opensearch-py", specifier = "==3.1.0" }, + { name = "tenacity", specifier = ">=8.0.0" }, +] + +[[package]] +name = "dify-vdb-matrixone" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-matrixone" } +dependencies = [ + { name = "mo-vector" }, +] + +[package.metadata] +requires-dist = [{ name = "mo-vector", specifier = "~=0.1.13" }] + +[[package]] +name = "dify-vdb-milvus" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-milvus" } +dependencies = [ + { name = "pymilvus" }, +] + +[package.metadata] +requires-dist = [{ name = "pymilvus", specifier = "~=2.6.12" }] + +[[package]] +name = "dify-vdb-myscale" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-myscale" } +dependencies = [ + { name = "clickhouse-connect" }, +] + +[package.metadata] +requires-dist = [{ name = "clickhouse-connect", specifier = "~=0.15.0" }] + +[[package]] +name = "dify-vdb-oceanbase" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-oceanbase" } +dependencies = [ + { name = "mysql-connector-python" }, + { name = "pyobvector" }, +] + +[package.metadata] +requires-dist = [ + { name = "mysql-connector-python", specifier = ">=9.3.0" }, + { name = "pyobvector", specifier = "~=0.2.17" }, +] + +[[package]] +name = "dify-vdb-opengauss" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-opengauss" } + +[[package]] +name = "dify-vdb-opensearch" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-opensearch" } +dependencies = [ + { name = "opensearch-py" }, +] + +[package.metadata] +requires-dist = [{ name = "opensearch-py", specifier = "==3.1.0" }] + +[[package]] +name = "dify-vdb-oracle" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-oracle" } +dependencies = [ + { name = "oracledb" }, +] + +[package.metadata] +requires-dist = [{ name = "oracledb", specifier = "==3.4.2" }] + +[[package]] +name = "dify-vdb-pgvecto-rs" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-pgvecto-rs" } +dependencies = [ + { name = "pgvecto-rs", extra = ["sqlalchemy"] }, +] + +[package.metadata] +requires-dist = [{ name = "pgvecto-rs", extras = ["sqlalchemy"], specifier = "~=0.2.2" }] + +[[package]] +name = "dify-vdb-pgvector" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-pgvector" } +dependencies = [ + { name = "pgvector" }, +] + +[package.metadata] +requires-dist = [{ name = "pgvector", specifier = "==0.4.2" }] + +[[package]] +name = "dify-vdb-qdrant" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-qdrant" } +dependencies = [ + { name = "qdrant-client" }, +] + +[package.metadata] +requires-dist = [{ name = "qdrant-client", specifier = "==1.9.0" }] + +[[package]] +name = "dify-vdb-relyt" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-relyt" } + +[[package]] +name = "dify-vdb-tablestore" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-tablestore" } +dependencies = [ + { name = "tablestore" }, +] + +[package.metadata] +requires-dist = [{ name = "tablestore", specifier = "==6.4.4" }] + +[[package]] +name = "dify-vdb-tencent" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-tencent" } +dependencies = [ + { name = "tcvectordb" }, +] + +[package.metadata] +requires-dist = [{ name = "tcvectordb", specifier = "~=2.1.0" }] + +[[package]] +name = "dify-vdb-tidb-on-qdrant" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-tidb-on-qdrant" } +dependencies = [ + { name = "qdrant-client" }, +] + +[package.metadata] +requires-dist = [{ name = "qdrant-client", specifier = "==1.9.0" }] + +[[package]] +name = "dify-vdb-tidb-vector" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-tidb-vector" } +dependencies = [ + { name = "tidb-vector" }, +] + +[package.metadata] +requires-dist = [{ name = "tidb-vector", specifier = "==0.0.15" }] + +[[package]] +name = "dify-vdb-upstash" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-upstash" } +dependencies = [ + { name = "upstash-vector" }, +] + +[package.metadata] +requires-dist = [{ name = "upstash-vector", specifier = "==0.8.0" }] + +[[package]] +name = "dify-vdb-vastbase" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-vastbase" } +dependencies = [ + { name = "pyobvector" }, +] + +[package.metadata] +requires-dist = [{ name = "pyobvector", specifier = "~=0.2.17" }] + +[[package]] +name = "dify-vdb-vikingdb" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-vikingdb" } +dependencies = [ + { name = "volcengine-compat" }, +] + +[package.metadata] +requires-dist = [{ name = "volcengine-compat", specifier = "~=1.0.0" }] + +[[package]] +name = "dify-vdb-weaviate" +version = "0.0.1" +source = { editable = "providers/vdb/vdb-weaviate" } +dependencies = [ + { name = "weaviate-client" }, +] + +[package.metadata] +requires-dist = [{ name = "weaviate-client", specifier = "==4.20.5" }] + +[[package]] +name = "diskcache-weave" +version = "5.6.3.post1" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/3f/21/1c1ffc1a039ddcc459db43cc108658f32c57d271d7289a2794e401d0fdb6/diskcache-5.6.3.tar.gz", hash = "sha256:2c3a3fa2743d8535d832ec61c2054a1641f41775aa7c556758a109941e33e4fc", size = 67916, upload-time = "2023-08-31T06:12:00.316Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a6/52/634e1f43486489fdaded1a7c9bd3524b7e0ca9bcc43af426afa511c541e2/diskcache_weave-5.6.3.post1.tar.gz", hash = "sha256:1fe7e648d1d85d517c05b296f1692e7c425a71714dc31a4b7a584a8f8f5604f2", size = 68297, upload-time = "2026-03-19T14:57:54.299Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl", hash = "sha256:5e31b2d5fbad117cc363ebaf6b689474db18a1f6438bc82358b024abd4c2ca19", size = 45550, upload-time = "2023-08-31T06:11:58.822Z" }, + { url = "https://files.pythonhosted.org/packages/d9/8d/92887441bc338fb8d0b8ea75eb0392c00e20a85ec0bbe02f273188849568/diskcache_weave-5.6.3.post1-py3-none-any.whl", hash = "sha256:b00e9842b74eeecf314456f9c833a6d4f7792ed12b20297b4d3b9df7859ee66f", size = 45905, upload-time = "2026-03-19T14:57:52.819Z" }, ] [[package]] @@ -1734,18 +2126,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/b0/0d/9feae160378a3553fa9a339b0e9c1a048e147a4127210e286ef18b730f03/durationpy-0.10-py3-none-any.whl", hash = "sha256:3b41e1b601234296b4fb368338fdcd3e13e0b4fb5b67345948f4f2bf9868b286", size = 3922, upload-time = "2025-05-17T13:52:36.463Z" }, ] -[[package]] -name = "ecdsa" -version = "0.19.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "six" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/c0/1f/924e3caae75f471eae4b26bd13b698f6af2c44279f67af317439c2f4c46a/ecdsa-0.19.1.tar.gz", hash = "sha256:478cba7b62555866fcb3bb3fe985e06decbdb68ef55713c4e5ab98c57d508e61", size = 201793, upload-time = "2025-03-13T11:52:43.25Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/cb/a3/460c57f094a4a165c84a1341c373b0a4f5ec6ac244b998d5021aade89b77/ecdsa-0.19.1-py2.py3-none-any.whl", hash = "sha256:30638e27cf77b7e15c4c4cc1973720149e1033827cfd00661ca5c8cc0cdb24c3", size = 150607, upload-time = "2025-03-13T11:52:41.757Z" }, -] - [[package]] name = "elastic-transport" version = "8.17.1" @@ -1800,15 +2180,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/c1/8b/5fe2cc11fee489817272089c4203e679c63b570a5aaeb18d852ae3cbba6a/et_xmlfile-2.0.0-py3-none-any.whl", hash = "sha256:7a91720bc756843502c3b7504c77b8fe44217c85c537d85037f0f536151b2caa", size = 18059, upload-time = "2024-10-25T17:25:39.051Z" }, ] -[[package]] -name = "eval-type-backport" -version = "0.3.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/51/23/079e39571d6dd8d90d7a369ecb55ad766efb6bae4e77389629e14458c280/eval_type_backport-0.3.0.tar.gz", hash = "sha256:1638210401e184ff17f877e9a2fa076b60b5838790f4532a21761cc2be67aea1", size = 9272, upload-time = "2025-11-13T20:56:50.845Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/19/d8/2a1c638d9e0aa7e269269a1a1bf423ddd94267f1a01bbe3ad03432b67dd4/eval_type_backport-0.3.0-py3-none-any.whl", hash = "sha256:975a10a0fe333c8b6260d7fdb637698c9a16c3a9e3b6eb943fee6a6f67a37fe8", size = 6061, upload-time = "2025-11-13T20:56:49.499Z" }, -] - [[package]] name = "events" version = "0.5" @@ -1828,14 +2199,14 @@ wheels = [ [[package]] name = "faker" -version = "40.12.0" +version = "40.13.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "tzdata", marker = "sys_platform == 'win32'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/66/c1/f8224fe97fea2f98d455c22438c1b09b10e14ef2cb95ae4f7cec9aa59659/faker-40.12.0.tar.gz", hash = "sha256:58b5a9054c367bd5fb2e948634105364cc570e78a98a8e5161a74691c45f158f", size = 1962003, upload-time = "2026-03-30T18:00:56.596Z" } +sdist = { url = "https://files.pythonhosted.org/packages/89/95/4822ffe94723553789aef783104f4f18fc20d7c4c68e1bbd633e11d09758/faker-40.13.0.tar.gz", hash = "sha256:a0751c84c3abac17327d7bb4c98e8afe70ebf7821e01dd7d0b15cd8856415525", size = 1962043, upload-time = "2026-04-06T16:44:55.68Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2b/5c/39452a6b6aa76ffa518fa7308e1975b37e9ba77caa6172a69d61e7180221/faker-40.12.0-py3-none-any.whl", hash = "sha256:6238a4058a8b581892e3d78fe5fdfa7568739e1c8283e4ede83f1dde0bfc1a3b", size = 1994601, upload-time = "2026-03-30T18:00:54.804Z" }, + { url = "https://files.pythonhosted.org/packages/da/8a/708103325edff16a0b0e004de0d37db8ba216a32713948c64d71f6d4a4c2/faker-40.13.0-py3-none-any.whl", hash = "sha256:c1298fd0d819b3688fb5fd358c4ba8f56c7c8c740b411fd3dbd8e30bf2c05019", size = 1994597, upload-time = "2026-04-06T16:44:53.698Z" }, ] [[package]] @@ -2086,7 +2457,7 @@ wheels = [ [[package]] name = "gevent" -version = "25.9.1" +version = "26.4.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cffi", marker = "platform_python_implementation == 'CPython' and sys_platform == 'win32'" }, @@ -2094,16 +2465,28 @@ dependencies = [ { name = "zope-event" }, { name = "zope-interface" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9e/48/b3ef2673ffb940f980966694e40d6d32560f3ffa284ecaeb5ea3a90a6d3f/gevent-25.9.1.tar.gz", hash = "sha256:adf9cd552de44a4e6754c51ff2e78d9193b7fa6eab123db9578a210e657235dd", size = 5059025, upload-time = "2025-09-17T16:15:34.528Z" } +sdist = { url = "https://files.pythonhosted.org/packages/20/27/1062fa31333dc3428a1f5f33cd6598b0552165ba679ca3ba116de42c9e8e/gevent-26.4.0.tar.gz", hash = "sha256:288d03addfccf0d1c67268358b6759b04392bf3bc35d26f3d9a45c82899c292d", size = 6242440, upload-time = "2026-04-09T12:08:19.482Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/f7/49/e55930ba5259629eb28ac7ee1abbca971996a9165f902f0249b561602f24/gevent-25.9.1-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:46b188248c84ffdec18a686fcac5dbb32365d76912e14fda350db5dc0bfd4f86", size = 2955991, upload-time = "2025-09-17T14:52:30.568Z" }, - { url = "https://files.pythonhosted.org/packages/aa/88/63dc9e903980e1da1e16541ec5c70f2b224ec0a8e34088cb42794f1c7f52/gevent-25.9.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f2b54ea3ca6f0c763281cd3f96010ac7e98c2e267feb1221b5a26e2ca0b9a692", size = 1808503, upload-time = "2025-09-17T15:41:25.59Z" }, - { url = "https://files.pythonhosted.org/packages/7a/8d/7236c3a8f6ef7e94c22e658397009596fa90f24c7d19da11ad7ab3a9248e/gevent-25.9.1-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:7a834804ac00ed8a92a69d3826342c677be651b1c3cd66cc35df8bc711057aa2", size = 1890001, upload-time = "2025-09-17T15:49:01.227Z" }, - { url = "https://files.pythonhosted.org/packages/4f/63/0d7f38c4a2085ecce26b50492fc6161aa67250d381e26d6a7322c309b00f/gevent-25.9.1-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:323a27192ec4da6b22a9e51c3d9d896ff20bc53fdc9e45e56eaab76d1c39dd74", size = 1855335, upload-time = "2025-09-17T15:49:20.582Z" }, - { url = "https://files.pythonhosted.org/packages/95/18/da5211dfc54c7a57e7432fd9a6ffeae1ce36fe5a313fa782b1c96529ea3d/gevent-25.9.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6ea78b39a2c51d47ff0f130f4c755a9a4bbb2dd9721149420ad4712743911a51", size = 2109046, upload-time = "2025-09-17T15:15:13.817Z" }, - { url = "https://files.pythonhosted.org/packages/a6/5a/7bb5ec8e43a2c6444853c4a9f955f3e72f479d7c24ea86c95fb264a2de65/gevent-25.9.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:dc45cd3e1cc07514a419960af932a62eb8515552ed004e56755e4bf20bad30c5", size = 1827099, upload-time = "2025-09-17T15:52:41.384Z" }, - { url = "https://files.pythonhosted.org/packages/ca/d4/b63a0a60635470d7d986ef19897e893c15326dd69e8fb342c76a4f07fe9e/gevent-25.9.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:34e01e50c71eaf67e92c186ee0196a039d6e4f4b35670396baed4a2d8f1b347f", size = 2172623, upload-time = "2025-09-17T15:24:12.03Z" }, - { url = "https://files.pythonhosted.org/packages/d5/98/caf06d5d22a7c129c1fb2fc1477306902a2c8ddfd399cd26bbbd4caf2141/gevent-25.9.1-cp312-cp312-win_amd64.whl", hash = "sha256:4acd6bcd5feabf22c7c5174bd3b9535ee9f088d2bbce789f740ad8d6554b18f3", size = 1682837, upload-time = "2025-09-17T19:48:47.318Z" }, + { url = "https://files.pythonhosted.org/packages/3d/16/131d3874f50974b355c90a061a12d3fe2292cde0f875a1fa3d8b224f1251/gevent-26.4.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:318a0a73f664113e8d86d0cb0e328e7650e2d7d9c2e045418ab6fb1285831ad3", size = 2928699, upload-time = "2026-04-08T21:25:36.215Z" }, + { url = "https://files.pythonhosted.org/packages/ea/8b/199e59b303adaff7f7365def9ab569c7ecd863363c974548bce3ddc2c89d/gevent-26.4.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:ce7aa033a3f68beb6732d1450a80c1af29e63e0c2d01abad7918cf2507f72fa6", size = 1783821, upload-time = "2026-04-08T22:23:18.73Z" }, + { url = "https://files.pythonhosted.org/packages/e2/2d/b8249c9bd3f386191311c3a9bec4068e192a3f9df2fad92a71a15265ba15/gevent-26.4.0-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:a1b897c952baefd72232efaeb3bdb1ca2fa7ae94cbfe68ac21201b03e843190a", size = 1879424, upload-time = "2026-04-08T22:27:10.561Z" }, + { url = "https://files.pythonhosted.org/packages/ef/89/59216985c1f2c11f2f28bbc88e583588ad44cdde823c530ad4e307be6612/gevent-26.4.0-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:7eef2ea508ce41795e20587a5fc868ae4919543097c81a40fbdfd65bc479f54f", size = 1830575, upload-time = "2026-04-08T22:34:37.093Z" }, + { url = "https://files.pythonhosted.org/packages/ee/a9/2d67d2b0aa0ca9d7bb7fe73c3bbb97b3695cb15c338a6ea7734f58da9add/gevent-26.4.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:f7e12fdd28cc9f39a463d8df5172d698c64a8ed385a21d98e7092fd8308a139a", size = 2113898, upload-time = "2026-04-08T21:54:14.9Z" }, + { url = "https://files.pythonhosted.org/packages/95/a3/457d58d9b3e7da17c8456d841c37a32af8d231a1d71237ad201b19129317/gevent-26.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d48e3ee13d7678c24c22f19d441ad6bc220a79f23662d03ff36fae0d62efdb59", size = 1795890, upload-time = "2026-04-08T22:26:53.252Z" }, + { url = "https://files.pythonhosted.org/packages/a7/cc/cbe78f2626643b20275aaa41cd2cc45ba75056e3665bde36bc190af3cae0/gevent-26.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c58c8e034f94329be4dc0979fba3301005a433dbab42cea0b2c33fd736946872", size = 2139791, upload-time = "2026-04-08T22:00:02.375Z" }, + { url = "https://files.pythonhosted.org/packages/f6/df/7875e08b06a95f4577b71708ec470d029fadf873a66eb813a2861d79dfb5/gevent-26.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:1c737e6ac6ce1398df0e3f41c58d982e397c993cbe73ac05b7edbe39e128c9cb", size = 1680530, upload-time = "2026-04-08T23:15:38.714Z" }, +] + +[[package]] +name = "gevent-websocket" +version = "0.10.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "gevent" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/98/d2/6fa19239ff1ab072af40ebf339acd91fb97f34617c2ee625b8e34bf42393/gevent-websocket-0.10.1.tar.gz", hash = "sha256:7eaef32968290c9121f7c35b973e2cc302ffb076d018c9068d2f5ca8b2d85fb0", size = 18366, upload-time = "2017-03-12T22:46:05.68Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7b/84/2dc373eb6493e00c884cc11e6c059ec97abae2678d42f06bf780570b0193/gevent_websocket-0.10.1-py3-none-any.whl", hash = "sha256:17b67d91282f8f4c973eba0551183fc84f56f1c90c8f6b6b30256f31f66f5242", size = 22987, upload-time = "2017-03-12T22:46:03.611Z" }, ] [[package]] @@ -2160,7 +2543,7 @@ wheels = [ [[package]] name = "google-api-core" -version = "2.30.2" +version = "2.30.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "google-auth" }, @@ -2169,9 +2552,9 @@ dependencies = [ { name = "protobuf" }, { name = "requests" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/1a/2e/83ca41eb400eb228f9279ec14ed66f6475218b59af4c6daec2d5a509fe83/google_api_core-2.30.2.tar.gz", hash = "sha256:9a8113e1a88bdc09a7ff629707f2214d98d61c7f6ceb0ea38c42a095d02dc0f9", size = 176862, upload-time = "2026-04-02T21:23:44.876Z" } +sdist = { url = "https://files.pythonhosted.org/packages/16/ce/502a57fb0ec752026d24df1280b162294b22a0afb98a326084f9a979138b/google_api_core-2.30.3.tar.gz", hash = "sha256:e601a37f148585319b26db36e219df68c5d07b6382cff2d580e83404e44d641b", size = 177001, upload-time = "2026-04-10T00:41:28.035Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/84/e1/ebd5100cbb202e561c0c8b59e485ef3bd63fa9beb610f3fdcaea443f0288/google_api_core-2.30.2-py3-none-any.whl", hash = "sha256:a4c226766d6af2580577db1f1a51bf53cd262f722b49731ce7414c43068a9594", size = 173236, upload-time = "2026-04-02T21:23:06.395Z" }, + { url = "https://files.pythonhosted.org/packages/03/15/e56f351cf6ef1cfea58e6ac226a7318ed1deb2218c4b3cc9bd9e4b786c5a/google_api_core-2.30.3-py3-none-any.whl", hash = "sha256:a85761ba72c444dad5d611c2220633480b2b6be2521eca69cca2dbb3ffd6bfe8", size = 173274, upload-time = "2026-04-09T22:57:16.198Z" }, ] [package.optional-dependencies] @@ -2182,7 +2565,7 @@ grpc = [ [[package]] name = "google-api-python-client" -version = "2.193.0" +version = "2.194.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "google-api-core" }, @@ -2191,22 +2574,22 @@ dependencies = [ { name = "httplib2" }, { name = "uritemplate" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/90/f4/e14b6815d3b1885328dd209676a3a4c704882743ac94e18ef0093894f5c8/google_api_python_client-2.193.0.tar.gz", hash = "sha256:8f88d16e89d11341e0a8b199cafde0fb7e6b44260dffb88d451577cbd1bb5d33", size = 14281006, upload-time = "2026-03-17T18:25:29.415Z" } +sdist = { url = "https://files.pythonhosted.org/packages/60/ab/e83af0eb043e4ccc49571ca7a6a49984e9d00f4e9e6e6f1238d60bc84dce/google_api_python_client-2.194.0.tar.gz", hash = "sha256:db92647bd1a90f40b79c9618461553c2b20b6a43ce7395fa6de07132dc14f023", size = 14443469, upload-time = "2026-04-08T23:07:35.757Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/f0/6d/fe75167797790a56d17799b75e1129bb93f7ff061efc7b36e9731bd4be2b/google_api_python_client-2.193.0-py3-none-any.whl", hash = "sha256:c42aa324b822109901cfecab5dc4fc3915d35a7b376835233c916c70610322db", size = 14856490, upload-time = "2026-03-17T18:25:26.608Z" }, + { url = "https://files.pythonhosted.org/packages/b0/34/5a624e49f179aa5b0cb87b2ce8093960299030ff40423bfbde09360eb908/google_api_python_client-2.194.0-py3-none-any.whl", hash = "sha256:61eaaac3b8fc8fdf11c08af87abc3d1342d1b37319cc1b57405f86ef7697e717", size = 15016514, upload-time = "2026-04-08T23:07:33.093Z" }, ] [[package]] name = "google-auth" -version = "2.49.1" +version = "2.49.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cryptography" }, { name = "pyasn1-modules" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ea/80/6a696a07d3d3b0a92488933532f03dbefa4a24ab80fb231395b9a2a1be77/google_auth-2.49.1.tar.gz", hash = "sha256:16d40da1c3c5a0533f57d268fe72e0ebb0ae1cc3b567024122651c045d879b64", size = 333825, upload-time = "2026-03-12T19:30:58.135Z" } +sdist = { url = "https://files.pythonhosted.org/packages/c6/fc/e925290a1ad95c975c459e2df070fac2b90954e13a0370ac505dff78cb99/google_auth-2.49.2.tar.gz", hash = "sha256:c1ae38500e73065dcae57355adb6278cf8b5c8e391994ae9cbadbcb9631ab409", size = 333958, upload-time = "2026-04-10T00:41:21.888Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/e9/eb/c6c2478d8a8d633460be40e2a8a6f8f429171997a35a96f81d3b680dec83/google_auth-2.49.1-py3-none-any.whl", hash = "sha256:195ebe3dca18eddd1b3db5edc5189b76c13e96f29e73043b923ebcf3f1a860f7", size = 240737, upload-time = "2026-03-12T19:30:53.159Z" }, + { url = "https://files.pythonhosted.org/packages/73/76/d241a5c927433420507215df6cac1b1fa4ac0ba7a794df42a84326c68da8/google_auth-2.49.2-py3-none-any.whl", hash = "sha256:c2720924dfc82dedb962c9f52cabb2ab16714fd0a6a707e40561d217574ed6d5", size = 240638, upload-time = "2026-04-10T00:41:14.501Z" }, ] [package.optional-dependencies] @@ -2229,7 +2612,7 @@ wheels = [ [[package]] name = "google-cloud-aiplatform" -version = "1.145.0" +version = "1.147.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "docstring-parser" }, @@ -2245,9 +2628,9 @@ dependencies = [ { name = "pydantic" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/26/e5/6442d9d2c019456638825d4665b1e87ec4eaf1d182950ba426d0f0210eab/google_cloud_aiplatform-1.145.0.tar.gz", hash = "sha256:7894c4f3d2684bdb60e9a122004c01678e3b585174a27298ae7a3ed1e5eaf3bd", size = 10222904, upload-time = "2026-04-02T14:06:58.322Z" } +sdist = { url = "https://files.pythonhosted.org/packages/23/93/9bfcaaf1ceab12999a881ccf69ebd9b30f467ec5623989c66894e81fc139/google_cloud_aiplatform-1.147.0.tar.gz", hash = "sha256:b2e1b669ba37f02426e03eb13187eebf4cbfeaa0a3bfed37b5578abb375ab689", size = 10235245, upload-time = "2026-04-09T17:14:49.179Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/3d/c6/23e98d3407d5e2416a3dfaecb0a053da899848c50db69e5f2b61a555ce06/google_cloud_aiplatform-1.145.0-py2.py3-none-any.whl", hash = "sha256:4d1c31797a8bd8f3342ed5f186dd30d1f6bca73ddbee2bde452777100d2ddc11", size = 8396640, upload-time = "2026-04-02T14:06:54.125Z" }, + { url = "https://files.pythonhosted.org/packages/d3/d2/1c1c582f6bbed9bbc0daa5acf3a5d98751ca8bc48584548d28569b8ce1a7/google_cloud_aiplatform-1.147.0-py2.py3-none-any.whl", hash = "sha256:29f7ae020718d3c45094f0475464e06a97f81b1572bea150ae6a1b22c5f45997", size = 8408951, upload-time = "2026-04-09T17:14:45.482Z" }, ] [[package]] @@ -2330,7 +2713,7 @@ wheels = [ [[package]] name = "google-genai" -version = "1.65.0" +version = "1.72.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "anyio" }, @@ -2344,9 +2727,9 @@ dependencies = [ { name = "typing-extensions" }, { name = "websockets" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/79/f9/cc1191c2540d6a4e24609a586c4ed45d2db57cfef47931c139ee70e5874a/google_genai-1.65.0.tar.gz", hash = "sha256:d470eb600af802d58a79c7f13342d9ea0d05d965007cae8f76c7adff3d7a4750", size = 497206, upload-time = "2026-02-26T00:20:33.824Z" } +sdist = { url = "https://files.pythonhosted.org/packages/3c/20/2aff5ea3cd7459f85101d119c136d9ca4369fcda3dcf0cfee89b305611a4/google_genai-1.72.0.tar.gz", hash = "sha256:abe7d3aecfafb464b904e3a09c81b626fb425e160e123e71a5125a7021cea7b2", size = 522844, upload-time = "2026-04-09T21:35:46.283Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/68/3c/3fea4e7c91357c71782d7dcaad7a2577d636c90317e003386893c25bc62c/google_genai-1.65.0-py3-none-any.whl", hash = "sha256:68c025205856919bc03edb0155c11b4b833810b7ce17ad4b7a9eeba5158f6c44", size = 724429, upload-time = "2026-02-26T00:20:32.186Z" }, + { url = "https://files.pythonhosted.org/packages/9f/3d/9f70246114cdf56a2615a40428ced08bc844f5a26247fe812b2f0dd4eaca/google_genai-1.72.0-py3-none-any.whl", hash = "sha256:ea861e4c6946e3185c24b40d95503e088fc230a73a71fec0ef78164b369a8489", size = 764230, upload-time = "2026-04-09T21:35:44.587Z" }, ] [[package]] @@ -2394,12 +2777,8 @@ wheels = [ ] [package.optional-dependencies] -aiohttp = [ - { name = "aiohttp" }, -] -requests = [ - { name = "requests" }, - { name = "requests-toolbelt" }, +httpx = [ + { name = "httpx" }, ] [[package]] @@ -2637,16 +3016,16 @@ wheels = [ [[package]] name = "holo-search-sdk" -version = "0.4.1" +version = "0.4.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "numpy" }, { name = "psycopg", extra = ["binary"] }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/0b/b8/70a4999dabbba15e98d201a7399aab76ab96931ad1a27392ba5252cc9165/holo_search_sdk-0.4.1.tar.gz", hash = "sha256:9aea98b6078b9202abb568ed69d798d5e0505d2b4cc3a136a6aa84402bcd2133", size = 56701, upload-time = "2026-01-28T01:44:57.645Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a0/6d/62bc3f27002a6e1fa6aefdc17f9e95bec67eebb5348542637bf01c8caa6a/holo_search_sdk-0.4.2.tar.gz", hash = "sha256:630ade92c82d3d610a6e4f933f530045a6acbab4528512f5dc5d7f67dd743263", size = 57433, upload-time = "2026-03-25T05:59:25.146Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8a/30/3059a979272f90a96f31b167443cc27675e8cc8f970a3ac0cb80bf803c70/holo_search_sdk-0.4.1-py3-none-any.whl", hash = "sha256:ef1059895ea936ff6a087f68dac92bd1ae0320e51ec5b1d4e7bed7a5dd6beb45", size = 32647, upload-time = "2026-01-28T01:44:56.098Z" }, + { url = "https://files.pythonhosted.org/packages/fd/9a/5021e499a1aa4fc1f1b8ca5dcbc9987d2ab7115da4fa9d1e464a6590d142/holo_search_sdk-0.4.2-py3-none-any.whl", hash = "sha256:b0ef8e6ee6a6980526317951ab0967d18dd2973500b7e3f38259f061471ac5da", size = 33488, upload-time = "2026-03-25T05:59:23.216Z" }, ] [[package]] @@ -2786,14 +3165,14 @@ wheels = [ [[package]] name = "hypothesis" -version = "6.151.11" +version = "6.151.12" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "sortedcontainers" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a9/58/41af0d539b3c95644d1e4e353cbd6ac9473e892ea21802546a8886b79078/hypothesis-6.151.11.tar.gz", hash = "sha256:f33dcb68b62c7b07c9ac49664989be898fa8ce57583f0dc080259a197c6c7ff1", size = 463779, upload-time = "2026-04-05T17:35:55.935Z" } +sdist = { url = "https://files.pythonhosted.org/packages/ce/ab/67ca321d1ab96fd3828b12142f1c258e2d4a668a025d06cd50ab3409787f/hypothesis-6.151.12.tar.gz", hash = "sha256:be485f503979af4c3dfa19e3fc2b967d0458e7f8c4e28128d7e215e0a55102e0", size = 463900, upload-time = "2026-04-08T19:40:06.205Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/1d/06/f49393eca84b87b17a67aaebf9f6251190ba1e9fe9f2236504049fc43fee/hypothesis-6.151.11-py3-none-any.whl", hash = "sha256:7ac05173206746cec8312f95164a30a4eb4916815413a278922e63ff1e404648", size = 529572, upload-time = "2026-04-05T17:35:53.438Z" }, + { url = "https://files.pythonhosted.org/packages/0e/5a/6cecf134b631050a1f8605096adbe812483b60790d951470989d39b56860/hypothesis-6.151.12-py3-none-any.whl", hash = "sha256:37d4f3a768365c30571b11dfd7a6857a12173d933010b2c4ab65619f1b5952c5", size = 529656, upload-time = "2026-04-08T19:40:03.126Z" }, ] [[package]] @@ -2961,11 +3340,11 @@ wheels = [ [[package]] name = "json-repair" -version = "0.55.1" +version = "0.59.2" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/c0/de/71d6bb078d167c0d0959776cee6b6bb8d2ad843f512a5222d7151dde4955/json_repair-0.55.1.tar.gz", hash = "sha256:b27aa0f6bf2e5bf58554037468690446ef26f32ca79c8753282adb3df25fb888", size = 39231, upload-time = "2026-01-23T09:37:20.93Z" } +sdist = { url = "https://files.pythonhosted.org/packages/ed/cb/a49f1661737a78098ce33668350590c981a4163055bc9a01e0cc688d896a/json_repair-0.59.2.tar.gz", hash = "sha256:1d8abb2fa94c4035a66ef9892ea3785dace8dcf09c583e6de781cfd31b278b3d", size = 48341, upload-time = "2026-04-11T15:55:41.145Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/56/da/289ba9eb550ae420cfc457926f6c49b87cacf8083ee9927e96921888a665/json_repair-0.55.1-py3-none-any.whl", hash = "sha256:a1bcc151982a12bc3ef9e9528198229587b1074999cfe08921ab6333b0c8e206", size = 29743, upload-time = "2026-01-23T09:37:19.404Z" }, + { url = "https://files.pythonhosted.org/packages/e1/03/7afcecb4242d93b684708b47fb014abdc1922a01b38c0e30f1117ae74a83/json_repair-0.59.2-py3-none-any.whl", hash = "sha256:6ca6238519c24f671bcb05d1f38a0d6a452bb4ca5af82137595c5c2f1a0fb785", size = 46918, upload-time = "2026-04-11T15:55:39.817Z" }, ] [[package]] @@ -3052,7 +3431,7 @@ sdist = { url = "https://files.pythonhosted.org/packages/0e/72/a3add0e4eec4eb9e2 [[package]] name = "langfuse" -version = "4.0.6" +version = "4.2.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "backoff" }, @@ -3064,14 +3443,14 @@ dependencies = [ { name = "pydantic" }, { name = "wrapt" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ab/d0/6d79ed5614f86f27f5df199cf10c6facf6874ff6f91b828ae4dad90aa86d/langfuse-4.0.6.tar.gz", hash = "sha256:83a6f8cc8f1431fa2958c91e2673bc4179f993297e9b1acd1dbf001785e6cf83", size = 274094, upload-time = "2026-04-01T20:04:15.153Z" } +sdist = { url = "https://files.pythonhosted.org/packages/45/9c/b912a00ffae92ff9955cdd9b74fb839be58f631d4329ae2a8a0376f697f2/langfuse-4.2.0.tar.gz", hash = "sha256:d0bd26d5065cf6a59d7d1093b08d8910e2458dc3da7ed8ccec160db114c18342", size = 275582, upload-time = "2026-04-10T11:55:25.21Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/50/b4/088048e37b6d7ec1b52c6a11bc33101454285a22eaab8303dcccfd78344d/langfuse-4.0.6-py3-none-any.whl", hash = "sha256:0562b1dcf83247f9d8349f0f755eaed9a7f952fee67e66580970f0738bf3adbf", size = 472841, upload-time = "2026-04-01T20:04:16.451Z" }, + { url = "https://files.pythonhosted.org/packages/be/0a/b84e3e68a690ccfe6d64953c572772c685fcb0915b7f2ee3a87c22e388ab/langfuse-4.2.0-py3-none-any.whl", hash = "sha256:bfd760bf10fd0228f297f6369436620f76d16b589de46393d65706b27e4e4082", size = 475449, upload-time = "2026-04-10T11:55:23.624Z" }, ] [[package]] name = "langsmith" -version = "0.7.25" +version = "0.7.31" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "httpx" }, @@ -3084,9 +3463,9 @@ dependencies = [ { name = "xxhash" }, { name = "zstandard" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/7e/d7/21ffae5ccdc3c9b8de283e8f8bf48a92039681df0d39f15133d8ff8965bd/langsmith-0.7.25.tar.gz", hash = "sha256:d17da71f156ca69eafd28ac9627c8e0e93170260ec37cd27cedc83205a067598", size = 1145410, upload-time = "2026-04-03T13:11:42.36Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e6/11/696019490992db5c87774dc20515529ef42a01e1d770fb754ed6d9b12fb0/langsmith-0.7.31.tar.gz", hash = "sha256:331ee4f7c26bb5be4022b9859b7d7b122cbf8c9d01d9f530114c1914b0349ffb", size = 1178480, upload-time = "2026-04-14T17:55:41.242Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/29/13/67889d41baf7dbaf13ffd0b334a0f284e107fad1cc8782a1abb1e56e5eeb/langsmith-0.7.25-py3-none-any.whl", hash = "sha256:55ecc24c547f6c79b5a684ff8685c669eec34e52fcac5d2c0af7d613aef5a632", size = 359417, upload-time = "2026-04-03T13:11:40.729Z" }, + { url = "https://files.pythonhosted.org/packages/1d/a1/a013cf458c301cda86a213dd153ce0a01c93f1ab5833f951e6a44c9763ce/langsmith-0.7.31-py3-none-any.whl", hash = "sha256:0291d49203f6e80dda011af1afda61eb0595a4d697adb684590a8805e1d61fb6", size = 373276, upload-time = "2026-04-14T17:55:39.677Z" }, ] [[package]] @@ -3121,7 +3500,7 @@ wheels = [ [[package]] name = "litellm" -version = "1.82.6" +version = "1.83.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "aiohttp" }, @@ -3137,22 +3516,21 @@ dependencies = [ { name = "tiktoken" }, { name = "tokenizers" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/29/75/1c537aa458426a9127a92bc2273787b2f987f4e5044e21f01f2eed5244fd/litellm-1.82.6.tar.gz", hash = "sha256:2aa1c2da21fe940c33613aa447119674a3ad4d2ad5eb064e4d5ce5ee42420136", size = 17414147, upload-time = "2026-03-22T06:36:00.452Z" } +sdist = { url = "https://files.pythonhosted.org/packages/22/92/6ce9737554994ca8e536e5f4f6a87cc7c4774b656c9eb9add071caf7d54b/litellm-1.83.0.tar.gz", hash = "sha256:860bebc76c4bb27b4cf90b4a77acd66dba25aced37e3db98750de8a1766bfb7a", size = 17333062, upload-time = "2026-03-31T05:08:25.331Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/02/6c/5327667e6dbe9e98cbfbd4261c8e91386a52e38f41419575854248bbab6a/litellm-1.82.6-py3-none-any.whl", hash = "sha256:164a3ef3e19f309e3cabc199bef3d2045212712fefdfa25fc7f75884a5b5b205", size = 15591595, upload-time = "2026-03-22T06:35:56.795Z" }, + { url = "https://files.pythonhosted.org/packages/19/2c/a670cc050fcd6f45c6199eb99e259c73aea92edba8d5c2fc1b3686d36217/litellm-1.83.0-py3-none-any.whl", hash = "sha256:88c536d339248f3987571493015784671ba3f193a328e1ea6780dbebaa2094a8", size = 15610306, upload-time = "2026-03-31T05:08:21.987Z" }, ] [[package]] name = "llvmlite" -version = "0.45.1" +version = "0.47.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/99/8d/5baf1cef7f9c084fb35a8afbde88074f0d6a727bc63ef764fe0e7543ba40/llvmlite-0.45.1.tar.gz", hash = "sha256:09430bb9d0bb58fc45a45a57c7eae912850bedc095cd0810a57de109c69e1c32", size = 185600, upload-time = "2025-10-01T17:59:52.046Z" } +sdist = { url = "https://files.pythonhosted.org/packages/01/88/a8952b6d5c21e74cbf158515b779666f692846502623e9e3c39d8e8ba25f/llvmlite-0.47.0.tar.gz", hash = "sha256:62031ce968ec74e95092184d4b0e857e444f8fdff0b8f9213707699570c33ccc", size = 193614, upload-time = "2026-03-31T18:29:53.497Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/e2/7c/82cbd5c656e8991bcc110c69d05913be2229302a92acb96109e166ae31fb/llvmlite-0.45.1-cp312-cp312-macosx_10_15_x86_64.whl", hash = "sha256:28e763aba92fe9c72296911e040231d486447c01d4f90027c8e893d89d49b20e", size = 43043524, upload-time = "2025-10-01T18:03:30.666Z" }, - { url = "https://files.pythonhosted.org/packages/9d/bc/5314005bb2c7ee9f33102c6456c18cc81745d7055155d1218f1624463774/llvmlite-0.45.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1a53f4b74ee9fd30cb3d27d904dadece67a7575198bd80e687ee76474620735f", size = 37253123, upload-time = "2025-10-01T18:04:18.177Z" }, - { url = "https://files.pythonhosted.org/packages/96/76/0f7154952f037cb320b83e1c952ec4a19d5d689cf7d27cb8a26887d7bbc1/llvmlite-0.45.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5b3796b1b1e1c14dcae34285d2f4ea488402fbd2c400ccf7137603ca3800864f", size = 56288211, upload-time = "2025-10-01T18:01:24.079Z" }, - { url = "https://files.pythonhosted.org/packages/00/b1/0b581942be2683ceb6862d558979e87387e14ad65a1e4db0e7dd671fa315/llvmlite-0.45.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:779e2f2ceefef0f4368548685f0b4adde34e5f4b457e90391f570a10b348d433", size = 55140958, upload-time = "2025-10-01T18:02:30.482Z" }, - { url = "https://files.pythonhosted.org/packages/33/94/9ba4ebcf4d541a325fd8098ddc073b663af75cc8b065b6059848f7d4dce7/llvmlite-0.45.1-cp312-cp312-win_amd64.whl", hash = "sha256:9e6c9949baf25d9aa9cd7cf0f6d011b9ca660dd17f5ba2b23bdbdb77cc86b116", size = 38132231, upload-time = "2025-10-01T18:05:03.664Z" }, + { url = "https://files.pythonhosted.org/packages/fa/48/4b7fe0e34c169fa2f12532916133e0b219d2823b540733651b34fdac509a/llvmlite-0.47.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:306a265f408c259067257a732c8e159284334018b4083a9e35f67d19792b164f", size = 37232769, upload-time = "2026-03-31T18:28:43.735Z" }, + { url = "https://files.pythonhosted.org/packages/e6/4b/e3f2cd17822cf772a4a51a0a8080b0032e6d37b2dbe8cfb724eac4e31c52/llvmlite-0.47.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5853bf26160857c0c2573415ff4efe01c4c651e59e2c55c2a088740acfee51cd", size = 56275178, upload-time = "2026-03-31T18:28:48.342Z" }, + { url = "https://files.pythonhosted.org/packages/b6/55/a3b4a543185305a9bdf3d9759d53646ed96e55e7dfd43f53e7a421b8fbae/llvmlite-0.47.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:003bcf7fa579e14db59c1a1e113f93ab8a06b56a4be31c7f08264d1d4072d077", size = 55128632, upload-time = "2026-03-31T18:28:52.901Z" }, + { url = "https://files.pythonhosted.org/packages/2f/f5/d281ae0f79378a5a91f308ea9fdb9f9cc068fddd09629edc0725a5a8fde1/llvmlite-0.47.0-cp312-cp312-win_amd64.whl", hash = "sha256:f3079f25bdc24cd9d27c4b2b5e68f5f60c4fdb7e8ad5ee2b9b006007558f9df7", size = 38138692, upload-time = "2026-03-31T18:28:57.147Z" }, ] [[package]] @@ -3208,14 +3586,14 @@ wheels = [ [[package]] name = "mako" -version = "1.3.10" +version = "1.3.11" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "markupsafe" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9e/38/bd5b78a920a64d708fe6bc8e0a2c075e1389d53bef8413725c63ba041535/mako-1.3.10.tar.gz", hash = "sha256:99579a6f39583fa7e5630a28c3c1f440e4e97a414b80372649c0ce338da2ea28", size = 392474, upload-time = "2025-04-10T12:44:31.16Z" } +sdist = { url = "https://files.pythonhosted.org/packages/59/8a/805404d0c0b9f3d7a326475ca008db57aea9c5c9f2e1e39ed0faa335571c/mako-1.3.11.tar.gz", hash = "sha256:071eb4ab4c5010443152255d77db7faa6ce5916f35226eb02dc34479b6858069", size = 399811, upload-time = "2026-04-14T20:19:51.493Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/87/fb/99f81ac72ae23375f22b7afdb7642aba97c00a713c217124420147681a2f/mako-1.3.10-py3-none-any.whl", hash = "sha256:baef24a52fc4fc514a0887ac600f9f1cff3d82c61d4d700a1fa84d597b88db59", size = 78509, upload-time = "2025-04-10T12:50:53.297Z" }, + { url = "https://files.pythonhosted.org/packages/68/a5/19d7aaa7e433713ffe881df33705925a196afb9532efc8475d26593921a6/mako-1.3.11-py3-none-any.whl", hash = "sha256:e372c6e333cf004aa736a15f425087ec977e1fcbd2966aae7f17c8dc1da27a77", size = 78503, upload-time = "2026-04-14T20:19:53.233Z" }, ] [[package]] @@ -3269,7 +3647,7 @@ wheels = [ [[package]] name = "mlflow-skinny" -version = "3.10.1" +version = "3.11.1" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cachetools" }, @@ -3292,9 +3670,9 @@ dependencies = [ { name = "typing-extensions" }, { name = "uvicorn" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/71/65/5b2c28e74c167ba8a5afe59399ef44291a0f140487f534db1900f09f59f6/mlflow_skinny-3.10.1.tar.gz", hash = "sha256:3d1c5c30245b6e7065b492b09dd47be7528e0a14c4266b782fe58f9bcd1e0be0", size = 2478631, upload-time = "2026-03-05T10:49:01.47Z" } +sdist = { url = "https://files.pythonhosted.org/packages/40/77/fe2027ddad9e52ed1ac360fbc262169e6366f6678632e350cbd0d901bb9b/mlflow_skinny-3.11.1.tar.gz", hash = "sha256:86ce63491349f6713afc8a4ef0bf77a8314d0e79e03753cb150d6c860a0b0475", size = 2642799, upload-time = "2026-04-07T14:26:43.818Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/4b/52/17460157271e70b0d8444d27f8ad730ef7d95fb82fac59dc19f11519b921/mlflow_skinny-3.10.1-py3-none-any.whl", hash = "sha256:df1dd507d8ddadf53bfab2423c76cdcafc235cd1a46921a06d1a6b4dd04b023c", size = 2987098, upload-time = "2026-03-05T10:48:59.566Z" }, + { url = "https://files.pythonhosted.org/packages/d1/a7/e61ec397b34dc3c9e91572f45e41617f429d5c524d38a4e1aa2316ee1b5e/mlflow_skinny-3.11.1-py3-none-any.whl", hash = "sha256:82ffd5f6980320b4ac19f741e7a754faa1d01707e632b002ea68e04fd25a0535", size = 3171551, upload-time = "2026-04-07T14:26:41.762Z" }, ] [[package]] @@ -3415,7 +3793,7 @@ wheels = [ [[package]] name = "mypy" -version = "1.20.0" +version = "1.20.1" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "librt", marker = "platform_python_implementation != 'PyPy'" }, @@ -3423,16 +3801,16 @@ dependencies = [ { name = "pathspec" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f8/5c/b0089fe7fef0a994ae5ee07029ced0526082c6cfaaa4c10d40a10e33b097/mypy-1.20.0.tar.gz", hash = "sha256:eb96c84efcc33f0b5e0e04beacf00129dd963b67226b01c00b9dfc8affb464c3", size = 3815028, upload-time = "2026-03-31T16:55:14.959Z" } +sdist = { url = "https://files.pythonhosted.org/packages/0b/3d/5b373635b3146264eb7a68d09e5ca11c305bbb058dfffbb47c47daf4f632/mypy-1.20.1.tar.gz", hash = "sha256:6fc3f4ecd52de81648fed1945498bf42fa2993ddfad67c9056df36ae5757f804", size = 3815892, upload-time = "2026-04-13T02:46:51.474Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/be/dd/3afa29b58c2e57c79116ed55d700721c3c3b15955e2b6251dd165d377c0e/mypy-1.20.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:002b613ae19f4ac7d18b7e168ffe1cb9013b37c57f7411984abbd3b817b0a214", size = 14509525, upload-time = "2026-03-31T16:55:01.824Z" }, - { url = "https://files.pythonhosted.org/packages/54/eb/227b516ab8cad9f2a13c5e7a98d28cd6aa75e9c83e82776ae6c1c4c046c7/mypy-1.20.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a9336b5e6712f4adaf5afc3203a99a40b379049104349d747eb3e5a3aa23ac2e", size = 13326469, upload-time = "2026-03-31T16:51:41.23Z" }, - { url = "https://files.pythonhosted.org/packages/57/d4/1ddb799860c1b5ac6117ec307b965f65deeb47044395ff01ab793248a591/mypy-1.20.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f13b3e41bce9d257eded794c0f12878af3129d80aacd8a3ee0dee51f3a978651", size = 13705953, upload-time = "2026-03-31T16:48:55.69Z" }, - { url = "https://files.pythonhosted.org/packages/c5/b7/54a720f565a87b893182a2a393370289ae7149e4715859e10e1c05e49154/mypy-1.20.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9804c3ad27f78e54e58b32e7cb532d128b43dbfb9f3f9f06262b821a0f6bd3f5", size = 14710363, upload-time = "2026-03-31T16:53:26.948Z" }, - { url = "https://files.pythonhosted.org/packages/b2/2a/74810274848d061f8a8ea4ac23aaad43bd3d8c1882457999c2e568341c57/mypy-1.20.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:697f102c5c1d526bdd761a69f17c6070f9892eebcb94b1a5963d679288c09e78", size = 14947005, upload-time = "2026-03-31T16:50:17.591Z" }, - { url = "https://files.pythonhosted.org/packages/77/91/21b8ba75f958bcda75690951ce6fa6b7138b03471618959529d74b8544e2/mypy-1.20.0-cp312-cp312-win_amd64.whl", hash = "sha256:0ecd63f75fdd30327e4ad8b5704bd6d91fc6c1b2e029f8ee14705e1207212489", size = 10880616, upload-time = "2026-03-31T16:52:19.986Z" }, - { url = "https://files.pythonhosted.org/packages/8a/15/3d8198ef97c1ca03aea010cce4f1d4f3bc5d9849e8c0140111ca2ead9fdd/mypy-1.20.0-cp312-cp312-win_arm64.whl", hash = "sha256:f194db59657c58593a3c47c6dfd7bad4ef4ac12dbc94d01b3a95521f78177e33", size = 9813091, upload-time = "2026-03-31T16:53:44.385Z" }, - { url = "https://files.pythonhosted.org/packages/21/66/4d734961ce167f0fd8380769b3b7c06dbdd6ff54c2190f3f2ecd22528158/mypy-1.20.0-py3-none-any.whl", hash = "sha256:a6e0641147cbfa7e4e94efdb95c2dab1aff8cfc159ded13e07f308ddccc8c48e", size = 2636365, upload-time = "2026-03-31T16:51:44.911Z" }, + { url = "https://files.pythonhosted.org/packages/69/1b/75a7c825a02781ca10bc2f2f12fba2af5202f6d6005aad8d2d1f264d8d78/mypy-1.20.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:36ee2b9c6599c230fea89bbd79f401f9f9f8e9fcf0c777827789b19b7da90f51", size = 14494077, upload-time = "2026-04-13T02:45:55.085Z" }, + { url = "https://files.pythonhosted.org/packages/b0/54/5e5a569ea5c2b4d48b729fb32aa936eeb4246e4fc3e6f5b3d36a2dfbefb9/mypy-1.20.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fba3fb0968a7b48806b0c90f38d39296f10766885a94c83bd21399de1e14eb28", size = 13319495, upload-time = "2026-04-13T02:45:29.674Z" }, + { url = "https://files.pythonhosted.org/packages/6f/a4/a1945b19f33e91721b59deee3abb484f2fa5922adc33bb166daf5325d76d/mypy-1.20.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ef1415a637cd3627d6304dfbeddbadd21079dafc2a8a753c477ce4fc0c2af54f", size = 13696948, upload-time = "2026-04-13T02:46:15.006Z" }, + { url = "https://files.pythonhosted.org/packages/b2/c6/75e969781c2359b2f9c15b061f28ec6d67c8b61865ceda176e85c8e7f2de/mypy-1.20.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ef3461b1ad5cd446e540016e90b5984657edda39f982f4cc45ca317b628f5a37", size = 14706744, upload-time = "2026-04-13T02:46:00.482Z" }, + { url = "https://files.pythonhosted.org/packages/a8/6e/b221b1de981fc4262fe3e0bf9ec272d292dfe42394a689c2d49765c144c4/mypy-1.20.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:542dd63c9e1339b6092eb25bd515f3a32a1453aee8c9521d2ddb17dacd840237", size = 14949035, upload-time = "2026-04-13T02:45:06.021Z" }, + { url = "https://files.pythonhosted.org/packages/ca/4b/298ba2de0aafc0da3ff2288da06884aae7ba6489bc247c933f87847c41b3/mypy-1.20.1-cp312-cp312-win_amd64.whl", hash = "sha256:1d55c7cd8ca22e31f93af2a01160a9e95465b5878de23dba7e48116052f20a8d", size = 10883216, upload-time = "2026-04-13T02:45:47.232Z" }, + { url = "https://files.pythonhosted.org/packages/c7/f9/5e25b8f0b8cb92f080bfed9c21d3279b2a0b6a601cdca369a039ba84789d/mypy-1.20.1-cp312-cp312-win_arm64.whl", hash = "sha256:f5b84a79070586e0d353ee07b719d9d0a4aa7c8ee90c0ea97747e98cbe193019", size = 9814299, upload-time = "2026-04-13T02:45:21.934Z" }, + { url = "https://files.pythonhosted.org/packages/d8/28/926bd972388e65a39ee98e188ccf67e81beb3aacfd5d6b310051772d974b/mypy-1.20.1-py3-none-any.whl", hash = "sha256:1aae28507f253fe82d883790d1c0a0d35798a810117c88184097fe8881052f06", size = 2636553, upload-time = "2026-04-13T02:46:30.45Z" }, ] [[package]] @@ -3509,66 +3887,49 @@ wheels = [ [[package]] name = "numba" -version = "0.62.1" +version = "0.65.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "llvmlite" }, { name = "numpy" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a3/20/33dbdbfe60e5fd8e3dbfde299d106279a33d9f8308346022316781368591/numba-0.62.1.tar.gz", hash = "sha256:7b774242aa890e34c21200a1fc62e5b5757d5286267e71103257f4e2af0d5161", size = 2749817, upload-time = "2025-09-29T10:46:31.551Z" } +sdist = { url = "https://files.pythonhosted.org/packages/49/61/7299643b9c18d669e04be7c5bcb64d985070d07553274817b45b049e7bfe/numba-0.65.0.tar.gz", hash = "sha256:edad0d9f6682e93624c00125a471ae4df186175d71fd604c983c377cdc03e68b", size = 2764131, upload-time = "2026-04-01T03:52:01.946Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/5e/fa/30fa6873e9f821c0ae755915a3ca444e6ff8d6a7b6860b669a3d33377ac7/numba-0.62.1-cp312-cp312-macosx_10_15_x86_64.whl", hash = "sha256:1b743b32f8fa5fff22e19c2e906db2f0a340782caf024477b97801b918cf0494", size = 2685346, upload-time = "2025-09-29T10:43:43.677Z" }, - { url = "https://files.pythonhosted.org/packages/a9/d5/504ce8dc46e0dba2790c77e6b878ee65b60fe3e7d6d0006483ef6fde5a97/numba-0.62.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:90fa21b0142bcf08ad8e32a97d25d0b84b1e921bc9423f8dda07d3652860eef6", size = 2688139, upload-time = "2025-09-29T10:44:04.894Z" }, - { url = "https://files.pythonhosted.org/packages/50/5f/6a802741176c93f2ebe97ad90751894c7b0c922b52ba99a4395e79492205/numba-0.62.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6ef84d0ac19f1bf80431347b6f4ce3c39b7ec13f48f233a48c01e2ec06ecbc59", size = 3796453, upload-time = "2025-09-29T10:42:52.771Z" }, - { url = "https://files.pythonhosted.org/packages/7e/df/efd21527d25150c4544eccc9d0b7260a5dec4b7e98b5a581990e05a133c0/numba-0.62.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9315cc5e441300e0ca07c828a627d92a6802bcbf27c5487f31ae73783c58da53", size = 3496451, upload-time = "2025-09-29T10:43:19.279Z" }, - { url = "https://files.pythonhosted.org/packages/80/44/79bfdab12a02796bf4f1841630355c82b5a69933b1d50eb15c7fa37dabe8/numba-0.62.1-cp312-cp312-win_amd64.whl", hash = "sha256:44e3aa6228039992f058f5ebfcfd372c83798e9464297bdad8cc79febcf7891e", size = 2745552, upload-time = "2025-09-29T10:44:26.399Z" }, -] - -[[package]] -name = "numexpr" -version = "2.14.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "numpy" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/cb/2f/fdba158c9dbe5caca9c3eca3eaffffb251f2fb8674bf8e2d0aed5f38d319/numexpr-2.14.1.tar.gz", hash = "sha256:4be00b1086c7b7a5c32e31558122b7b80243fe098579b170967da83f3152b48b", size = 119400, upload-time = "2025-10-13T16:17:27.351Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/9d/20/c473fc04a371f5e2f8c5749e04505c13e7a8ede27c09e9f099b2ad6f43d6/numexpr-2.14.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:91ebae0ab18c799b0e6b8c5a8d11e1fa3848eb4011271d99848b297468a39430", size = 162790, upload-time = "2025-10-13T16:16:34.903Z" }, - { url = "https://files.pythonhosted.org/packages/45/93/b6760dd1904c2a498e5f43d1bb436f59383c3ddea3815f1461dfaa259373/numexpr-2.14.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:47041f2f7b9e69498fb311af672ba914a60e6e6d804011caacb17d66f639e659", size = 152196, upload-time = "2025-10-13T16:16:36.593Z" }, - { url = "https://files.pythonhosted.org/packages/72/94/cc921e35593b820521e464cbbeaf8212bbdb07f16dc79fe283168df38195/numexpr-2.14.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d686dfb2c1382d9e6e0ee0b7647f943c1886dba3adbf606c625479f35f1956c1", size = 452468, upload-time = "2025-10-13T16:13:29.531Z" }, - { url = "https://files.pythonhosted.org/packages/d9/43/560e9ba23c02c904b5934496486d061bcb14cd3ebba2e3cf0e2dccb6c22b/numexpr-2.14.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eee6d4fbbbc368e6cdd0772734d6249128d957b3b8ad47a100789009f4de7083", size = 443631, upload-time = "2025-10-13T16:15:02.473Z" }, - { url = "https://files.pythonhosted.org/packages/7b/6c/78f83b6219f61c2c22d71ab6e6c2d4e5d7381334c6c29b77204e59edb039/numexpr-2.14.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3a2839efa25f3c8d4133252ea7342d8f81226c7c4dda81f97a57e090b9d87a48", size = 1417670, upload-time = "2025-10-13T16:13:33.464Z" }, - { url = "https://files.pythonhosted.org/packages/0e/bb/1ccc9dcaf46281568ce769888bf16294c40e98a5158e4b16c241de31d0d3/numexpr-2.14.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:9f9137f1351b310436662b5dc6f4082a245efa8950c3b0d9008028df92fefb9b", size = 1466212, upload-time = "2025-10-13T16:15:12.828Z" }, - { url = "https://files.pythonhosted.org/packages/31/9f/203d82b9e39dadd91d64bca55b3c8ca432e981b822468dcef41a4418626b/numexpr-2.14.1-cp312-cp312-win32.whl", hash = "sha256:36f8d5c1bd1355df93b43d766790f9046cccfc1e32b7c6163f75bcde682cda07", size = 166996, upload-time = "2025-10-13T16:17:10.369Z" }, - { url = "https://files.pythonhosted.org/packages/1f/67/ffe750b5452eb66de788c34e7d21ec6d886abb4d7c43ad1dc88ceb3d998f/numexpr-2.14.1-cp312-cp312-win_amd64.whl", hash = "sha256:fdd886f4b7dbaf167633ee396478f0d0aa58ea2f9e7ccc3c6431019623e8d68f", size = 160187, upload-time = "2025-10-13T16:17:11.974Z" }, + { url = "https://files.pythonhosted.org/packages/6c/2f/8bd31a1ea43c01ac215283d83aa5f8d5acbe7a36c85b82f1757bfe9ccb31/numba-0.65.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:b27ee4847e1bfb17e9604d100417ee7c1d10f15a6711c6213404b3da13a0b2aa", size = 2680705, upload-time = "2026-04-01T03:51:32.597Z" }, + { url = "https://files.pythonhosted.org/packages/73/36/88406bd58600cc696417b8e5dd6a056478da808f3eaf48d18e2421e0c2d9/numba-0.65.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a52d92ffd297c10364bce60cd1fcb88f99284ab5df085f2c6bcd1cb33b529a6f", size = 3801411, upload-time = "2026-04-01T03:51:34.321Z" }, + { url = "https://files.pythonhosted.org/packages/0c/61/ce753a1d7646dd477e16d15e89473703faebb8995d2f71d7ad69a540b565/numba-0.65.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:da8e371e328c06d0010c3d8b44b21858652831b85bcfba78cb22c042e22dbd8e", size = 3501622, upload-time = "2026-04-01T03:51:36.348Z" }, + { url = "https://files.pythonhosted.org/packages/7d/86/db87a5393f1b1fabef53ac3ba4e6b938bb27e40a04ad7cc512098fcae032/numba-0.65.0-cp312-cp312-win_amd64.whl", hash = "sha256:59bb9f2bb9f1238dfd8e927ba50645c18ae769fef4f3d58ea0ea22a2683b91f5", size = 2749979, upload-time = "2026-04-01T03:51:37.88Z" }, ] [[package]] name = "numpy" -version = "1.26.4" +version = "2.4.4" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/65/6e/09db70a523a96d25e115e71cc56a6f9031e7b8cd166c1ac8438307c14058/numpy-1.26.4.tar.gz", hash = "sha256:2a02aba9ed12e4ac4eb3ea9421c420301a0c6460d9830d74a9df87efa4912010", size = 15786129, upload-time = "2024-02-06T00:26:44.495Z" } +sdist = { url = "https://files.pythonhosted.org/packages/d7/9f/b8cef5bffa569759033adda9481211426f12f53299629b410340795c2514/numpy-2.4.4.tar.gz", hash = "sha256:2d390634c5182175533585cc89f3608a4682ccb173cc9bb940b2881c8d6f8fa0", size = 20731587, upload-time = "2026-03-29T13:22:01.298Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/95/12/8f2020a8e8b8383ac0177dc9570aad031a3beb12e38847f7129bacd96228/numpy-1.26.4-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b3ce300f3644fb06443ee2222c2201dd3a89ea6040541412b8fa189341847218", size = 20335901, upload-time = "2024-02-05T23:55:32.801Z" }, - { url = "https://files.pythonhosted.org/packages/75/5b/ca6c8bd14007e5ca171c7c03102d17b4f4e0ceb53957e8c44343a9546dcc/numpy-1.26.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:03a8c78d01d9781b28a6989f6fa1bb2c4f2d51201cf99d3dd875df6fbd96b23b", size = 13685868, upload-time = "2024-02-05T23:55:56.28Z" }, - { url = "https://files.pythonhosted.org/packages/79/f8/97f10e6755e2a7d027ca783f63044d5b1bc1ae7acb12afe6a9b4286eac17/numpy-1.26.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9fad7dcb1aac3c7f0584a5a8133e3a43eeb2fe127f47e3632d43d677c66c102b", size = 13925109, upload-time = "2024-02-05T23:56:20.368Z" }, - { url = "https://files.pythonhosted.org/packages/0f/50/de23fde84e45f5c4fda2488c759b69990fd4512387a8632860f3ac9cd225/numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:675d61ffbfa78604709862923189bad94014bef562cc35cf61d3a07bba02a7ed", size = 17950613, upload-time = "2024-02-05T23:56:56.054Z" }, - { url = "https://files.pythonhosted.org/packages/4c/0c/9c603826b6465e82591e05ca230dfc13376da512b25ccd0894709b054ed0/numpy-1.26.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab47dbe5cc8210f55aa58e4805fe224dac469cde56b9f731a4c098b91917159a", size = 13572172, upload-time = "2024-02-05T23:57:21.56Z" }, - { url = "https://files.pythonhosted.org/packages/76/8c/2ba3902e1a0fc1c74962ea9bb33a534bb05984ad7ff9515bf8d07527cadd/numpy-1.26.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:1dda2e7b4ec9dd512f84935c5f126c8bd8b9f2fc001e9f54af255e8c5f16b0e0", size = 17786643, upload-time = "2024-02-05T23:57:56.585Z" }, - { url = "https://files.pythonhosted.org/packages/28/4a/46d9e65106879492374999e76eb85f87b15328e06bd1550668f79f7b18c6/numpy-1.26.4-cp312-cp312-win32.whl", hash = "sha256:50193e430acfc1346175fcbdaa28ffec49947a06918b7b92130744e81e640110", size = 5677803, upload-time = "2024-02-05T23:58:08.963Z" }, - { url = "https://files.pythonhosted.org/packages/16/2e/86f24451c2d530c88daf997cb8d6ac622c1d40d19f5a031ed68a4b73a374/numpy-1.26.4-cp312-cp312-win_amd64.whl", hash = "sha256:08beddf13648eb95f8d867350f6a018a4be2e5ad54c8d8caed89ebca558b2818", size = 15517754, upload-time = "2024-02-05T23:58:36.364Z" }, + { url = "https://files.pythonhosted.org/packages/28/05/32396bec30fb2263770ee910142f49c1476d08e8ad41abf8403806b520ce/numpy-2.4.4-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:15716cfef24d3a9762e3acdf87e27f58dc823d1348f765bbea6bef8c639bfa1b", size = 16689272, upload-time = "2026-03-29T13:18:49.223Z" }, + { url = "https://files.pythonhosted.org/packages/c5/f3/a983d28637bfcd763a9c7aafdb6d5c0ebf3d487d1e1459ffdb57e2f01117/numpy-2.4.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:23cbfd4c17357c81021f21540da84ee282b9c8fba38a03b7b9d09ba6b951421e", size = 14699573, upload-time = "2026-03-29T13:18:52.629Z" }, + { url = "https://files.pythonhosted.org/packages/9b/fd/e5ecca1e78c05106d98028114f5c00d3eddb41207686b2b7de3e477b0e22/numpy-2.4.4-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:8b3b60bb7cba2c8c81837661c488637eee696f59a877788a396d33150c35d842", size = 5204782, upload-time = "2026-03-29T13:18:55.579Z" }, + { url = "https://files.pythonhosted.org/packages/de/2f/702a4594413c1a8632092beae8aba00f1d67947389369b3777aed783fdca/numpy-2.4.4-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:e4a010c27ff6f210ff4c6ef34394cd61470d01014439b192ec22552ee867f2a8", size = 6552038, upload-time = "2026-03-29T13:18:57.769Z" }, + { url = "https://files.pythonhosted.org/packages/7f/37/eed308a8f56cba4d1fdf467a4fc67ef4ff4bf1c888f5fc980481890104b1/numpy-2.4.4-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f9e75681b59ddaa5e659898085ae0eaea229d054f2ac0c7e563a62205a700121", size = 15670666, upload-time = "2026-03-29T13:19:00.341Z" }, + { url = "https://files.pythonhosted.org/packages/0a/0d/0e3ecece05b7a7e87ab9fb587855548da437a061326fff64a223b6dcb78a/numpy-2.4.4-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:81f4a14bee47aec54f883e0cad2d73986640c1590eb9bfaaba7ad17394481e6e", size = 16645480, upload-time = "2026-03-29T13:19:03.63Z" }, + { url = "https://files.pythonhosted.org/packages/34/49/f2312c154b82a286758ee2f1743336d50651f8b5195db18cdb63675ff649/numpy-2.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:62d6b0f03b694173f9fcb1fb317f7222fd0b0b103e784c6549f5e53a27718c44", size = 17020036, upload-time = "2026-03-29T13:19:07.428Z" }, + { url = "https://files.pythonhosted.org/packages/7b/e9/736d17bd77f1b0ec4f9901aaec129c00d59f5d84d5e79bba540ef12c2330/numpy-2.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fbc356aae7adf9e6336d336b9c8111d390a05df88f1805573ebb0807bd06fd1d", size = 18368643, upload-time = "2026-03-29T13:19:10.775Z" }, + { url = "https://files.pythonhosted.org/packages/63/f6/d417977c5f519b17c8a5c3bc9e8304b0908b0e21136fe43bf628a1343914/numpy-2.4.4-cp312-cp312-win32.whl", hash = "sha256:0d35aea54ad1d420c812bfa0385c71cd7cc5bcf7c65fed95fc2cd02fe8c79827", size = 5961117, upload-time = "2026-03-29T13:19:13.464Z" }, + { url = "https://files.pythonhosted.org/packages/2d/5b/e1deebf88ff431b01b7406ca3583ab2bbb90972bbe1c568732e49c844f7e/numpy-2.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:b5f0362dc928a6ecd9db58868fca5e48485205e3855957bdedea308f8672ea4a", size = 12320584, upload-time = "2026-03-29T13:19:16.155Z" }, + { url = "https://files.pythonhosted.org/packages/58/89/e4e856ac82a68c3ed64486a544977d0e7bdd18b8da75b78a577ca31c4395/numpy-2.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:846300f379b5b12cc769334464656bc882e0735d27d9726568bc932fdc49d5ec", size = 10221450, upload-time = "2026-03-29T13:19:18.994Z" }, ] [[package]] name = "numpy-typing-compat" -version = "20250818.1.25" +version = "20251206.2.4" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "numpy" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ff/a7/780dc00f4fed2f2b653f76a196b3a6807c7c667f30ae95a7fd082c1081d8/numpy_typing_compat-20250818.1.25.tar.gz", hash = "sha256:8ff461725af0b436e9b0445d07712f1e6e3a97540a3542810f65f936dcc587a5", size = 5027, upload-time = "2025-08-18T23:46:39.062Z" } +sdist = { url = "https://files.pythonhosted.org/packages/42/5f/29fd5f29b0a5d96e2def96ecba3112fc330ecd16e8c97c2b332563c5e201/numpy_typing_compat-20251206.2.4.tar.gz", hash = "sha256:59882d23aaff054a2536da80564012cdce33487657be4d79c5925bb8705fcabc", size = 5011, upload-time = "2025-12-06T20:02:04.942Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/1e/71/30e8d317b6896acbc347d3089764b6209ba299095550773e14d27dcf035f/numpy_typing_compat-20250818.1.25-py3-none-any.whl", hash = "sha256:4f91427369583074b236c804dd27559134f08ec4243485034c8e7d258cbd9cd3", size = 6355, upload-time = "2025-08-18T23:46:30.927Z" }, + { url = "https://files.pythonhosted.org/packages/63/7c/5c2892e6bc0628a2ccf4e938e1e2db22794657ccb374672d66e20d73839e/numpy_typing_compat-20251206.2.4-py3-none-any.whl", hash = "sha256:a82e723bd20efaa4cf2886709d4264c144f1f2b609bda83d1545113b7e47a5b5", size = 6300, upload-time = "2025-12-06T20:01:57.578Z" }, ] [[package]] @@ -3720,59 +4081,59 @@ wheels = [ [[package]] name = "opentelemetry-api" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "importlib-metadata" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/2c/1d/4049a9e8698361cc1a1aa03a6c59e4fa4c71e0c0f94a30f988a6876a2ae6/opentelemetry_api-1.40.0.tar.gz", hash = "sha256:159be641c0b04d11e9ecd576906462773eb97ae1b657730f0ecf64d32071569f", size = 70851, upload-time = "2026-03-04T14:17:21.555Z" } +sdist = { url = "https://files.pythonhosted.org/packages/47/8e/3778a7e87801d994869a9396b9fc2a289e5f9be91ff54a27d41eace494b0/opentelemetry_api-1.41.0.tar.gz", hash = "sha256:9421d911326ec12dee8bc933f7839090cad7a3f13fcfb0f9e82f8174dc003c09", size = 71416, upload-time = "2026-04-09T14:38:34.544Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/5f/bf/93795954016c522008da367da292adceed71cca6ee1717e1d64c83089099/opentelemetry_api-1.40.0-py3-none-any.whl", hash = "sha256:82dd69331ae74b06f6a874704be0cfaa49a1650e1537d4a813b86ecef7d0ecf9", size = 68676, upload-time = "2026-03-04T14:17:01.24Z" }, + { url = "https://files.pythonhosted.org/packages/58/ee/99ab786653b3bda9c37ade7e24a7b607a1b1f696063172768417539d876d/opentelemetry_api-1.41.0-py3-none-any.whl", hash = "sha256:0e77c806e6a89c9e4f8d372034622f3e1418a11bdbe1c80a50b3d3397ad0fa4f", size = 69007, upload-time = "2026-04-09T14:38:11.833Z" }, ] [[package]] name = "opentelemetry-distro" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "opentelemetry-instrumentation" }, { name = "opentelemetry-sdk" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f5/00/1f8acc51326956a596fefaf67751380001af36029132a7a07d4debce3c06/opentelemetry_distro-0.61b0.tar.gz", hash = "sha256:975b845f50181ad53753becf4fd4b123b54fa04df5a9d78812264436d6518981", size = 2590, upload-time = "2026-03-04T14:20:12.453Z" } +sdist = { url = "https://files.pythonhosted.org/packages/72/c6/52b0dbcc8fbdecf179047921940516cbb8aaf05f6b737faa526ad76fec51/opentelemetry_distro-0.62b0.tar.gz", hash = "sha256:aa0308fbe50ad8f17d4446982dbf26870e20b8031ba38d8e1224ecf7aedd3184", size = 2611, upload-time = "2026-04-09T14:40:20.404Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/56/2c/efcc995cd7484e6e55b1d26bd7fa6c55ca96bd415ff94310b52c19f330b0/opentelemetry_distro-0.61b0-py3-none-any.whl", hash = "sha256:f21d1ac0627549795d75e332006dd068877f00e461b1b2e8fe4568d6eb7b9590", size = 3349, upload-time = "2026-03-04T14:18:57.788Z" }, + { url = "https://files.pythonhosted.org/packages/b3/7e/5858bba1c7ed880c7b0fe7d9a1ea40ab8affd18c9ebc1e16c2d69c501da1/opentelemetry_distro-0.62b0-py3-none-any.whl", hash = "sha256:23e9065a35cef12868ad5efb18ce9c88a9103800256b318dec4c9c850c6c78c1", size = 3348, upload-time = "2026-04-09T14:39:17.406Z" }, ] [[package]] name = "opentelemetry-exporter-otlp" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-exporter-otlp-proto-grpc" }, { name = "opentelemetry-exporter-otlp-proto-http" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/d0/37/b6708e0eff5c5fb9aba2e0ea09f7f3bcbfd12a592d2a780241b5f6014df7/opentelemetry_exporter_otlp-1.40.0.tar.gz", hash = "sha256:7caa0870b95e2fcb59d64e16e2b639ecffb07771b6cd0000b5d12e5e4fef765a", size = 6152, upload-time = "2026-03-04T14:17:23.235Z" } +sdist = { url = "https://files.pythonhosted.org/packages/65/b7/845565a2ab5d22c1486bc7729a06b05cd0964c61539d766e1f107c9eea0c/opentelemetry_exporter_otlp-1.41.0.tar.gz", hash = "sha256:97ff847321f8d4c919032a67d20d3137fb7b34eac0c47f13f71112858927fc5b", size = 6152, upload-time = "2026-04-09T14:38:35.895Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2d/fc/aea77c28d9f3ffef2fdafdc3f4a235aee4091d262ddabd25882f47ce5c5f/opentelemetry_exporter_otlp-1.40.0-py3-none-any.whl", hash = "sha256:48c87e539ec9afb30dc443775a1334cc5487de2f72a770a4c00b1610bf6c697d", size = 7023, upload-time = "2026-03-04T14:17:03.612Z" }, + { url = "https://files.pythonhosted.org/packages/e0/f2/f1076fff152858773f22cda146713f9ae3661795af6bacd411a76f2151ac/opentelemetry_exporter_otlp-1.41.0-py3-none-any.whl", hash = "sha256:443b6a45c990ae4c55e147f97049a86c5f5b704f3d78b48b44a073a886ec4d6e", size = 7022, upload-time = "2026-04-09T14:38:13.934Z" }, ] [[package]] name = "opentelemetry-exporter-otlp-proto-common" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-proto" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/51/bc/1559d46557fe6eca0b46c88d4c2676285f1f3be2e8d06bb5d15fbffc814a/opentelemetry_exporter_otlp_proto_common-1.40.0.tar.gz", hash = "sha256:1cbee86a4064790b362a86601ee7934f368b81cd4cc2f2e163902a6e7818a0fa", size = 20416, upload-time = "2026-03-04T14:17:23.801Z" } +sdist = { url = "https://files.pythonhosted.org/packages/8c/28/e8eca94966fe9a1465f6094dc5ddc5398473682180279c94020bc23b4906/opentelemetry_exporter_otlp_proto_common-1.41.0.tar.gz", hash = "sha256:966bbce537e9edb166154779a7c4f8ab6b8654a03a28024aeaf1a3eacb07d6ee", size = 20411, upload-time = "2026-04-09T14:38:36.572Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8b/ca/8f122055c97a932311a3f640273f084e738008933503d0c2563cd5d591fc/opentelemetry_exporter_otlp_proto_common-1.40.0-py3-none-any.whl", hash = "sha256:7081ff453835a82417bf38dccf122c827c3cbc94f2079b03bba02a3165f25149", size = 18369, upload-time = "2026-03-04T14:17:04.796Z" }, + { url = "https://files.pythonhosted.org/packages/26/c4/78b9bf2d9c1d5e494f44932988d9d91c51a66b9a7b48adf99b62f7c65318/opentelemetry_exporter_otlp_proto_common-1.41.0-py3-none-any.whl", hash = "sha256:7a99177bf61f85f4f9ed2072f54d676364719c066f6d11f515acc6c745c7acf0", size = 18366, upload-time = "2026-04-09T14:38:15.135Z" }, ] [[package]] name = "opentelemetry-exporter-otlp-proto-grpc" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "googleapis-common-protos" }, @@ -3783,14 +4144,14 @@ dependencies = [ { name = "opentelemetry-sdk" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/8f/7f/b9e60435cfcc7590fa87436edad6822240dddbc184643a2a005301cc31f4/opentelemetry_exporter_otlp_proto_grpc-1.40.0.tar.gz", hash = "sha256:bd4015183e40b635b3dab8da528b27161ba83bf4ef545776b196f0fb4ec47740", size = 25759, upload-time = "2026-03-04T14:17:24.4Z" } +sdist = { url = "https://files.pythonhosted.org/packages/42/46/d75a3f8c91915f2e58f61d0a2e4ada63891e7c7a37a20ff7949ba184a6b2/opentelemetry_exporter_otlp_proto_grpc-1.41.0.tar.gz", hash = "sha256:f704201251c6f65772b11bddea1c948000554459101bdbb0116e0a01b70592f6", size = 25754, upload-time = "2026-04-09T14:38:37.423Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/96/6f/7ee0980afcbdcd2d40362da16f7f9796bd083bf7f0b8e038abfbc0300f5d/opentelemetry_exporter_otlp_proto_grpc-1.40.0-py3-none-any.whl", hash = "sha256:2aa0ca53483fe0cf6405087a7491472b70335bc5c7944378a0a8e72e86995c52", size = 20304, upload-time = "2026-03-04T14:17:05.942Z" }, + { url = "https://files.pythonhosted.org/packages/81/f6/b09e2e0c9f0b5750cebc6eaf31527b910821453cef40a5a0fe93550422b2/opentelemetry_exporter_otlp_proto_grpc-1.41.0-py3-none-any.whl", hash = "sha256:3a1a86bd24806ccf136ec9737dbfa4c09b069f9130ff66b0acb014f9c5255fd1", size = 20299, upload-time = "2026-04-09T14:38:17.01Z" }, ] [[package]] name = "opentelemetry-exporter-otlp-proto-http" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "googleapis-common-protos" }, @@ -3801,14 +4162,14 @@ dependencies = [ { name = "requests" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/2e/fa/73d50e2c15c56be4d000c98e24221d494674b0cc95524e2a8cb3856d95a4/opentelemetry_exporter_otlp_proto_http-1.40.0.tar.gz", hash = "sha256:db48f5e0f33217588bbc00274a31517ba830da576e59503507c839b38fa0869c", size = 17772, upload-time = "2026-03-04T14:17:25.324Z" } +sdist = { url = "https://files.pythonhosted.org/packages/19/63/d9f43cd75f3fabb7e01148c89cfa9491fc18f6580a6764c554ff7c953c46/opentelemetry_exporter_otlp_proto_http-1.41.0.tar.gz", hash = "sha256:dcd6e0686f56277db4eecbadd5262124e8f2cc739cadbc3fae3d08a12c976cf5", size = 24139, upload-time = "2026-04-09T14:38:38.128Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a0/3a/8865d6754e61c9fb170cdd530a124a53769ee5f740236064816eb0ca7301/opentelemetry_exporter_otlp_proto_http-1.40.0-py3-none-any.whl", hash = "sha256:a8d1dab28f504c5d96577d6509f80a8150e44e8f45f82cdbe0e34c99ab040069", size = 19960, upload-time = "2026-03-04T14:17:07.153Z" }, + { url = "https://files.pythonhosted.org/packages/64/b5/a214cd907eedc17699d1c2d602288ae17cb775526df04db3a3b3585329d2/opentelemetry_exporter_otlp_proto_http-1.41.0-py3-none-any.whl", hash = "sha256:a9c4ee69cce9c3f4d7ee736ad1b44e3c9654002c0816900abbafd9f3cf289751", size = 22673, upload-time = "2026-04-09T14:38:18.349Z" }, ] [[package]] name = "opentelemetry-instrumentation" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3816,14 +4177,14 @@ dependencies = [ { name = "packaging" }, { name = "wrapt" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/da/37/6bf8e66bfcee5d3c6515b79cb2ee9ad05fe573c20f7ceb288d0e7eeec28c/opentelemetry_instrumentation-0.61b0.tar.gz", hash = "sha256:cb21b48db738c9de196eba6b805b4ff9de3b7f187e4bbf9a466fa170514f1fc7", size = 32606, upload-time = "2026-03-04T14:20:16.825Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f9/fd/b8e90bb340957f059084376f94cff336b0e871a42feba7d3f7342365e987/opentelemetry_instrumentation-0.62b0.tar.gz", hash = "sha256:aa1b0b9ab2e1722c2a8a5384fb016fc28d30bba51826676c8036074790d2861e", size = 34042, upload-time = "2026-04-09T14:40:22.843Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/d8/3e/f6f10f178b6316de67f0dfdbbb699a24fbe8917cf1743c1595fb9dcdd461/opentelemetry_instrumentation-0.61b0-py3-none-any.whl", hash = "sha256:92a93a280e69788e8f88391247cc530fd81f16f2b011979d4d6398f805cfbc63", size = 33448, upload-time = "2026-03-04T14:19:02.447Z" }, + { url = "https://files.pythonhosted.org/packages/00/b6/3356d2e335e3c449c5183e9b023f30f04f1b7073a6583c68745ea2e704b1/opentelemetry_instrumentation-0.62b0-py3-none-any.whl", hash = "sha256:30d4e76486eae64fb095264a70c2c809c4bed17b73373e53091470661f7d477c", size = 34158, upload-time = "2026-04-09T14:39:21.428Z" }, ] [[package]] name = "opentelemetry-instrumentation-asgi" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "asgiref" }, @@ -3832,28 +4193,28 @@ dependencies = [ { name = "opentelemetry-semantic-conventions" }, { name = "opentelemetry-util-http" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/00/3e/143cf5c034e58037307e6a24f06e0dd64b2c49ae60a965fc580027581931/opentelemetry_instrumentation_asgi-0.61b0.tar.gz", hash = "sha256:9d08e127244361dc33976d39dd4ca8f128b5aa5a7ae425208400a80a095019b5", size = 26691, upload-time = "2026-03-04T14:20:21.038Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f1/38/999bf777774878971c2716de4b7a03cd57a7decb4af25090e703b79fa0e5/opentelemetry_instrumentation_asgi-0.62b0.tar.gz", hash = "sha256:93cde8c62e5918a3c1ff9ba020518127300e5e0816b7e8b14baf46a26ba619fc", size = 26779, upload-time = "2026-04-09T14:40:26.566Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/19/78/154470cf9d741a7487fbb5067357b87386475bbb77948a6707cae982e158/opentelemetry_instrumentation_asgi-0.61b0-py3-none-any.whl", hash = "sha256:e4b3ce6b66074e525e717efff20745434e5efd5d9df6557710856fba356da7a4", size = 16980, upload-time = "2026-03-04T14:19:10.894Z" }, + { url = "https://files.pythonhosted.org/packages/25/cf/29df82f5870178143bdb5c9a7be044b9f78c71e1c5dcf995242e86d80158/opentelemetry_instrumentation_asgi-0.62b0-py3-none-any.whl", hash = "sha256:89b62a6f996b260b162f515c25e6d78e39286e4cbe2f935899e51b32f31027e2", size = 17011, upload-time = "2026-04-09T14:39:27.305Z" }, ] [[package]] name = "opentelemetry-instrumentation-celery" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "opentelemetry-instrumentation" }, { name = "opentelemetry-semantic-conventions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/8d/43/e79108a804d16b1dc8ff28edd0e94ac393cf6359a5adcd7cdd2ec4be85f4/opentelemetry_instrumentation_celery-0.61b0.tar.gz", hash = "sha256:0e352a567dc89ed8bc083fc635035ce3c5b96bbbd92831ffd676e93b87f8e94f", size = 14780, upload-time = "2026-03-04T14:20:27.776Z" } +sdist = { url = "https://files.pythonhosted.org/packages/01/b4/20a3c8c669dc45aa3703c0370041d67e8be613f1829523cdaf634a5f9626/opentelemetry_instrumentation_celery-0.62b0.tar.gz", hash = "sha256:55e8fa48e5b886bcca448fa32e28a6cc2165157745e8328de479a826d3903095", size = 14808, upload-time = "2026-04-09T14:40:31.603Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a2/ed/c05f3c84b455654eb6c047474ffde61ed92efc24030f64213c98bca9d44b/opentelemetry_instrumentation_celery-0.61b0-py3-none-any.whl", hash = "sha256:01235733ff0cdf571cb03b270645abb14b9c8d830313dc5842097ec90146320b", size = 13856, upload-time = "2026-03-04T14:19:20.98Z" }, + { url = "https://files.pythonhosted.org/packages/f6/60/cf951e6bd6ec62ec55bd2384e0ba9841ea38f2d128c773d85dc60da97172/opentelemetry_instrumentation_celery-0.62b0-py3-none-any.whl", hash = "sha256:cadfd3e65287a36099dce5ba7e05d98e4c5f9479a455241e01d140ecc5c10935", size = 13864, upload-time = "2026-04-09T14:39:35.009Z" }, ] [[package]] name = "opentelemetry-instrumentation-fastapi" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3862,14 +4223,14 @@ dependencies = [ { name = "opentelemetry-semantic-conventions" }, { name = "opentelemetry-util-http" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/37/35/aa727bb6e6ef930dcdc96a617b83748fece57b43c47d83ba8d83fbeca657/opentelemetry_instrumentation_fastapi-0.61b0.tar.gz", hash = "sha256:3a24f35b07c557ae1bbc483bf8412221f25d79a405f8b047de8b670722e2fa9f", size = 24800, upload-time = "2026-03-04T14:20:32.759Z" } +sdist = { url = "https://files.pythonhosted.org/packages/37/09/92740c6d114d1bef392557a03ae6de64065c83c1b331dae9b57fe718497c/opentelemetry_instrumentation_fastapi-0.62b0.tar.gz", hash = "sha256:e4748e4e575077e08beaf2c5d2f369da63dd90882d89d73c4192a97356637dec", size = 25056, upload-time = "2026-04-09T14:40:36.438Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/91/05/acfeb2cccd434242a0a7d0ea29afaf077e04b42b35b485d89aee4e0d9340/opentelemetry_instrumentation_fastapi-0.61b0-py3-none-any.whl", hash = "sha256:a1a844d846540d687d377516b2ff698b51d87c781b59f47c214359c4a241047c", size = 13485, upload-time = "2026-03-04T14:19:30.351Z" }, + { url = "https://files.pythonhosted.org/packages/64/bb/186ffe0fde0ad33ceb50e1d3596cc849b732d3b825592a6a507a40c8c49b/opentelemetry_instrumentation_fastapi-0.62b0-py3-none-any.whl", hash = "sha256:06d3272ad15f9daea5a0a27c32831aff376110a4b0394197120256ef6d610e6e", size = 13482, upload-time = "2026-04-09T14:39:43.446Z" }, ] [[package]] name = "opentelemetry-instrumentation-flask" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3879,14 +4240,14 @@ dependencies = [ { name = "opentelemetry-util-http" }, { name = "packaging" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/d9/33/d6852d8f2c3eef86f2f8c858d6f5315983c7063e07e595519e96d4c31c06/opentelemetry_instrumentation_flask-0.61b0.tar.gz", hash = "sha256:e9faf58dfd9860a1868442d180142645abdafc1a652dd73d469a5efd106a7d49", size = 24071, upload-time = "2026-03-04T14:20:33.437Z" } +sdist = { url = "https://files.pythonhosted.org/packages/8e/86/522294f6a80d59560d8f722da59513d2ed2d53c6178fa109789dacc5dd50/opentelemetry_instrumentation_flask-0.62b0.tar.gz", hash = "sha256:330e903c0e92b06aae32f9eb7b8a923599d7a29440f50841a59dbba34ec6dd9f", size = 24100, upload-time = "2026-04-09T14:40:37.111Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/3e/41/619f3530324a58491f2d20f216a10dd7393629b29db4610dda642a27f4ed/opentelemetry_instrumentation_flask-0.61b0-py3-none-any.whl", hash = "sha256:e8ce474d7ce543bfbbb3e93f8a6f8263348af9d7b45502f387420cf3afa71253", size = 15996, upload-time = "2026-03-04T14:19:31.304Z" }, + { url = "https://files.pythonhosted.org/packages/bc/c8/9f3bb38281bcb50c93c3d2358b303645f6917bf972c167484c09f9a97ff1/opentelemetry_instrumentation_flask-0.62b0-py3-none-any.whl", hash = "sha256:8c1f8986ec3887d08899d2eb654625252c929105174911b3b50dcf12b1001807", size = 16006, upload-time = "2026-04-09T14:39:44.401Z" }, ] [[package]] name = "opentelemetry-instrumentation-httpx" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3895,14 +4256,14 @@ dependencies = [ { name = "opentelemetry-util-http" }, { name = "wrapt" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/cd/2a/e2becd55e33c29d1d9ef76e2579040ed1951cb33bacba259f6aff2fdd2a6/opentelemetry_instrumentation_httpx-0.61b0.tar.gz", hash = "sha256:6569ec097946c5551c2a4252f74c98666addd1bf047c1dde6b4ef426719ff8dd", size = 24104, upload-time = "2026-03-04T14:20:34.752Z" } +sdist = { url = "https://files.pythonhosted.org/packages/77/a7/63e2c6325c8e99cd9b8e0229a8b61c37520ee537214a2c8d514e84486a94/opentelemetry_instrumentation_httpx-0.62b0.tar.gz", hash = "sha256:d865398db3f3c289ba226e355bf4d94460a4301c0c8916e3136caea55ae18000", size = 24182, upload-time = "2026-04-09T14:40:38.719Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/af/88/dde310dce56e2d85cf1a09507f5888544955309edc4b8d22971d6d3d1417/opentelemetry_instrumentation_httpx-0.61b0-py3-none-any.whl", hash = "sha256:dee05c93a6593a5dc3ae5d9d5c01df8b4e2c5d02e49275e5558534ee46343d5e", size = 17198, upload-time = "2026-03-04T14:19:33.585Z" }, + { url = "https://files.pythonhosted.org/packages/c0/5e/7d5fc28487637871b015128cd5dbb3c36f6d343a9098b893bd803d5a9cca/opentelemetry_instrumentation_httpx-0.62b0-py3-none-any.whl", hash = "sha256:c7660b939c12608fec67743126e9b4dc23dceef0ed631c415924966b0d1579e3", size = 17200, upload-time = "2026-04-09T14:39:46.618Z" }, ] [[package]] name = "opentelemetry-instrumentation-redis" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3910,14 +4271,14 @@ dependencies = [ { name = "opentelemetry-semantic-conventions" }, { name = "wrapt" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/cf/21/26205f89358a5f2be3ee5512d3d3bce16b622977f64aeaa9d3fa8887dd39/opentelemetry_instrumentation_redis-0.61b0.tar.gz", hash = "sha256:ae0fbb56be9a641e621d55b02a7d62977a2c77c5ee760addd79b9b266e46e523", size = 14781, upload-time = "2026-03-04T14:20:45.694Z" } +sdist = { url = "https://files.pythonhosted.org/packages/55/7d/5acdb4e4e36c522f9393cfa91f7a431ee089663c77855e524bc97f993020/opentelemetry_instrumentation_redis-0.62b0.tar.gz", hash = "sha256:513bc6679ee251436f0aff7be7ddab6186637dde09a795a8dc9659103f103bef", size = 14796, upload-time = "2026-04-09T14:40:48.391Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a5/e1/8f4c8e4194291dbe828aeabe779050a8497b379ad90040a5a0a7074b1d08/opentelemetry_instrumentation_redis-0.61b0-py3-none-any.whl", hash = "sha256:8d4e850bbb5f8eeafa44c0eac3a007990c7125de187bc9c3659e29ff7e091172", size = 15506, upload-time = "2026-03-04T14:19:48.588Z" }, + { url = "https://files.pythonhosted.org/packages/de/42/a13a7da074c972a51c14277e7f747e90037b9d815515c73b802e95897690/opentelemetry_instrumentation_redis-0.62b0-py3-none-any.whl", hash = "sha256:92ada3d7bdf395785f660549b0e6e8e5bac7cab80e7f1369a7d02228b27684c3", size = 15501, upload-time = "2026-04-09T14:40:00.69Z" }, ] [[package]] name = "opentelemetry-instrumentation-sqlalchemy" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3926,14 +4287,14 @@ dependencies = [ { name = "packaging" }, { name = "wrapt" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9e/4f/3a325b180944610697a0a926d49d782b41a86120050d44fefb2715b630ac/opentelemetry_instrumentation_sqlalchemy-0.61b0.tar.gz", hash = "sha256:13a3a159a2043a52f0180b3757fbaa26741b0e08abb50deddce4394c118956e6", size = 15343, upload-time = "2026-03-04T14:20:47.648Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2a/3d/40adc8c38e5be017ceb230a28ca57ca81981d4dc0c4b902cc930c77fd14f/opentelemetry_instrumentation_sqlalchemy-0.62b0.tar.gz", hash = "sha256:d02f85b83f349e9ef70a34cb3f4c3a3481fa15b11747f09209818663e161cac4", size = 18539, upload-time = "2026-04-09T14:40:50.251Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/1f/97/b906a930c6a1a20c53ecc8b58cabc2cdd0ce560a2b5d44259084ffe4333e/opentelemetry_instrumentation_sqlalchemy-0.61b0-py3-none-any.whl", hash = "sha256:f115e0be54116ba4c327b8d7b68db4045ee18d44439d888ab8130a549c50d1c1", size = 14547, upload-time = "2026-03-04T14:19:53.088Z" }, + { url = "https://files.pythonhosted.org/packages/e7/e0/77954ac593f34740dc32e28a15fe7170e90f6ba6398eaaa5c88b34c05ed1/opentelemetry_instrumentation_sqlalchemy-0.62b0-py3-none-any.whl", hash = "sha256:ec576e0660080d9d15ce4fa44d2a07fff8cb4b796a84344cb0f2c9e5d6e26f79", size = 15534, upload-time = "2026-04-09T14:40:03.957Z" }, ] [[package]] name = "opentelemetry-instrumentation-wsgi" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, @@ -3941,75 +4302,75 @@ dependencies = [ { name = "opentelemetry-semantic-conventions" }, { name = "opentelemetry-util-http" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/89/e5/189f2845362cfe78e356ba127eab21456309def411c6874aa4800c3de816/opentelemetry_instrumentation_wsgi-0.61b0.tar.gz", hash = "sha256:380f2ae61714e5303275a80b2e14c58571573cd1fddf496d8c39fb9551c5e532", size = 19898, upload-time = "2026-03-04T14:20:54.068Z" } +sdist = { url = "https://files.pythonhosted.org/packages/b7/5c/ed45ff053d76c94c59173f2bcde3d61052adb10214f70f028f760aa56625/opentelemetry_instrumentation_wsgi-0.62b0.tar.gz", hash = "sha256:d179f969ecce0c29a15ffd4d982580dfae57c8ff2fd4d9366e299a6d4815e668", size = 19922, upload-time = "2026-04-09T14:40:56.227Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/96/75/d6b42ba26f3c921be6d01b16561b7bb863f843bad7ac3a5011f62617bcab/opentelemetry_instrumentation_wsgi-0.61b0-py3-none-any.whl", hash = "sha256:bd33b0824166f24134a3400648805e8d2e6a7951f070241294e8b8866611d7fa", size = 14628, upload-time = "2026-03-04T14:20:03.934Z" }, + { url = "https://files.pythonhosted.org/packages/f6/cb/753dbbe624df88594fa35a3ff26302fea22623385ed64462f6c8ee7c81eb/opentelemetry_instrumentation_wsgi-0.62b0-py3-none-any.whl", hash = "sha256:2714ab5ab2f35e67dc181ffa3a43fa15313c85c09b4d024c36d72cf1efa29c9a", size = 14628, upload-time = "2026-04-09T14:40:13.529Z" }, ] [[package]] name = "opentelemetry-propagator-b3" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/eb/fe/e0c84af5c654ec42165ba57af83c7f67e4b8af77f836ddc29dee59ff73c6/opentelemetry_propagator_b3-1.40.0.tar.gz", hash = "sha256:59b6925498947c08a1b7e0dd38193ff97e5009bec74ec23824300c2e32f77bcf", size = 9587, upload-time = "2026-03-04T14:17:30.079Z" } +sdist = { url = "https://files.pythonhosted.org/packages/ef/43/cea77e171c014324876104cf2a17c78f5e931408b977b9e64979f950912c/opentelemetry_propagator_b3-1.41.0.tar.gz", hash = "sha256:ef98b715b3a05e8b0b03ebaea1bf295b4ad61a0e306e2d1da81d32af7395e6ad", size = 9588, upload-time = "2026-04-09T14:38:43.328Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8f/84/8654cc0539b5145046b2e60d058cebad401a600dd0b1240f1711c6788643/opentelemetry_propagator_b3-1.40.0-py3-none-any.whl", hash = "sha256:cb72a1698fd1d1b434f70dc90c1de62da8ade1dd84850d1f040eccf6a420fa7b", size = 8922, upload-time = "2026-03-04T14:17:14.732Z" }, + { url = "https://files.pythonhosted.org/packages/50/c1/11345c06774ec6ed6d89e3994dd1f62ad2ab41dfeb312eacd6b2a2323280/opentelemetry_propagator_b3-1.41.0-py3-none-any.whl", hash = "sha256:0b085c26ba59fcb66771226f967e91886bdeef998b3b5f2e9da6a604918c6f90", size = 8923, upload-time = "2026-04-09T14:38:26.865Z" }, ] [[package]] name = "opentelemetry-proto" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "protobuf" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/4c/77/dd38991db037fdfce45849491cb61de5ab000f49824a00230afb112a4392/opentelemetry_proto-1.40.0.tar.gz", hash = "sha256:03f639ca129ba513f5819810f5b1f42bcb371391405d99c168fe6937c62febcd", size = 45667, upload-time = "2026-03-04T14:17:31.194Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e0/d9/08e3dc6156878713e8c811682bc76151f5fe1a3cb7f3abda3966fd56e71e/opentelemetry_proto-1.41.0.tar.gz", hash = "sha256:95d2e576f9fb1800473a3e4cfcca054295d06bdb869fda4dc9f4f779dc68f7b6", size = 45669, upload-time = "2026-04-09T14:38:45.978Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/b9/b2/189b2577dde745b15625b3214302605b1353436219d42b7912e77fa8dc24/opentelemetry_proto-1.40.0-py3-none-any.whl", hash = "sha256:266c4385d88923a23d63e353e9761af0f47a6ed0d486979777fe4de59dc9b25f", size = 72073, upload-time = "2026-03-04T14:17:16.673Z" }, + { url = "https://files.pythonhosted.org/packages/49/8c/65ef7a9383a363864772022e822b5d5c6988e6f9dabeebb9278f5b86ebc3/opentelemetry_proto-1.41.0-py3-none-any.whl", hash = "sha256:b970ab537309f9eed296be482c3e7cca05d8aca8165346e929f658dbe153b247", size = 72074, upload-time = "2026-04-09T14:38:29.38Z" }, ] [[package]] name = "opentelemetry-sdk" -version = "1.40.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "opentelemetry-semantic-conventions" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/58/fd/3c3125b20ba18ce2155ba9ea74acb0ae5d25f8cd39cfd37455601b7955cc/opentelemetry_sdk-1.40.0.tar.gz", hash = "sha256:18e9f5ec20d859d268c7cb3c5198c8d105d073714db3de50b593b8c1345a48f2", size = 184252, upload-time = "2026-03-04T14:17:31.87Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f8/0e/a586df1186f9f56b5a0879d52653effc40357b8e88fc50fe300038c3c08b/opentelemetry_sdk-1.41.0.tar.gz", hash = "sha256:7bddf3961131b318fc2d158947971a8e37e38b1cd23470cfb72b624e7cc108bd", size = 230181, upload-time = "2026-04-09T14:38:47.225Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2c/c5/6a852903d8bfac758c6dc6e9a68b015d3c33f2f1be5e9591e0f4b69c7e0a/opentelemetry_sdk-1.40.0-py3-none-any.whl", hash = "sha256:787d2154a71f4b3d81f20524a8ce061b7db667d24e46753f32a7bc48f1c1f3f1", size = 141951, upload-time = "2026-03-04T14:17:17.961Z" }, + { url = "https://files.pythonhosted.org/packages/2c/13/a7825118208cb32e6a4edcd0a99f925cbef81e77b3b0aedfd9125583c543/opentelemetry_sdk-1.41.0-py3-none-any.whl", hash = "sha256:a596f5687964a3e0d7f8edfdcf5b79cbca9c93c7025ebf5fb00f398a9443b0bd", size = 180214, upload-time = "2026-04-09T14:38:30.657Z" }, ] [[package]] name = "opentelemetry-semantic-conventions" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/6d/c0/4ae7973f3c2cfd2b6e321f1675626f0dab0a97027cc7a297474c9c8f3d04/opentelemetry_semantic_conventions-0.61b0.tar.gz", hash = "sha256:072f65473c5d7c6dc0355b27d6c9d1a679d63b6d4b4b16a9773062cb7e31192a", size = 145755, upload-time = "2026-03-04T14:17:32.664Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a3/b0/c14f723e86c049b7bf8ff431160d982519b97a7be2857ed2247377397a24/opentelemetry_semantic_conventions-0.62b0.tar.gz", hash = "sha256:cbfb3c8fc259575cf68a6e1b94083cc35adc4a6b06e8cf431efa0d62606c0097", size = 145753, upload-time = "2026-04-09T14:38:48.274Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/b2/37/cc6a55e448deaa9b27377d087da8615a3416d8ad523d5960b78dbeadd02a/opentelemetry_semantic_conventions-0.61b0-py3-none-any.whl", hash = "sha256:fa530a96be229795f8cef353739b618148b0fe2b4b3f005e60e262926c4d38e2", size = 231621, upload-time = "2026-03-04T14:17:19.33Z" }, + { url = "https://files.pythonhosted.org/packages/58/6c/5e86fa1759a525ef91c2d8b79d668574760ff3f900d114297765eb8786cb/opentelemetry_semantic_conventions-0.62b0-py3-none-any.whl", hash = "sha256:0ddac1ce59eaf1a827d9987ab60d9315fb27aea23304144242d1fcad9e16b489", size = 231619, upload-time = "2026-04-09T14:38:32.394Z" }, ] [[package]] name = "opentelemetry-util-http" -version = "0.61b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/57/3c/f0196223efc5c4ca19f8fad3d5462b171ac6333013335ce540c01af419e9/opentelemetry_util_http-0.61b0.tar.gz", hash = "sha256:1039cb891334ad2731affdf034d8fb8b48c239af9b6dd295e5fabd07f1c95572", size = 11361, upload-time = "2026-03-04T14:20:57.01Z" } +sdist = { url = "https://files.pythonhosted.org/packages/9b/e7/830f7c57135158eb8a8efd3f94ab191a89e3b8a49bed314a35ee501da3f2/opentelemetry_util_http-0.62b0.tar.gz", hash = "sha256:a62e4b19b8a432c0de657f167dee3455516136bb9c6ed463ca8063019970d835", size = 11393, upload-time = "2026-04-09T14:40:59.442Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/0d/e5/c08aaaf2f64288d2b6ef65741d2de5454e64af3e050f34285fb1907492fe/opentelemetry_util_http-0.61b0-py3-none-any.whl", hash = "sha256:8e715e848233e9527ea47e275659ea60a57a75edf5206a3b937e236a6da5fc33", size = 9281, upload-time = "2026-03-04T14:20:08.364Z" }, + { url = "https://files.pythonhosted.org/packages/3d/7f/5c1b7d4385852b9e5eacd4e7f9d8b565d3d351d17463b24916ad098adf1a/opentelemetry_util_http-0.62b0-py3-none-any.whl", hash = "sha256:c20462808d8cc95b69b0dc4a3e02a9d36beb663347e96c931f51ffd78bd318ad", size = 9294, upload-time = "2026-04-09T14:40:19.014Z" }, ] [[package]] name = "opik" -version = "1.10.58" +version = "1.11.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "boto3-stubs", extra = ["bedrock-runtime"] }, @@ -4027,22 +4388,23 @@ dependencies = [ { name = "tenacity" }, { name = "tqdm" }, { name = "uuid6" }, + { name = "watchfiles" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/52/bc/54673138cf374226ab9fcdd5685e92442c0d5a95775ff22b870c767387e6/opik-1.10.58.tar.gz", hash = "sha256:058f8b3e3171a1f5e75f25cf1fea392b8f2e0ddba18765fafd24cd756783002b", size = 833671, upload-time = "2026-04-01T11:43:21.571Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a3/b9/f6c7e41cb6c02f6e68fde9b6dacf377dcf42079cdbaf891f9fecf4dc958b/opik-1.11.2.tar.gz", hash = "sha256:79e054595b29e1ca8a4fd67d023249f0cf355ea9efbe3e00c28f51628d053d63", size = 871557, upload-time = "2026-04-10T10:48:14.965Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/33/9a/99cf048209f10f8444544202b007d5fbe0a6104465d29038b25932b1c79f/opik-1.10.58-py3-none-any.whl", hash = "sha256:29be9d7f846f3229a027250997195e583da840179ad03f3d28b1d613687963e3", size = 1400658, upload-time = "2026-04-01T11:43:20.096Z" }, + { url = "https://files.pythonhosted.org/packages/99/2d/e5536a2a1b6fdd920d995e09315523be53bde5fe01f104894d9ba7421a8c/opik-1.11.2-py3-none-any.whl", hash = "sha256:1016b6db7563d847e50e463a2ae09e595b6921372dd52edeada660b82036e1b2", size = 1451056, upload-time = "2026-04-10T10:48:12.927Z" }, ] [[package]] name = "optype" -version = "0.14.0" +version = "0.17.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/94/ca/d3a2abcf12cc8c18ccac1178ef87ab50a235bf386d2401341776fdad18aa/optype-0.14.0.tar.gz", hash = "sha256:925cf060b7d1337647f880401f6094321e7d8e837533b8e159b9a92afa3157c6", size = 100880, upload-time = "2025-10-01T04:49:56.232Z" } +sdist = { url = "https://files.pythonhosted.org/packages/81/9f/3b13bab05debf685678b8af004e46b8c67c6f98ffa08eaf5d33bcf162c16/optype-0.17.0.tar.gz", hash = "sha256:31351a1e64d9eba7bf67e14deefb286e85c66458db63c67dd5e26dd72e4664e5", size = 53484, upload-time = "2026-03-08T23:03:12.594Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/84/a6/11b0eb65eeafa87260d36858b69ec4e0072d09e37ea6714280960030bc93/optype-0.14.0-py3-none-any.whl", hash = "sha256:50d02edafd04edf2e5e27d6249760a51b2198adb9f6ffd778030b3d2806b026b", size = 89465, upload-time = "2025-10-01T04:49:54.674Z" }, + { url = "https://files.pythonhosted.org/packages/6b/44/dca78187415947d1bb90b2ee2a58e47d9573528331e8dc6196996b53612a/optype-0.17.0-py3-none-any.whl", hash = "sha256:8c2d88ff13149454bcf6eb47502f80d288bc542e7238fcc412ac4d222c439397", size = 65854, upload-time = "2026-03-08T23:03:11.425Z" }, ] [package.optional-dependencies] @@ -4116,32 +4478,32 @@ wheels = [ [[package]] name = "packaging" -version = "23.2" +version = "26.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/fb/2b/9b9c33ffed44ee921d0967086d653047286054117d584f1b1a7c22ceaf7b/packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5", size = 146714, upload-time = "2023-10-01T13:50:05.279Z" } +sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ec/1a/610693ac4ee14fcdf2d9bf3c493370e4f2ef7ae2e19217d7a237ff42367d/packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7", size = 53011, upload-time = "2023-10-01T13:50:03.745Z" }, + { url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" }, ] [[package]] name = "pandas" -version = "3.0.1" +version = "3.0.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "numpy" }, { name = "python-dateutil" }, { name = "tzdata", marker = "sys_platform == 'emscripten' or sys_platform == 'win32'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/2e/0c/b28ed414f080ee0ad153f848586d61d1878f91689950f037f976ce15f6c8/pandas-3.0.1.tar.gz", hash = "sha256:4186a699674af418f655dbd420ed87f50d56b4cd6603784279d9eef6627823c8", size = 4641901, upload-time = "2026-02-17T22:20:16.434Z" } +sdist = { url = "https://files.pythonhosted.org/packages/da/99/b342345300f13440fe9fe385c3c481e2d9a595ee3bab4d3219247ac94e9a/pandas-3.0.2.tar.gz", hash = "sha256:f4753e73e34c8d83221ba58f232433fca2748be8b18dbca02d242ed153945043", size = 4645855, upload-time = "2026-03-31T06:48:30.816Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/37/51/b467209c08dae2c624873d7491ea47d2b47336e5403309d433ea79c38571/pandas-3.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:476f84f8c20c9f5bc47252b66b4bb25e1a9fc2fa98cead96744d8116cb85771d", size = 10344357, upload-time = "2026-02-17T22:18:38.262Z" }, - { url = "https://files.pythonhosted.org/packages/7c/f1/e2567ffc8951ab371db2e40b2fe068e36b81d8cf3260f06ae508700e5504/pandas-3.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0ab749dfba921edf641d4036c4c21c0b3ea70fea478165cb98a998fb2a261955", size = 9884543, upload-time = "2026-02-17T22:18:41.476Z" }, - { url = "https://files.pythonhosted.org/packages/d7/39/327802e0b6d693182403c144edacbc27eb82907b57062f23ef5a4c4a5ea7/pandas-3.0.1-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b8e36891080b87823aff3640c78649b91b8ff6eea3c0d70aeabd72ea43ab069b", size = 10396030, upload-time = "2026-02-17T22:18:43.822Z" }, - { url = "https://files.pythonhosted.org/packages/3d/fe/89d77e424365280b79d99b3e1e7d606f5165af2f2ecfaf0c6d24c799d607/pandas-3.0.1-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:532527a701281b9dd371e2f582ed9094f4c12dd9ffb82c0c54ee28d8ac9520c4", size = 10876435, upload-time = "2026-02-17T22:18:45.954Z" }, - { url = "https://files.pythonhosted.org/packages/b5/a6/2a75320849dd154a793f69c951db759aedb8d1dd3939eeacda9bdcfa1629/pandas-3.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:356e5c055ed9b0da1580d465657bc7d00635af4fd47f30afb23025352ba764d1", size = 11405133, upload-time = "2026-02-17T22:18:48.533Z" }, - { url = "https://files.pythonhosted.org/packages/58/53/1d68fafb2e02d7881df66aa53be4cd748d25cbe311f3b3c85c93ea5d30ca/pandas-3.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:9d810036895f9ad6345b8f2a338dd6998a74e8483847403582cab67745bff821", size = 11932065, upload-time = "2026-02-17T22:18:50.837Z" }, - { url = "https://files.pythonhosted.org/packages/75/08/67cc404b3a966b6df27b38370ddd96b3b023030b572283d035181854aac5/pandas-3.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:536232a5fe26dd989bd633e7a0c450705fdc86a207fec7254a55e9a22950fe43", size = 9741627, upload-time = "2026-02-17T22:18:53.905Z" }, - { url = "https://files.pythonhosted.org/packages/86/4f/caf9952948fb00d23795f09b893d11f1cacb384e666854d87249530f7cbe/pandas-3.0.1-cp312-cp312-win_arm64.whl", hash = "sha256:0f463ebfd8de7f326d38037c7363c6dacb857c5881ab8961fb387804d6daf2f7", size = 9052483, upload-time = "2026-02-17T22:18:57.31Z" }, + { url = "https://files.pythonhosted.org/packages/f3/b0/c20bd4d6d3f736e6bd6b55794e9cd0a617b858eaad27c8f410ea05d953b7/pandas-3.0.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:232a70ebb568c0c4d2db4584f338c1577d81e3af63292208d615907b698a0f18", size = 10347921, upload-time = "2026-03-31T06:46:33.36Z" }, + { url = "https://files.pythonhosted.org/packages/35/d0/4831af68ce30cc2d03c697bea8450e3225a835ef497d0d70f31b8cdde965/pandas-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:970762605cff1ca0d3f71ed4f3a769ea8f85fc8e6348f6e110b8fea7e6eb5a14", size = 9888127, upload-time = "2026-03-31T06:46:36.253Z" }, + { url = "https://files.pythonhosted.org/packages/61/a9/16ea9346e1fc4a96e2896242d9bc674764fb9049b0044c0132502f7a771e/pandas-3.0.2-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:aff4e6f4d722e0652707d7bcb190c445fe58428500c6d16005b02401764b1b3d", size = 10399577, upload-time = "2026-03-31T06:46:39.224Z" }, + { url = "https://files.pythonhosted.org/packages/c4/a8/3a61a721472959ab0ce865ef05d10b0d6bfe27ce8801c99f33d4fa996e65/pandas-3.0.2-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ef8b27695c3d3dc78403c9a7d5e59a62d5464a7e1123b4e0042763f7104dc74f", size = 10880030, upload-time = "2026-03-31T06:46:42.412Z" }, + { url = "https://files.pythonhosted.org/packages/da/65/7225c0ea4d6ce9cb2160a7fb7f39804871049f016e74782e5dade4d14109/pandas-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f8d68083e49e16b84734eb1a4dcae4259a75c90fb6e2251ab9a00b61120c06ab", size = 11409468, upload-time = "2026-03-31T06:46:45.2Z" }, + { url = "https://files.pythonhosted.org/packages/fa/5b/46e7c76032639f2132359b5cf4c785dd8cf9aea5ea64699eac752f02b9db/pandas-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:32cc41f310ebd4a296d93515fcac312216adfedb1894e879303987b8f1e2b97d", size = 11936381, upload-time = "2026-03-31T06:46:48.293Z" }, + { url = "https://files.pythonhosted.org/packages/7b/8b/721a9cff6fa6a91b162eb51019c6243b82b3226c71bb6c8ef4a9bd65cbc6/pandas-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:a4785e1d6547d8427c5208b748ae2efb64659a21bd82bf440d4262d02bfa02a4", size = 9744993, upload-time = "2026-03-31T06:46:51.488Z" }, + { url = "https://files.pythonhosted.org/packages/d5/18/7f0bd34ae27b28159aa80f2a6799f47fda34f7fb938a76e20c7b7fe3b200/pandas-3.0.2-cp312-cp312-win_arm64.whl", hash = "sha256:08504503f7101300107ecdc8df73658e4347586db5cfdadabc1592e9d7e7a0fd", size = 9056118, upload-time = "2026-03-31T06:46:54.548Z" }, ] [package.optional-dependencies] @@ -4153,15 +4515,6 @@ excel = [ { name = "xlrd" }, { name = "xlsxwriter" }, ] -output-formatting = [ - { name = "jinja2" }, - { name = "tabulate" }, -] -performance = [ - { name = "bottleneck" }, - { name = "numba" }, - { name = "numexpr" }, -] [[package]] name = "pandas-stubs" @@ -4229,21 +4582,21 @@ wheels = [ [[package]] name = "pillow" -version = "12.1.1" +version = "12.2.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/1f/42/5c74462b4fd957fcd7b13b04fb3205ff8349236ea74c7c375766d6c82288/pillow-12.1.1.tar.gz", hash = "sha256:9ad8fa5937ab05218e2b6a4cff30295ad35afd2f83ac592e68c0d871bb0fdbc4", size = 46980264, upload-time = "2026-02-11T04:23:07.146Z" } +sdist = { url = "https://files.pythonhosted.org/packages/8c/21/c2bcdd5906101a30244eaffc1b6e6ce71a31bd0742a01eb89e660ebfac2d/pillow-12.2.0.tar.gz", hash = "sha256:a830b1a40919539d07806aa58e1b114df53ddd43213d9c8b75847eee6c0182b5", size = 46987819, upload-time = "2026-04-01T14:46:17.687Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/07/d3/8df65da0d4df36b094351dce696f2989bec731d4f10e743b1c5f4da4d3bf/pillow-12.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ab323b787d6e18b3d91a72fc99b1a2c28651e4358749842b8f8dfacd28ef2052", size = 5262803, upload-time = "2026-02-11T04:20:47.653Z" }, - { url = "https://files.pythonhosted.org/packages/d6/71/5026395b290ff404b836e636f51d7297e6c83beceaa87c592718747e670f/pillow-12.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:adebb5bee0f0af4909c30db0d890c773d1a92ffe83da908e2e9e720f8edf3984", size = 4657601, upload-time = "2026-02-11T04:20:49.328Z" }, - { url = "https://files.pythonhosted.org/packages/b1/2e/1001613d941c67442f745aff0f7cc66dd8df9a9c084eb497e6a543ee6f7e/pillow-12.1.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:bb66b7cc26f50977108790e2456b7921e773f23db5630261102233eb355a3b79", size = 6234995, upload-time = "2026-02-11T04:20:51.032Z" }, - { url = "https://files.pythonhosted.org/packages/07/26/246ab11455b2549b9233dbd44d358d033a2f780fa9007b61a913c5b2d24e/pillow-12.1.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:aee2810642b2898bb187ced9b349e95d2a7272930796e022efaf12e99dccd293", size = 8045012, upload-time = "2026-02-11T04:20:52.882Z" }, - { url = "https://files.pythonhosted.org/packages/b2/8b/07587069c27be7535ac1fe33874e32de118fbd34e2a73b7f83436a88368c/pillow-12.1.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a0b1cd6232e2b618adcc54d9882e4e662a089d5768cd188f7c245b4c8c44a397", size = 6349638, upload-time = "2026-02-11T04:20:54.444Z" }, - { url = "https://files.pythonhosted.org/packages/ff/79/6df7b2ee763d619cda2fb4fea498e5f79d984dae304d45a8999b80d6cf5c/pillow-12.1.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7aac39bcf8d4770d089588a2e1dd111cbaa42df5a94be3114222057d68336bd0", size = 7041540, upload-time = "2026-02-11T04:20:55.97Z" }, - { url = "https://files.pythonhosted.org/packages/2c/5e/2ba19e7e7236d7529f4d873bdaf317a318896bac289abebd4bb00ef247f0/pillow-12.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ab174cd7d29a62dd139c44bf74b698039328f45cb03b4596c43473a46656b2f3", size = 6462613, upload-time = "2026-02-11T04:20:57.542Z" }, - { url = "https://files.pythonhosted.org/packages/03/03/31216ec124bb5c3dacd74ce8efff4cc7f52643653bad4825f8f08c697743/pillow-12.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:339ffdcb7cbeaa08221cd401d517d4b1fe7a9ed5d400e4a8039719238620ca35", size = 7166745, upload-time = "2026-02-11T04:20:59.196Z" }, - { url = "https://files.pythonhosted.org/packages/1f/e7/7c4552d80052337eb28653b617eafdef39adfb137c49dd7e831b8dc13bc5/pillow-12.1.1-cp312-cp312-win32.whl", hash = "sha256:5d1f9575a12bed9e9eedd9a4972834b08c97a352bd17955ccdebfeca5913fa0a", size = 6328823, upload-time = "2026-02-11T04:21:01.385Z" }, - { url = "https://files.pythonhosted.org/packages/3d/17/688626d192d7261bbbf98846fc98995726bddc2c945344b65bec3a29d731/pillow-12.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:21329ec8c96c6e979cd0dfd29406c40c1d52521a90544463057d2aaa937d66a6", size = 7033367, upload-time = "2026-02-11T04:21:03.536Z" }, - { url = "https://files.pythonhosted.org/packages/ed/fe/a0ef1f73f939b0eca03ee2c108d0043a87468664770612602c63266a43c4/pillow-12.1.1-cp312-cp312-win_arm64.whl", hash = "sha256:af9a332e572978f0218686636610555ae3defd1633597be015ed50289a03c523", size = 2453811, upload-time = "2026-02-11T04:21:05.116Z" }, + { url = "https://files.pythonhosted.org/packages/58/be/7482c8a5ebebbc6470b3eb791812fff7d5e0216c2be3827b30b8bb6603ed/pillow-12.2.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:2d192a155bbcec180f8564f693e6fd9bccff5a7af9b32e2e4bf8c9c69dbad6b5", size = 5308279, upload-time = "2026-04-01T14:43:13.246Z" }, + { url = "https://files.pythonhosted.org/packages/d8/95/0a351b9289c2b5cbde0bacd4a83ebc44023e835490a727b2a3bd60ddc0f4/pillow-12.2.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f3f40b3c5a968281fd507d519e444c35f0ff171237f4fdde090dd60699458421", size = 4695490, upload-time = "2026-04-01T14:43:15.584Z" }, + { url = "https://files.pythonhosted.org/packages/de/af/4e8e6869cbed569d43c416fad3dc4ecb944cb5d9492defaed89ddd6fe871/pillow-12.2.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:03e7e372d5240cc23e9f07deca4d775c0817bffc641b01e9c3af208dbd300987", size = 6284462, upload-time = "2026-04-01T14:43:18.268Z" }, + { url = "https://files.pythonhosted.org/packages/e9/9e/c05e19657fd57841e476be1ab46c4d501bffbadbafdc31a6d665f8b737b6/pillow-12.2.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b86024e52a1b269467a802258c25521e6d742349d760728092e1bc2d135b4d76", size = 8094744, upload-time = "2026-04-01T14:43:20.716Z" }, + { url = "https://files.pythonhosted.org/packages/2b/54/1789c455ed10176066b6e7e6da1b01e50e36f94ba584dc68d9eebfe9156d/pillow-12.2.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7371b48c4fa448d20d2714c9a1f775a81155050d383333e0a6c15b1123dda005", size = 6398371, upload-time = "2026-04-01T14:43:23.443Z" }, + { url = "https://files.pythonhosted.org/packages/43/e3/fdc657359e919462369869f1c9f0e973f353f9a9ee295a39b1fea8ee1a77/pillow-12.2.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:62f5409336adb0663b7caa0da5c7d9e7bdbaae9ce761d34669420c2a801b2780", size = 7087215, upload-time = "2026-04-01T14:43:26.758Z" }, + { url = "https://files.pythonhosted.org/packages/8b/f8/2f6825e441d5b1959d2ca5adec984210f1ec086435b0ed5f52c19b3b8a6e/pillow-12.2.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:01afa7cf67f74f09523699b4e88c73fb55c13346d212a59a2db1f86b0a63e8c5", size = 6509783, upload-time = "2026-04-01T14:43:29.56Z" }, + { url = "https://files.pythonhosted.org/packages/67/f9/029a27095ad20f854f9dba026b3ea6428548316e057e6fc3545409e86651/pillow-12.2.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fc3d34d4a8fbec3e88a79b92e5465e0f9b842b628675850d860b8bd300b159f5", size = 7212112, upload-time = "2026-04-01T14:43:32.091Z" }, + { url = "https://files.pythonhosted.org/packages/be/42/025cfe05d1be22dbfdb4f264fe9de1ccda83f66e4fc3aac94748e784af04/pillow-12.2.0-cp312-cp312-win32.whl", hash = "sha256:58f62cc0f00fd29e64b29f4fd923ffdb3859c9f9e6105bfc37ba1d08994e8940", size = 6378489, upload-time = "2026-04-01T14:43:34.601Z" }, + { url = "https://files.pythonhosted.org/packages/5d/7b/25a221d2c761c6a8ae21bfa3874988ff2583e19cf8a27bf2fee358df7942/pillow-12.2.0-cp312-cp312-win_amd64.whl", hash = "sha256:7f84204dee22a783350679a0333981df803dac21a0190d706a50475e361c93f5", size = 7084129, upload-time = "2026-04-01T14:43:37.213Z" }, + { url = "https://files.pythonhosted.org/packages/10/e1/542a474affab20fd4a0f1836cb234e8493519da6b76899e30bcc5d990b8b/pillow-12.2.0-cp312-cp312-win_arm64.whl", hash = "sha256:af73337013e0b3b46f175e79492d96845b16126ddf79c438d7ea7ff27783a414", size = 2463612, upload-time = "2026-04-01T14:43:39.421Z" }, ] [[package]] @@ -4266,13 +4619,14 @@ wheels = [ [[package]] name = "polyfile-weave" -version = "0.5.8" +version = "0.5.9" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "abnf" }, { name = "chardet" }, { name = "cint" }, { name = "fickling" }, + { name = "filelock" }, { name = "graphviz" }, { name = "intervaltree" }, { name = "jinja2" }, @@ -4282,11 +4636,10 @@ dependencies = [ { name = "pillow" }, { name = "pyreadline3", marker = "sys_platform == 'win32'" }, { name = "pyyaml" }, - { name = "setuptools" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/e7/d4/76e56e4429646d9353b4287794f8324ff94201bdb0a2c35ce88cf3de90d0/polyfile_weave-0.5.8.tar.gz", hash = "sha256:cf2ca6a1351165fbbf2971ace4b8bebbb03b2c00e4f2159ff29bed88854e7b32", size = 5989602, upload-time = "2026-01-08T04:21:26.689Z" } +sdist = { url = "https://files.pythonhosted.org/packages/70/55/e5400762e3884f743d59291e71eaaa9c52dd7e144b75a11911e74ec1bac9/polyfile_weave-0.5.9.tar.gz", hash = "sha256:12341fab03e06ede1bfebbd3627dd24015fde5353ea74ece2da186321b818bdb", size = 6024974, upload-time = "2026-01-22T22:08:48.081Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/54/32/c09fd626366c00325d1981e310be5cac8661c09206098d267a592e0c5000/polyfile_weave-0.5.8-py3-none-any.whl", hash = "sha256:f68c570ef189a4219798a7c797730fc3b7feace7ff5bd7e662490f89b772964a", size = 1656208, upload-time = "2026-01-08T04:21:15.213Z" }, + { url = "https://files.pythonhosted.org/packages/52/94/215005530a48c5f7d4ec4a31acdb5828f2bfb985cc6e577b0eaa5882c0e2/polyfile_weave-0.5.9-py3-none-any.whl", hash = "sha256:6ae4b1b5eeac9f5bfc862474484d6d3e33655fab31749d93af0b0a91fddabfc7", size = 1700174, upload-time = "2026-01-22T22:08:46.346Z" }, ] [[package]] @@ -4511,20 +4864,17 @@ wheels = [ [[package]] name = "pyarrow" -version = "14.0.2" +version = "23.0.1" source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "numpy" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/d7/8b/d18b7eb6fb22e5ed6ffcbc073c85dae635778dbd1270a6cf5d750b031e84/pyarrow-14.0.2.tar.gz", hash = "sha256:36cef6ba12b499d864d1def3e990f97949e0b79400d08b7cf74504ffbd3eb025", size = 1063645, upload-time = "2023-12-18T15:43:41.625Z" } +sdist = { url = "https://files.pythonhosted.org/packages/88/22/134986a4cc224d593c1afde5494d18ff629393d74cc2eddb176669f234a4/pyarrow-23.0.1.tar.gz", hash = "sha256:b8c5873e33440b2bc2f4a79d2b47017a89c5a24116c055625e6f2ee50523f019", size = 1167336, upload-time = "2026-02-16T10:14:12.39Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/69/5b/d8ab6c20c43b598228710e4e4a6cba03a01f6faa3d08afff9ce76fd0fd47/pyarrow-14.0.2-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:c87824a5ac52be210d32906c715f4ed7053d0180c1060ae3ff9b7e560f53f944", size = 26819585, upload-time = "2023-12-18T15:41:27.59Z" }, - { url = "https://files.pythonhosted.org/packages/2d/29/bed2643d0dd5e9570405244a61f6db66c7f4704a6e9ce313f84fa5a3675a/pyarrow-14.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a25eb2421a58e861f6ca91f43339d215476f4fe159eca603c55950c14f378cc5", size = 23965222, upload-time = "2023-12-18T15:41:32.449Z" }, - { url = "https://files.pythonhosted.org/packages/2a/34/da464632e59a8cdd083370d69e6c14eae30221acb284f671c6bc9273fadd/pyarrow-14.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c1da70d668af5620b8ba0a23f229030a4cd6c5f24a616a146f30d2386fec422", size = 35942036, upload-time = "2023-12-18T15:41:38.767Z" }, - { url = "https://files.pythonhosted.org/packages/a8/ff/cbed4836d543b29f00d2355af67575c934999ff1d43e3f438ab0b1b394f1/pyarrow-14.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2cc61593c8e66194c7cdfae594503e91b926a228fba40b5cf25cc593563bcd07", size = 38089266, upload-time = "2023-12-18T15:41:47.617Z" }, - { url = "https://files.pythonhosted.org/packages/38/41/345011cb831d3dbb2dab762fc244c745a5df94b199223a99af52a5f7dff6/pyarrow-14.0.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:78ea56f62fb7c0ae8ecb9afdd7893e3a7dbeb0b04106f5c08dbb23f9c0157591", size = 35404468, upload-time = "2023-12-18T15:41:54.49Z" }, - { url = "https://files.pythonhosted.org/packages/fd/af/2fc23ca2068ff02068d8dabf0fb85b6185df40ec825973470e613dbd8790/pyarrow-14.0.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:37c233ddbce0c67a76c0985612fef27c0c92aef9413cf5aa56952f359fcb7379", size = 38003134, upload-time = "2023-12-18T15:42:01.593Z" }, - { url = "https://files.pythonhosted.org/packages/95/1f/9d912f66a87e3864f694e000977a6a70a644ea560289eac1d733983f215d/pyarrow-14.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:e4b123ad0f6add92de898214d404e488167b87b5dd86e9a434126bc2b7a5578d", size = 25043754, upload-time = "2023-12-18T15:42:07.108Z" }, + { url = "https://files.pythonhosted.org/packages/9a/4b/4166bb5abbfe6f750fc60ad337c43ecf61340fa52ab386da6e8dbf9e63c4/pyarrow-23.0.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:f4b0dbfa124c0bb161f8b5ebb40f1a680b70279aa0c9901d44a2b5a20806039f", size = 34214575, upload-time = "2026-02-16T10:09:56.225Z" }, + { url = "https://files.pythonhosted.org/packages/e1/da/3f941e3734ac8088ea588b53e860baeddac8323ea40ce22e3d0baa865cc9/pyarrow-23.0.1-cp312-cp312-macosx_12_0_x86_64.whl", hash = "sha256:7707d2b6673f7de054e2e83d59f9e805939038eebe1763fe811ee8fa5c0cd1a7", size = 35832540, upload-time = "2026-02-16T10:10:03.428Z" }, + { url = "https://files.pythonhosted.org/packages/88/7c/3d841c366620e906d54430817531b877ba646310296df42ef697308c2705/pyarrow-23.0.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:86ff03fb9f1a320266e0de855dee4b17da6794c595d207f89bba40d16b5c78b9", size = 44470940, upload-time = "2026-02-16T10:10:10.704Z" }, + { url = "https://files.pythonhosted.org/packages/2c/a5/da83046273d990f256cb79796a190bbf7ec999269705ddc609403f8c6b06/pyarrow-23.0.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:813d99f31275919c383aab17f0f455a04f5a429c261cc411b1e9a8f5e4aaaa05", size = 47586063, upload-time = "2026-02-16T10:10:17.95Z" }, + { url = "https://files.pythonhosted.org/packages/5b/3c/b7d2ebcff47a514f47f9da1e74b7949138c58cfeb108cdd4ee62f43f0cf3/pyarrow-23.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:bf5842f960cddd2ef757d486041d57c96483efc295a8c4a0e20e704cbbf39c67", size = 48173045, upload-time = "2026-02-16T10:10:25.363Z" }, + { url = "https://files.pythonhosted.org/packages/43/b2/b40961262213beaba6acfc88698eb773dfce32ecdf34d19291db94c2bd73/pyarrow-23.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:564baf97c858ecc03ec01a41062e8f4698abc3e6e2acd79c01c2e97880a19730", size = 50621741, upload-time = "2026-02-16T10:10:33.477Z" }, + { url = "https://files.pythonhosted.org/packages/f6/70/1fdda42d65b28b078e93d75d371b2185a61da89dda4def8ba6ba41ebdeb4/pyarrow-23.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:07deae7783782ac7250989a7b2ecde9b3c343a643f82e8a4df03d93b633006f0", size = 27620678, upload-time = "2026-02-16T10:10:39.31Z" }, ] [[package]] @@ -4658,11 +5008,11 @@ wheels = [ [[package]] name = "pyjwt" -version = "2.12.0" +version = "2.12.1" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/a8/10/e8192be5f38f3e8e7e046716de4cae33d56fd5ae08927a823bb916be36c1/pyjwt-2.12.0.tar.gz", hash = "sha256:2f62390b667cd8257de560b850bb5a883102a388829274147f1d724453f8fb02", size = 102511, upload-time = "2026-03-12T17:15:30.831Z" } +sdist = { url = "https://files.pythonhosted.org/packages/c2/27/a3b6e5bf6ff856d2509292e95c8f57f0df7017cf5394921fc4e4ef40308a/pyjwt-2.12.1.tar.gz", hash = "sha256:c74a7a2adf861c04d002db713dd85f84beb242228e671280bf709d765b03672b", size = 102564, upload-time = "2026-03-13T19:27:37.25Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/15/70/70f895f404d363d291dcf62c12c85fdd47619ad9674ac0f53364d035925a/pyjwt-2.12.0-py3-none-any.whl", hash = "sha256:9bb459d1bdd0387967d287f5656bf7ec2b9a26645d1961628cda1764e087fd6e", size = 29700, upload-time = "2026-03-12T17:15:29.257Z" }, + { url = "https://files.pythonhosted.org/packages/e5/7a/8dd906bd22e79e47397a61742927f6747fe93242ef86645ee9092e610244/pyjwt-2.12.1-py3-none-any.whl", hash = "sha256:28ca37c070cad8ba8cd9790cd940535d40274d22f80ab87f3ac6a713e6e8454c", size = 29726, upload-time = "2026-03-13T19:27:35.677Z" }, ] [package.optional-dependencies] @@ -4672,7 +5022,7 @@ crypto = [ [[package]] name = "pymilvus" -version = "2.6.11" +version = "2.6.12" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cachetools" }, @@ -4684,9 +5034,9 @@ dependencies = [ { name = "requests" }, { name = "setuptools" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/c9/e6/0adc3b374f5c5d1eebd4f551b455c6865c449b170b17545001b208e2b153/pymilvus-2.6.11.tar.gz", hash = "sha256:a40c10322cde25184a8c3d84993a14dfb67ad2bdcfc5dff7e68b11a79ff8f6d8", size = 1583634, upload-time = "2026-03-27T06:25:46.023Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2c/d7/c5d1381248a33975ccc864a0f980f93270ecc35354de8646c8a16443cccb/pymilvus-2.6.12.tar.gz", hash = "sha256:8323e990dc305e607fef525498eb779e42940a69e0691dde009cd02d48845f7a", size = 1584521, upload-time = "2026-04-09T07:49:11.374Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/9c/1c/bccb331d71f824738f80f11e9b8b4da47973c903826355526ae4fa2b762f/pymilvus-2.6.11-py3-none-any.whl", hash = "sha256:a11e1718b15045361c71ca671b959900cb7e2faae863c896f6b7e87bf2e4d10a", size = 315252, upload-time = "2026-03-27T06:25:44.215Z" }, + { url = "https://files.pythonhosted.org/packages/ce/5d/44b0fa94c91503381e6f12298277f84f8e7b0bb00715ab89fc273c4d681e/pymilvus-2.6.12-py3-none-any.whl", hash = "sha256:69051b8b62712f157b2b50aeb7bde7fd7cdb5940aac0122094eb3cd58bc20f0d", size = 315183, upload-time = "2026-04-09T07:49:09.013Z" }, ] [[package]] @@ -4763,11 +5113,11 @@ wheels = [ [[package]] name = "pypdf" -version = "6.9.2" +version = "6.10.2" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/31/83/691bdb309306232362503083cb15777491045dd54f45393a317dc7d8082f/pypdf-6.9.2.tar.gz", hash = "sha256:7f850faf2b0d4ab936582c05da32c52214c2b089d61a316627b5bfb5b0dab46c", size = 5311837, upload-time = "2026-03-23T14:53:27.983Z" } +sdist = { url = "https://files.pythonhosted.org/packages/7b/3f/9f2167401c2e94833ca3b69535bad89e533b5de75fefe4197a2c224baec2/pypdf-6.10.2.tar.gz", hash = "sha256:7d09ce108eff6bf67465d461b6ef352dcb8d84f7a91befc02f904455c6eea11d", size = 5315679, upload-time = "2026-04-15T16:37:36.978Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a5/7e/c85f41243086a8fe5d1baeba527cb26a1918158a565932b41e0f7c0b32e9/pypdf-6.9.2-py3-none-any.whl", hash = "sha256:662cf29bcb419a36a1365232449624ab40b7c2d0cfc28e54f42eeecd1fd7e844", size = 333744, upload-time = "2026-03-23T14:53:26.573Z" }, + { url = "https://files.pythonhosted.org/packages/0c/d6/1d5c60cc17bbdf37c1552d9c03862fc6d32c5836732a0415b2d637edc2d0/pypdf-6.10.2-py3-none-any.whl", hash = "sha256:aa53be9826655b51c96741e5d7983ca224d898ac0a77896e64636810517624aa", size = 336308, upload-time = "2026-04-15T16:37:34.851Z" }, ] [[package]] @@ -4825,24 +5175,24 @@ wheels = [ [[package]] name = "pyrefly" -version = "0.59.1" +version = "0.60.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d5/ce/7882c2af92b2ff6505fcd3430eff8048ece6c6254cc90bdc76ecee12dfab/pyrefly-0.59.1.tar.gz", hash = "sha256:bf1675b0c38d45df2c8f8618cbdfa261a1b92430d9d31eba16e0282b551e210f", size = 5475432, upload-time = "2026-04-01T22:04:04.11Z" } +sdist = { url = "https://files.pythonhosted.org/packages/c6/c7/28d14b64888e2d03815627ebff8d57a9f08389c4bbebfe70ae1ed98a1267/pyrefly-0.60.0.tar.gz", hash = "sha256:2499f5b6ff5342e86dfe1cd94bcce133519bbbc93b7ad5636195fea4f0fa3b81", size = 5500389, upload-time = "2026-04-06T19:57:30.643Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/d0/10/04a0e05b08fc855b6fe38c3df549925fc3c2c6e750506870de7335d3e1f7/pyrefly-0.59.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:390db3cd14aa7e0268e847b60cd9ee18b04273eddfa38cf341ed3bb43f3fef2a", size = 12868133, upload-time = "2026-04-01T22:03:39.436Z" }, - { url = "https://files.pythonhosted.org/packages/c7/78/fa7be227c3e3fcacee501c1562278dd026186ffd1b5b5beb51d3941a3aed/pyrefly-0.59.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d246d417b6187c1650d7f855f61c68fbfd6d6155dc846d4e4d273a3e6b5175cb", size = 12379325, upload-time = "2026-04-01T22:03:42.046Z" }, - { url = "https://files.pythonhosted.org/packages/bb/13/6828ce1c98171b5f8388f33c4b0b9ea2ab8c49abe0ef8d793c31e30a05cb/pyrefly-0.59.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:575ac67b04412dc651a7143d27e38a40fbdd3c831c714d5520d0e9d4c8631ab4", size = 35826408, upload-time = "2026-04-01T22:03:45.067Z" }, - { url = "https://files.pythonhosted.org/packages/23/56/79ed8ece9a7ecad0113c394a06a084107db3ad8f1fefe19e7ded43c51245/pyrefly-0.59.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:062e6262ce1064d59dcad81ac0499bb7a3ad501e9bc8a677a50dc630ff0bf862", size = 38532699, upload-time = "2026-04-01T22:03:48.376Z" }, - { url = "https://files.pythonhosted.org/packages/18/7d/ecc025e0f0e3f295b497f523cc19cefaa39e57abede8fc353d29445d174b/pyrefly-0.59.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:43ef4247f9e6f734feb93e1f2b75335b943629956e509f545cc9cdcccd76dd20", size = 36743570, upload-time = "2026-04-01T22:03:51.362Z" }, - { url = "https://files.pythonhosted.org/packages/2f/03/b1ce882ebcb87c673165c00451fbe4df17bf96ccfde18c75880dc87c5f5e/pyrefly-0.59.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59a2d01723b84d042f4fa6ec871ffd52d0a7e83b0ea791c2e0bb0ff750abce56", size = 41236246, upload-time = "2026-04-01T22:03:54.361Z" }, - { url = "https://files.pythonhosted.org/packages/17/af/5e9c7afd510e7dd64a2204be0ed39e804089cbc4338675a28615c7176acb/pyrefly-0.59.1-py3-none-win32.whl", hash = "sha256:4ea70c780848f8376411e787643ae5d2d09da8a829362332b7b26d15ebcbaf56", size = 11884747, upload-time = "2026-04-01T22:03:56.776Z" }, - { url = "https://files.pythonhosted.org/packages/aa/c1/7db1077627453fd1068f0761f059a9512645c00c4c20acfb9f0c24ac02ec/pyrefly-0.59.1-py3-none-win_amd64.whl", hash = "sha256:67e6a08cfd129a0d2788d5e40a627f9860e0fe91a876238d93d5c63ff4af68ae", size = 12720608, upload-time = "2026-04-01T22:03:59.252Z" }, - { url = "https://files.pythonhosted.org/packages/07/16/4bb6e5fce5a9cf0992932d9435d964c33e507aaaf96fdfbb1be493078a4a/pyrefly-0.59.1-py3-none-win_arm64.whl", hash = "sha256:01179cb215cf079e8223a064f61a074f7079aa97ea705cbbc68af3d6713afd15", size = 12223158, upload-time = "2026-04-01T22:04:01.869Z" }, + { url = "https://files.pythonhosted.org/packages/31/99/6c9984a09220e5eb7dd5c869b7a32d25c3d06b5e8854c6eb679db1145c3e/pyrefly-0.60.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:bf1691af0fee69d0c99c3c6e9d26ab6acd3c8afef96416f9ba2e74934833b7b5", size = 12921262, upload-time = "2026-04-06T19:57:00.745Z" }, + { url = "https://files.pythonhosted.org/packages/05/b3/6216aa3c00c88e59a27eb4149851b5affe86eeea6129f4224034a32dddb0/pyrefly-0.60.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:3e71b70c9b95545cf3b479bc55d1381b531de7b2380eb64411088a1e56b634cb", size = 12424413, upload-time = "2026-04-06T19:57:03.417Z" }, + { url = "https://files.pythonhosted.org/packages/9b/87/eb8dd73abd92a93952ac27a605e463c432fb250fb23186574038c7035594/pyrefly-0.60.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:680ee5f8f98230ea145652d7344708f5375786209c5bf03d8b911fdb0d0d4195", size = 35940884, upload-time = "2026-04-06T19:57:06.909Z" }, + { url = "https://files.pythonhosted.org/packages/0d/34/dc6aeb67b840c745fcee6db358295d554abe6ab555a7eaaf44624bd80bf1/pyrefly-0.60.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0d0b20dbbe4aff15b959e8d825b7521a144c4122c11e57022e83b36568c54470", size = 38677220, upload-time = "2026-04-06T19:57:11.235Z" }, + { url = "https://files.pythonhosted.org/packages/66/6b/c863fcf7ef592b7d1db91502acf0d1113be8bed7a2a7143fc6f0dd90616f/pyrefly-0.60.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2911563c8e6b2eaefff68885c94727965469a35375a409235a7a4d2b7157dc15", size = 36907431, upload-time = "2026-04-06T19:57:15.074Z" }, + { url = "https://files.pythonhosted.org/packages/8e/a2/25ea095ab2ecca8e62884669b11a79f14299db93071685b73a97efbaf4f3/pyrefly-0.60.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e0a631d9d04705e303fe156f2e62551611bc7ef8066c34708ceebcfb3088bd55", size = 41447898, upload-time = "2026-04-06T19:57:19.382Z" }, + { url = "https://files.pythonhosted.org/packages/8e/2c/097bdc6e8d40676b28eb03710a4577bc3c7b803cd24693ac02bf15de3d67/pyrefly-0.60.0-py3-none-win32.whl", hash = "sha256:a08d69298da5626cf502d3debbb6944fd13d2f405ea6625363751f1ff570d366", size = 11913434, upload-time = "2026-04-06T19:57:22.887Z" }, + { url = "https://files.pythonhosted.org/packages/0a/d4/8d27fe310e830c8d11ab73db38b93f9fd2e218744b6efb1204401c9a74d5/pyrefly-0.60.0-py3-none-win_amd64.whl", hash = "sha256:56cf30654e708ae1dd635ffefcba4fa4b349dd7004a6ccc5c41e3a9bb944320c", size = 12745033, upload-time = "2026-04-06T19:57:25.517Z" }, + { url = "https://files.pythonhosted.org/packages/1f/ad/8eea1f8fb8209f91f6dbfe48000c9d05fd0cdb1b5b3157283c9b1dada55d/pyrefly-0.60.0-py3-none-win_arm64.whl", hash = "sha256:b6d27fba970f4777063c0227c54167d83bece1804ea34f69e7118e409ba038d2", size = 12246390, upload-time = "2026-04-06T19:57:28.141Z" }, ] [[package]] name = "pytest" -version = "9.0.2" +version = "9.0.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "colorama", marker = "sys_platform == 'win32'" }, @@ -4851,9 +5201,9 @@ dependencies = [ { name = "pluggy" }, { name = "pygments" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" } +sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" }, + { url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" }, ] [[package]] @@ -4988,6 +5338,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" }, ] +[[package]] +name = "python-engineio" +version = "4.13.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "simple-websocket" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/34/12/bdef9dbeedbe2cdeba2a2056ad27b1fb081557d34b69a97f574843462cae/python_engineio-4.13.1.tar.gz", hash = "sha256:0a853fcef52f5b345425d8c2b921ac85023a04dfcf75d7b74696c61e940fd066", size = 92348, upload-time = "2026-02-06T23:38:06.12Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/aa/54/0cce26da03a981f949bb8449c9778537f75f5917c172e1d2992ff25cb57d/python_engineio-4.13.1-py3-none-any.whl", hash = "sha256:f32ad10589859c11053ad7d9bb3c9695cdf862113bfb0d20bc4d890198287399", size = 59847, upload-time = "2026-02-06T23:38:04.861Z" }, +] + [[package]] name = "python-http-client" version = "3.3.7" @@ -5044,6 +5406,19 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/d9/4f/00be2196329ebbff56ce564aa94efb0fbc828d00de250b1980de1a34ab49/python_pptx-1.0.2-py3-none-any.whl", hash = "sha256:160838e0b8565a8b1f67947675886e9fea18aa5e795db7ae531606d68e785cba", size = 472788, upload-time = "2024-08-07T17:33:28.192Z" }, ] +[[package]] +name = "python-socketio" +version = "5.16.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "bidict" }, + { name = "python-engineio" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/59/81/cf8284f45e32efa18d3848ed82cdd4dcc1b657b082458fbe01ad3e1f2f8d/python_socketio-5.16.1.tar.gz", hash = "sha256:f863f98eacce81ceea2e742f6388e10ca3cdd0764be21d30d5196470edf5ea89", size = 128508, upload-time = "2026-02-06T23:42:07Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/07/c7/deb8c5e604404dbf10a3808a858946ca3547692ff6316b698945bb72177e/python_socketio-5.16.1-py3-none-any.whl", hash = "sha256:a3eb1702e92aa2f2b5d3ba00261b61f062cce51f1cfb6900bf3ab4d1934d2d35", size = 82054, upload-time = "2026-02-06T23:42:05.772Z" }, +] + [[package]] name = "pytz" version = "2025.2" @@ -5248,15 +5623,15 @@ wheels = [ [[package]] name = "resend" -version = "2.26.0" +version = "2.27.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "requests" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/07/ff/6a4e5e758fc2145c6a7d8563934d8ee24bf96a0212d7ec7d1af1f155bb74/resend-2.26.0.tar.gz", hash = "sha256:957a6a59dc597ce27fbd6d5383220dd9cc497fab99d4f3d775c8a42a449a569e", size = 36238, upload-time = "2026-03-20T22:49:09.728Z" } +sdist = { url = "https://files.pythonhosted.org/packages/96/da/3d342cacbde7143e36782243caa3715d9e49cadb43e804419493c784869b/resend-2.27.0.tar.gz", hash = "sha256:abc183da7566c1fdba8221ec5acd9f954c2ff516a0c2615bee2a41bc9db3e277", size = 37177, upload-time = "2026-04-01T21:19:31.823Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/16/c2/f88d3299d97aa1d36a923d0846fe185fcf5355ca898c954b2e5a79f090b5/resend-2.26.0-py2.py3-none-any.whl", hash = "sha256:5e25a804a84a68df504f2ade5369ac37e0139e37788a1f20b66c88696595b4bc", size = 57699, upload-time = "2026-03-20T22:49:08.354Z" }, + { url = "https://files.pythonhosted.org/packages/b4/95/783b09d24c8f40b900a2728b67fd3c1401d4a6afcdf1db1c8475c249559d/resend-2.27.0-py2.py3-none-any.whl", hash = "sha256:5bc8ddebb0418127fc3e47eb29ab72af727861481c4b051b96cb693df8f8dc40", size = 59831, upload-time = "2026-04-01T21:19:30.471Z" }, ] [[package]] @@ -5310,27 +5685,27 @@ wheels = [ [[package]] name = "ruff" -version = "0.15.9" +version = "0.15.10" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/e6/97/e9f1ca355108ef7194e38c812ef40ba98c7208f47b13ad78d023caa583da/ruff-0.15.9.tar.gz", hash = "sha256:29cbb1255a9797903f6dde5ba0188c707907ff44a9006eb273b5a17bfa0739a2", size = 4617361, upload-time = "2026-04-02T18:17:20.829Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e7/d9/aa3f7d59a10ef6b14fe3431706f854dbf03c5976be614a9796d36326810c/ruff-0.15.10.tar.gz", hash = "sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e", size = 4631728, upload-time = "2026-04-09T14:06:09.884Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/0b/1f/9cdfd0ac4b9d1e5a6cf09bedabdf0b56306ab5e333c85c87281273e7b041/ruff-0.15.9-py3-none-linux_armv6l.whl", hash = "sha256:6efbe303983441c51975c243e26dff328aca11f94b70992f35b093c2e71801e1", size = 10511206, upload-time = "2026-04-02T18:16:41.574Z" }, - { url = "https://files.pythonhosted.org/packages/3d/f6/32bfe3e9c136b35f02e489778d94384118bb80fd92c6d92e7ccd97db12ce/ruff-0.15.9-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:4965bac6ac9ea86772f4e23587746f0b7a395eccabb823eb8bfacc3fa06069f7", size = 10923307, upload-time = "2026-04-02T18:17:08.645Z" }, - { url = "https://files.pythonhosted.org/packages/ca/25/de55f52ab5535d12e7aaba1de37a84be6179fb20bddcbe71ec091b4a3243/ruff-0.15.9-py3-none-macosx_11_0_arm64.whl", hash = "sha256:eaf05aad70ca5b5a0a4b0e080df3a6b699803916d88f006efd1f5b46302daab8", size = 10316722, upload-time = "2026-04-02T18:16:44.206Z" }, - { url = "https://files.pythonhosted.org/packages/48/11/690d75f3fd6278fe55fff7c9eb429c92d207e14b25d1cae4064a32677029/ruff-0.15.9-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9439a342adb8725f32f92732e2bafb6d5246bd7a5021101166b223d312e8fc59", size = 10623674, upload-time = "2026-04-02T18:16:50.951Z" }, - { url = "https://files.pythonhosted.org/packages/bd/ec/176f6987be248fc5404199255522f57af1b4a5a1b57727e942479fec98ad/ruff-0.15.9-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9c5e6faf9d97c8edc43877c3f406f47446fc48c40e1442d58cfcdaba2acea745", size = 10351516, upload-time = "2026-04-02T18:16:57.206Z" }, - { url = "https://files.pythonhosted.org/packages/b2/fc/51cffbd2b3f240accc380171d51446a32aa2ea43a40d4a45ada67368fbd2/ruff-0.15.9-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b34a9766aeec27a222373d0b055722900fbc0582b24f39661aa96f3fe6ad901", size = 11150202, upload-time = "2026-04-02T18:17:06.452Z" }, - { url = "https://files.pythonhosted.org/packages/d6/d4/25292a6dfc125f6b6528fe6af31f5e996e19bf73ca8e3ce6eb7fa5b95885/ruff-0.15.9-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89dd695bc72ae76ff484ae54b7e8b0f6b50f49046e198355e44ea656e521fef9", size = 11988891, upload-time = "2026-04-02T18:17:18.575Z" }, - { url = "https://files.pythonhosted.org/packages/13/e1/1eebcb885c10e19f969dcb93d8413dfee8172578709d7ee933640f5e7147/ruff-0.15.9-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ce187224ef1de1bd225bc9a152ac7102a6171107f026e81f317e4257052916d5", size = 11480576, upload-time = "2026-04-02T18:16:52.986Z" }, - { url = "https://files.pythonhosted.org/packages/ff/6b/a1548ac378a78332a4c3dcf4a134c2475a36d2a22ddfa272acd574140b50/ruff-0.15.9-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2b0c7c341f68adb01c488c3b7d4b49aa8ea97409eae6462d860a79cf55f431b6", size = 11254525, upload-time = "2026-04-02T18:17:02.041Z" }, - { url = "https://files.pythonhosted.org/packages/42/aa/4bb3af8e61acd9b1281db2ab77e8b2c3c5e5599bf2a29d4a942f1c62b8d6/ruff-0.15.9-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:55cc15eee27dc0eebdfcb0d185a6153420efbedc15eb1d38fe5e685657b0f840", size = 11204072, upload-time = "2026-04-02T18:17:13.581Z" }, - { url = "https://files.pythonhosted.org/packages/69/48/d550dc2aa6e423ea0bcc1d0ff0699325ffe8a811e2dba156bd80750b86dc/ruff-0.15.9-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:a6537f6eed5cda688c81073d46ffdfb962a5f29ecb6f7e770b2dc920598997ed", size = 10594998, upload-time = "2026-04-02T18:16:46.369Z" }, - { url = "https://files.pythonhosted.org/packages/63/47/321167e17f5344ed5ec6b0aa2cff64efef5f9e985af8f5622cfa6536043f/ruff-0.15.9-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:6d3fcbca7388b066139c523bda744c822258ebdcfbba7d24410c3f454cc9af71", size = 10359769, upload-time = "2026-04-02T18:17:10.994Z" }, - { url = "https://files.pythonhosted.org/packages/67/5e/074f00b9785d1d2c6f8c22a21e023d0c2c1817838cfca4c8243200a1fa87/ruff-0.15.9-py3-none-musllinux_1_2_i686.whl", hash = "sha256:058d8e99e1bfe79d8a0def0b481c56059ee6716214f7e425d8e737e412d69677", size = 10850236, upload-time = "2026-04-02T18:16:48.749Z" }, - { url = "https://files.pythonhosted.org/packages/76/37/804c4135a2a2caf042925d30d5f68181bdbd4461fd0d7739da28305df593/ruff-0.15.9-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:8e1ddb11dbd61d5983fa2d7d6370ef3eb210951e443cace19594c01c72abab4c", size = 11358343, upload-time = "2026-04-02T18:16:55.068Z" }, - { url = "https://files.pythonhosted.org/packages/88/3d/1364fcde8656962782aa9ea93c92d98682b1ecec2f184e625a965ad3b4a6/ruff-0.15.9-py3-none-win32.whl", hash = "sha256:bde6ff36eaf72b700f32b7196088970bf8fdb2b917b7accd8c371bfc0fd573ec", size = 10583382, upload-time = "2026-04-02T18:17:04.261Z" }, - { url = "https://files.pythonhosted.org/packages/4c/56/5c7084299bd2cacaa07ae63a91c6f4ba66edc08bf28f356b24f6b717c799/ruff-0.15.9-py3-none-win_amd64.whl", hash = "sha256:45a70921b80e1c10cf0b734ef09421f71b5aa11d27404edc89d7e8a69505e43d", size = 11744969, upload-time = "2026-04-02T18:16:59.611Z" }, - { url = "https://files.pythonhosted.org/packages/03/36/76704c4f312257d6dbaae3c959add2a622f63fcca9d864659ce6d8d97d3d/ruff-0.15.9-py3-none-win_arm64.whl", hash = "sha256:0694e601c028fd97dc5c6ee244675bc241aeefced7ef80cd9c6935a871078f53", size = 11005870, upload-time = "2026-04-02T18:17:15.773Z" }, + { url = "https://files.pythonhosted.org/packages/eb/00/a1c2fdc9939b2c03691edbda290afcd297f1f389196172826b03d6b6a595/ruff-0.15.10-py3-none-linux_armv6l.whl", hash = "sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f", size = 10563362, upload-time = "2026-04-09T14:06:21.189Z" }, + { url = "https://files.pythonhosted.org/packages/5c/15/006990029aea0bebe9d33c73c3e28c80c391ebdba408d1b08496f00d422d/ruff-0.15.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e", size = 10951122, upload-time = "2026-04-09T14:06:02.236Z" }, + { url = "https://files.pythonhosted.org/packages/f2/c0/4ac978fe874d0618c7da647862afe697b281c2806f13ce904ad652fa87e4/ruff-0.15.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1", size = 10314005, upload-time = "2026-04-09T14:06:00.026Z" }, + { url = "https://files.pythonhosted.org/packages/da/73/c209138a5c98c0d321266372fc4e33ad43d506d7e5dd817dd89b60a8548f/ruff-0.15.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e", size = 10643450, upload-time = "2026-04-09T14:05:42.137Z" }, + { url = "https://files.pythonhosted.org/packages/ec/76/0deec355d8ec10709653635b1f90856735302cb8e149acfdf6f82a5feb70/ruff-0.15.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1", size = 10379597, upload-time = "2026-04-09T14:05:49.984Z" }, + { url = "https://files.pythonhosted.org/packages/dc/be/86bba8fc8798c081e28a4b3bb6d143ccad3fd5f6f024f02002b8f08a9fa3/ruff-0.15.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef", size = 11146645, upload-time = "2026-04-09T14:06:12.246Z" }, + { url = "https://files.pythonhosted.org/packages/a8/89/140025e65911b281c57be1d385ba1d932c2366ca88ae6663685aed8d4881/ruff-0.15.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158", size = 12030289, upload-time = "2026-04-09T14:06:04.776Z" }, + { url = "https://files.pythonhosted.org/packages/88/de/ddacca9545a5e01332567db01d44bd8cf725f2db3b3d61a80550b48308ea/ruff-0.15.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0", size = 11496266, upload-time = "2026-04-09T14:05:55.485Z" }, + { url = "https://files.pythonhosted.org/packages/bc/bb/7ddb00a83760ff4a83c4e2fc231fd63937cc7317c10c82f583302e0f6586/ruff-0.15.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609", size = 11256418, upload-time = "2026-04-09T14:05:57.69Z" }, + { url = "https://files.pythonhosted.org/packages/dc/8d/55de0d35aacf6cd50b6ee91ee0f291672080021896543776f4170fc5c454/ruff-0.15.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f", size = 11288416, upload-time = "2026-04-09T14:05:44.695Z" }, + { url = "https://files.pythonhosted.org/packages/68/cf/9438b1a27426ec46a80e0a718093c7f958ef72f43eb3111862949ead3cc1/ruff-0.15.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151", size = 10621053, upload-time = "2026-04-09T14:05:52.782Z" }, + { url = "https://files.pythonhosted.org/packages/4c/50/e29be6e2c135e9cd4cb15fbade49d6a2717e009dff3766dd080fcb82e251/ruff-0.15.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8", size = 10378302, upload-time = "2026-04-09T14:06:14.361Z" }, + { url = "https://files.pythonhosted.org/packages/18/2f/e0b36a6f99c51bb89f3a30239bc7bf97e87a37ae80aa2d6542d6e5150364/ruff-0.15.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07", size = 10850074, upload-time = "2026-04-09T14:06:16.581Z" }, + { url = "https://files.pythonhosted.org/packages/11/08/874da392558ce087a0f9b709dc6ec0d60cbc694c1c772dab8d5f31efe8cb/ruff-0.15.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48", size = 11358051, upload-time = "2026-04-09T14:06:18.948Z" }, + { url = "https://files.pythonhosted.org/packages/e4/46/602938f030adfa043e67112b73821024dc79f3ab4df5474c25fa4c1d2d14/ruff-0.15.10-py3-none-win32.whl", hash = "sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5", size = 10588964, upload-time = "2026-04-09T14:06:07.14Z" }, + { url = "https://files.pythonhosted.org/packages/25/b6/261225b875d7a13b33a6d02508c39c28450b2041bb01d0f7f1a83d569512/ruff-0.15.10-py3-none-win_amd64.whl", hash = "sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed", size = 11745044, upload-time = "2026-04-09T14:05:39.473Z" }, + { url = "https://files.pythonhosted.org/packages/58/ed/dea90a65b7d9e69888890fb14c90d7f51bf0c1e82ad800aeb0160e4bacfd/ruff-0.15.10-py3-none-win_arm64.whl", hash = "sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188", size = 11035607, upload-time = "2026-04-09T14:05:47.593Z" }, ] [[package]] @@ -5381,36 +5756,29 @@ wheels = [ [[package]] name = "sendgrid" -version = "6.12.4" +version = "6.12.5" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "ecdsa" }, + { name = "cryptography" }, { name = "python-http-client" }, { name = "werkzeug" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/11/31/62e00433878dccf33edf07f8efa417b9030a2464eb3b04bbd797a11b4447/sendgrid-6.12.4.tar.gz", hash = "sha256:9e88b849daf0fa4bdf256c3b5da9f5a3272402c0c2fd6b1928c9de440db0a03d", size = 50271, upload-time = "2025-06-12T10:29:37.213Z" } +sdist = { url = "https://files.pythonhosted.org/packages/da/fa/f718b2b953f99c1f0085811598ac7e31ccbd4229a81ec2a5290be868187a/sendgrid-6.12.5.tar.gz", hash = "sha256:ea9aae30cd55c332e266bccd11185159482edfc07c149b6cd15cf08869fabdb7", size = 50310, upload-time = "2025-09-19T06:23:09.229Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/c2/9c/45d068fd831a65e6ed1e2ab3233de58784842afdc62fdcdd0a01bbb6b39d/sendgrid-6.12.4-py3-none-any.whl", hash = "sha256:9a211b96241e63bd5b9ed9afcc8608f4bcac426e4a319b3920ab877c8426e92c", size = 102122, upload-time = "2025-06-12T10:29:35.457Z" }, + { url = "https://files.pythonhosted.org/packages/bd/55/b3c3880a77082e8f7374954e0074aafafaa9bc78bdf9c8f5a92c2e7afc6a/sendgrid-6.12.5-py3-none-any.whl", hash = "sha256:96f92cc91634bf552fdb766b904bbb53968018da7ae41fdac4d1090dc0311ca8", size = 102173, upload-time = "2025-09-19T06:23:07.93Z" }, ] [[package]] name = "sentry-sdk" -version = "2.55.0" +version = "2.57.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "certifi" }, { name = "urllib3" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/e9/b8/285293dc60fc198fffc3fcdbc7c6d4e646e0f74e61461c355d40faa64ceb/sentry_sdk-2.55.0.tar.gz", hash = "sha256:3774c4d8820720ca4101548131b9c162f4c9426eb7f4d24aca453012a7470f69", size = 424505, upload-time = "2026-03-17T14:15:51.707Z" } +sdist = { url = "https://files.pythonhosted.org/packages/4f/87/46c0406d8b5ddd026f73adaf5ab75ce144219c41a4830b52df4b9ab55f7f/sentry_sdk-2.57.0.tar.gz", hash = "sha256:4be8d1e71c32fb27f79c577a337ac8912137bba4bcbc64a4ec1da4d6d8dc5199", size = 435288, upload-time = "2026-03-31T09:39:29.264Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/9a/66/20465097782d7e1e742d846407ea7262d338c6e876ddddad38ca8907b38f/sentry_sdk-2.55.0-py2.py3-none-any.whl", hash = "sha256:97026981cb15699394474a196b88503a393cbc58d182ece0d3abe12b9bd978d4", size = 449284, upload-time = "2026-03-17T14:15:49.604Z" }, -] - -[package.optional-dependencies] -flask = [ - { name = "blinker" }, - { name = "flask" }, - { name = "markupsafe" }, + { url = "https://files.pythonhosted.org/packages/c9/64/982e07b93219cb52e1cca5d272cb579e2f3eb001956c9e7a9a6d106c9473/sentry_sdk-2.57.0-py2.py3-none-any.whl", hash = "sha256:812c8bf5ff3d2f0e89c82f5ce80ab3a6423e102729c4706af7413fd1eb480585", size = 456489, upload-time = "2026-03-31T09:39:27.524Z" }, ] [[package]] @@ -5431,6 +5799,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" }, ] +[[package]] +name = "simple-websocket" +version = "1.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "wsproto" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b0/d4/bfa032f961103eba93de583b161f0e6a5b63cebb8f2c7d0c6e6efe1e3d2e/simple_websocket-1.1.0.tar.gz", hash = "sha256:7939234e7aa067c534abdab3a9ed933ec9ce4691b0713c78acb195560aa52ae4", size = 17300, upload-time = "2024-10-10T22:39:31.412Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/52/59/0782e51887ac6b07ffd1570e0364cf901ebc36345fea669969d2084baebb/simple_websocket-1.1.0-py3-none-any.whl", hash = "sha256:4af6069630a38ed6c561010f0e11a5bc0d4ca569b36306eb257cd9a192497c8c", size = 13842, upload-time = "2024-10-10T22:39:29.645Z" }, +] + [[package]] name = "six" version = "1.17.0" @@ -5709,7 +6089,7 @@ wheels = [ [[package]] name = "tablestore" -version = "6.4.3" +version = "6.4.4" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "aiohttp" }, @@ -5722,18 +6102,9 @@ dependencies = [ { name = "six" }, { name = "urllib3" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/85/0b/c875c2314d472eed9f9644a94ae0aa7e702a6084779a0136e539d5e7ed32/tablestore-6.4.3.tar.gz", hash = "sha256:4981139e68705052ade6341060a4b6238b1fb9a8c18b43a77383fda14f7554a9", size = 5072450, upload-time = "2026-03-31T04:34:37.832Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2f/bc/84d7592188b060950f4fe5713eb3b03068d42b2e43ad37decdb5242c1879/tablestore-6.4.4.tar.gz", hash = "sha256:0f40834030aff0c67e568b09deaab97144229b569710d66557edf7a06a5dcb19", size = 5076731, upload-time = "2026-04-09T09:40:20.399Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/39/e0/e11626aea61e1352dafe7707c548d482769afd3ca28f45653d380ba85a5d/tablestore-6.4.3-py3-none-any.whl", hash = "sha256:207b89324cd4157db4559c7619d42b9510a55c0565f00a439389f14426d114c5", size = 5115764, upload-time = "2026-03-31T04:34:35.761Z" }, -] - -[[package]] -name = "tabulate" -version = "0.9.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ec/fe/802052aecb21e3797b8f7902564ab6ea0d60ff8ca23952079064155d1ae1/tabulate-0.9.0.tar.gz", hash = "sha256:0095b12bf5966de529c0feb1fa08671671b3368eec77d7ef7ab114be2c068b3c", size = 81090, upload-time = "2022-10-06T17:21:48.54Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl", hash = "sha256:024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f", size = 35252, upload-time = "2022-10-06T17:21:44.262Z" }, + { url = "https://files.pythonhosted.org/packages/45/3f/48af1e72e59d60481724b326317bd311615bdedc31f8f81f9508fb84cda6/tablestore-6.4.4-py3-none-any.whl", hash = "sha256:984f086fa7acabaa3558da93205ad6df562b266b85fd249bc5891f2dd1d65814", size = 5118758, upload-time = "2026-04-09T09:40:17.209Z" }, ] [[package]] @@ -5779,7 +6150,7 @@ wheels = [ [[package]] name = "testcontainers" -version = "4.14.1" +version = "4.14.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "docker" }, @@ -5788,9 +6159,9 @@ dependencies = [ { name = "urllib3" }, { name = "wrapt" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/8b/02/ef62dec9e4f804189c44df23f0b86897c738d38e9c48282fcd410308632f/testcontainers-4.14.1.tar.gz", hash = "sha256:316f1bb178d829c003acd650233e3ff3c59a833a08d8661c074f58a4fbd42a64", size = 80148, upload-time = "2026-01-31T23:13:46.915Z" } +sdist = { url = "https://files.pythonhosted.org/packages/ca/ac/a597c3a0e02b26cbed6dd07df68be1e57684766fd1c381dee9b170a99690/testcontainers-4.14.2.tar.gz", hash = "sha256:1340ccf16fe3acd9389a6c9e1d9ab21d9fe99a8afdf8165f89c3e69c1967d239", size = 166841, upload-time = "2026-03-18T05:19:16.696Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/c8/31/5e7b23f9e43ff7fd46d243808d70c5e8daf3bc08ecf5a7fb84d5e38f7603/testcontainers-4.14.1-py3-none-any.whl", hash = "sha256:03dfef4797b31c82e7b762a454b6afec61a2a512ad54af47ab41e4fa5415f891", size = 125640, upload-time = "2026-01-31T23:13:45.464Z" }, + { url = "https://files.pythonhosted.org/packages/13/2d/26b8b30067d94339afee62c3edc9b803a6eb9332f521ba77d8aaab5de873/testcontainers-4.14.2-py3-none-any.whl", hash = "sha256:0d0522c3cd8f8d9627cda41f7a6b51b639fa57bdc492923c045117933c668d68", size = 125712, upload-time = "2026-03-18T05:19:15.29Z" }, ] [[package]] @@ -5964,11 +6335,11 @@ wheels = [ [[package]] name = "types-aiofiles" -version = "25.1.0.20251011" +version = "25.1.0.20260409" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/84/6c/6d23908a8217e36704aa9c79d99a620f2fdd388b66a4b7f72fbc6b6ff6c6/types_aiofiles-25.1.0.20251011.tar.gz", hash = "sha256:1c2b8ab260cb3cd40c15f9d10efdc05a6e1e6b02899304d80dfa0410e028d3ff", size = 14535, upload-time = "2025-10-11T02:44:51.237Z" } +sdist = { url = "https://files.pythonhosted.org/packages/6c/66/9e62a2692792bc96c0f423f478149f4a7b84720704c546c8960b0a047c89/types_aiofiles-25.1.0.20260409.tar.gz", hash = "sha256:49e67d72bdcf9fe406f5815758a78dc34a1249bb5aa2adba78a80aec0a775435", size = 14812, upload-time = "2026-04-09T04:22:35.308Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/71/0f/76917bab27e270bb6c32addd5968d69e558e5b6f7fb4ac4cbfa282996a96/types_aiofiles-25.1.0.20251011-py3-none-any.whl", hash = "sha256:8ff8de7f9d42739d8f0dadcceeb781ce27cd8d8c4152d4a7c52f6b20edb8149c", size = 14338, upload-time = "2025-10-11T02:44:50.054Z" }, + { url = "https://files.pythonhosted.org/packages/27/d0/28236f869ba4dfb223ecdbc267eb2bdb634b81a561dd992230a4f9ec48fa/types_aiofiles-25.1.0.20260409-py3-none-any.whl", hash = "sha256:923fedb532c772cc0f62e0ce4282725afa82ca5b41cabd9857f06b55e5eee8de", size = 14372, upload-time = "2026-04-09T04:22:34.328Z" }, ] [[package]] @@ -5994,229 +6365,229 @@ wheels = [ [[package]] name = "types-cachetools" -version = "6.2.0.20260317" +version = "6.2.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/8b/7f/16a4d8344c28193a5a74358028c2d2f753f0d9658dd98b9e1967c50045a2/types_cachetools-6.2.0.20260317.tar.gz", hash = "sha256:6d91855bcc944665897c125e720aa3c80aace929b77a64e796343701df4f61c6", size = 9812, upload-time = "2026-03-17T04:06:32.007Z" } +sdist = { url = "https://files.pythonhosted.org/packages/ec/61/475b0e8f4a92e5e33affcc6f4e6344c6dee540824021d22f695ea170da63/types_cachetools-6.2.0.20260408.tar.gz", hash = "sha256:0d8ae2dd5ba0b4cfe6a55c34396dd0415f1be07d0033d84781cdc4ed9c2ebc6b", size = 9854, upload-time = "2026-04-08T04:31:49.665Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/17/9a/b00b23054934c4d569c19f7278c4fb32746cd36a64a175a216d3073a4713/types_cachetools-6.2.0.20260317-py3-none-any.whl", hash = "sha256:92fa9bc50e4629e31fca67ceb3fb1de71791e314fa16c0a0d2728724dc222c8b", size = 9346, upload-time = "2026-03-17T04:06:31.184Z" }, + { url = "https://files.pythonhosted.org/packages/bb/7d/579f50f4f004ee93c7d1baa95339591cac1fe02f4e3fb8fc0f900ee4a80f/types_cachetools-6.2.0.20260408-py3-none-any.whl", hash = "sha256:470e0b274737feae74beed3d764885bf4664002ecc393fba3778846b13ce92cb", size = 9350, upload-time = "2026-04-08T04:31:48.826Z" }, ] [[package]] name = "types-cffi" -version = "2.0.0.20260402" +version = "2.0.0.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "types-setuptools" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/cb/85/3896bfcb4e7c32904f762c36ff0afa96d3e39bfce5a95a41635af79c8761/types_cffi-2.0.0.20260402.tar.gz", hash = "sha256:47e1320c009f630c59c55c8e3d2b8c501e280babf52e92f6109cbfb0864ba367", size = 17476, upload-time = "2026-04-02T04:21:09.332Z" } +sdist = { url = "https://files.pythonhosted.org/packages/64/67/eb4ef3408fdc0b4e5af38b30c0e6ad4663b41bdae9fb85a9f09a8db61a99/types_cffi-2.0.0.20260408.tar.gz", hash = "sha256:aa8b9c456ab715c079fc655929811f21f331bfb940f4a821987c581bf4e36230", size = 17541, upload-time = "2026-04-08T04:36:03.918Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ae/26/aacfef05841e31c65f889ae4225c6bce6b84cd5d3882c42a3661030f29ee/types_cffi-2.0.0.20260402-py3-none-any.whl", hash = "sha256:f647a400fba0a31d603479169d82ee5359db79bd1136e41dc7e6489296e3a2b2", size = 20103, upload-time = "2026-04-02T04:21:08.199Z" }, + { url = "https://files.pythonhosted.org/packages/c3/a3/7fbd93ededcc7c77e9e5948b9794161733ebdbf618a27965b1bea0e728a4/types_cffi-2.0.0.20260408-py3-none-any.whl", hash = "sha256:68bd296742b4ff7c0afe3547f50bd0acc55416ecf322ffefd2b7344ef6388a42", size = 20101, upload-time = "2026-04-08T04:36:02.995Z" }, ] [[package]] name = "types-colorama" -version = "0.4.15.20250801" +version = "0.4.15.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/99/37/af713e7d73ca44738c68814cbacf7a655aa40ddd2e8513d431ba78ace7b3/types_colorama-0.4.15.20250801.tar.gz", hash = "sha256:02565d13d68963d12237d3f330f5ecd622a3179f7b5b14ee7f16146270c357f5", size = 10437, upload-time = "2025-08-01T03:48:22.605Z" } +sdist = { url = "https://files.pythonhosted.org/packages/83/c0/1c02ed9edf3462a392f4ea4bda80fa10c538c63d1d7be255dc7dcb545007/types_colorama-0.4.15.20260408.tar.gz", hash = "sha256:9a816657927489463edec1b7b47933b73fe737d37a3616bf596b7de843441032", size = 10623, upload-time = "2026-04-08T04:28:31.763Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/95/3a/44ccbbfef6235aeea84c74041dc6dfee6c17ff3ddba782a0250e41687ec7/types_colorama-0.4.15.20250801-py3-none-any.whl", hash = "sha256:b6e89bd3b250fdad13a8b6a465c933f4a5afe485ea2e2f104d739be50b13eea9", size = 10743, upload-time = "2025-08-01T03:48:21.774Z" }, + { url = "https://files.pythonhosted.org/packages/b9/65/d03948be8ae9362ad26f36443eab051fe5524295fe008126cd65792f9833/types_colorama-0.4.15.20260408-py3-none-any.whl", hash = "sha256:7327a51c760d94f7df2e8c72c275a4468c03c3abb606d23995cb37e3d24d9132", size = 10763, upload-time = "2026-04-08T04:28:30.688Z" }, ] [[package]] name = "types-defusedxml" -version = "0.7.0.20260402" +version = "0.7.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d3/3c/8e1243dda2fef73be93081d896503352fb92e2351b0b17ac172bbdb70ebf/types_defusedxml-0.7.0.20260402.tar.gz", hash = "sha256:4cc91b225e77c7fcf88b3fb7d821a37fb4e14530727c790b6b8a19f2968d6074", size = 10604, upload-time = "2026-04-02T04:19:00.265Z" } +sdist = { url = "https://files.pythonhosted.org/packages/39/af/d324da5ffbf0af40477533a09ee6c902de335c445a8dcc88c58f62af6e5f/types_defusedxml-0.7.0.20260408.tar.gz", hash = "sha256:f35377d59344f98b57f9bf319cff2107aac35f9e4d42f9ed6cfeeafacffadb00", size = 10638, upload-time = "2026-04-08T04:26:12.239Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ad/4e/68f85712dfbcc929c54d57e9b0e7503c198fa65896cae2f6337840ab1cc5/types_defusedxml-0.7.0.20260402-py3-none-any.whl", hash = "sha256:200f3cb340c3c576adeb28cf365399e9bb059b34662b86ad4617692284c98bdb", size = 13434, upload-time = "2026-04-02T04:18:59.263Z" }, + { url = "https://files.pythonhosted.org/packages/ed/68/7570cfb818d6a5b3ff964114527e28e360eccf18329b457f057a18596e64/types_defusedxml-0.7.0.20260408-py3-none-any.whl", hash = "sha256:2d68db82412170b91b3e490b7c118a4f4e5a27756a126e2453f629c8d514b106", size = 13435, upload-time = "2026-04-08T04:26:11.347Z" }, ] [[package]] name = "types-deprecated" -version = "1.3.1.20260402" +version = "1.3.1.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/e2/ff/7e237c5118c1bd15e5205789901f7e01db232b0c61ca7c7c05de0394f5da/types_deprecated-1.3.1.20260402.tar.gz", hash = "sha256:00828ef7dce735d778583d00611f97da05b86b783ee14b0f22af2f945363cd12", size = 8481, upload-time = "2026-04-02T04:18:28.704Z" } +sdist = { url = "https://files.pythonhosted.org/packages/1a/db/076de3e81b106d3cec17aec9640ab1b2d02f29bad441de280459c161ce65/types_deprecated-1.3.1.20260408.tar.gz", hash = "sha256:62d6a86d0cc754c14bb2de31162d069b1c6a07ce11ee65e5258f8f75308eb3a3", size = 8524, upload-time = "2026-04-08T04:26:39.894Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ed/3c/59aa775db5f69eba978390c33e1fd617817381cd87424ac1cff4bf2fb6c5/types_deprecated-1.3.1.20260402-py3-none-any.whl", hash = "sha256:ddf1813bd99cd1c00358cb0cb079878fdaa74509e7e482b79627f74f768f31a9", size = 9077, upload-time = "2026-04-02T04:18:27.867Z" }, + { url = "https://files.pythonhosted.org/packages/53/d0/d3258379deb749d949c3c72313981c9d2cceec518b87dcf506f022f5d49f/types_deprecated-1.3.1.20260408-py3-none-any.whl", hash = "sha256:b64e1eab560d4fa9394a27a3099211344b0e0f2f3ac8026d825c86e70d65cdd5", size = 9079, upload-time = "2026-04-08T04:26:38.752Z" }, ] [[package]] name = "types-docutils" -version = "0.22.3.20260322" +version = "0.22.3.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/44/bb/243a87fc1605a4a94c2c343d6dbddbf0d7ef7c0b9550f360b8cda8e82c39/types_docutils-0.22.3.20260322.tar.gz", hash = "sha256:e2450bb997283c3141ec5db3e436b91f0aa26efe35eb9165178ca976ccb4930b", size = 57311, upload-time = "2026-03-22T04:08:44.064Z" } +sdist = { url = "https://files.pythonhosted.org/packages/3c/49/48a386fe15539556de085b87a69568b028cca2fa4b92596a3d4f79ac6784/types_docutils-0.22.3.20260408.tar.gz", hash = "sha256:22d5d45e4e0d65a1bc8280987a73e28669bb1cc9d16b18d0afc91713d1be26da", size = 57383, upload-time = "2026-04-08T04:27:26.924Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/c6/4a/22c090cd4615a16917dff817cbe7c5956da376c961e024c241cd962d2c3d/types_docutils-0.22.3.20260322-py3-none-any.whl", hash = "sha256:681d4510ce9b80a0c6a593f0f9843d81f8caa786db7b39ba04d9fd5480ac4442", size = 91978, upload-time = "2026-03-22T04:08:43.117Z" }, + { url = "https://files.pythonhosted.org/packages/08/47/1667fda6e9fcb044f8fb797f6dc4367b88dc2ab40f1a035e387f5405e870/types_docutils-0.22.3.20260408-py3-none-any.whl", hash = "sha256:2545a86966022cdf1468d430b0007eba0837be77974a7f3fafa1b04a6815d531", size = 91981, upload-time = "2026-04-08T04:27:25.934Z" }, ] [[package]] name = "types-flask-cors" -version = "6.0.0.20260402" +version = "6.0.0.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "flask" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/b5/59/84d8ed3801cbf28876067387e1055467e94e3dd404e93e35fe2ec5e46729/types_flask_cors-6.0.0.20260402.tar.gz", hash = "sha256:57350b504328df7ec13a12599e67939189cb644c5d0efec9af80ed03c592052c", size = 10126, upload-time = "2026-04-02T04:20:57.954Z" } +sdist = { url = "https://files.pythonhosted.org/packages/22/68/e58191af5b56e836a4a2e2583ecfad91bde176940edf1bfc8ea706a5f74d/types_flask_cors-6.0.0.20260408.tar.gz", hash = "sha256:8c440c158c335819bb9286870c9770687ae6d943510fdd97e4b573324f8d2178", size = 10223, upload-time = "2026-04-08T04:35:42.608Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/51/71/d86f7644a18a8ccdddf50b9969fc94abbecd0ac52594880dc5667ca53e5e/types_flask_cors-6.0.0.20260402-py3-none-any.whl", hash = "sha256:e018d34946c110f5acfa71cc708ec66b47c4292131647e54889600c20892ca26", size = 9990, upload-time = "2026-04-02T04:20:57.12Z" }, + { url = "https://files.pythonhosted.org/packages/a2/8d/eb905e231aaed6c0853f002446a1fb12d5a32d79b688f2cdd4f8d6e6ce03/types_flask_cors-6.0.0.20260408-py3-none-any.whl", hash = "sha256:ccd8801862b3ebd27754734b84fc3dcfebd0f8056380ae88254c7dd799d64a39", size = 9993, upload-time = "2026-04-08T04:35:41.452Z" }, ] [[package]] name = "types-flask-migrate" -version = "4.1.0.20260402" +version = "4.1.0.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "flask" }, { name = "flask-sqlalchemy" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a8/85/291317e13f72d5b2b6c1fe2c59c77a45d07bb225bf5bb2768da6a7b96351/types_flask_migrate-4.1.0.20260402.tar.gz", hash = "sha256:8e0062f063ecbe5c73b53ffc1e86f4d6de5ab970142c7d2dea939c5680ba817a", size = 8717, upload-time = "2026-04-02T04:21:45.77Z" } +sdist = { url = "https://files.pythonhosted.org/packages/46/31/56f5607fca2ad4e41da095b0e22ce9749d29d985df8c0229907bf662413b/types_flask_migrate-4.1.0.20260408.tar.gz", hash = "sha256:65ef927584777eac9a4591eb8320c09eb6eb8862d2ffdd6e23ad485a2869b228", size = 8773, upload-time = "2026-04-08T04:36:22.197Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/d4/d9/716b9cb9fca0f87e95f573e21e5ffe83d1cf9919ceb2e1cca8bc71488746/types_flask_migrate-4.1.0.20260402-py3-none-any.whl", hash = "sha256:6989d40d3cfae1c5f70c8f20ba39e714949b633329cc23b2dd00e82fd5b07d1c", size = 8669, upload-time = "2026-04-02T04:21:44.967Z" }, + { url = "https://files.pythonhosted.org/packages/cd/4b/1d1b300251d33f1a97004ef8bba53139116b00872cfb85521d1412259627/types_flask_migrate-4.1.0.20260408-py3-none-any.whl", hash = "sha256:ffdacb78f6697422aa09bdebba34f4133b2443a95a59761c279fc5d368c009d9", size = 8670, upload-time = "2026-04-08T04:36:21.394Z" }, ] [[package]] name = "types-gevent" -version = "25.9.0.20260402" +version = "26.4.0.20260409" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "types-greenlet" }, { name = "types-psutil" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/1c/2f/a2056079f14aeacf538b51b0e6585328c3584fa8e6f4758214c9773ea4b0/types_gevent-25.9.0.20260402.tar.gz", hash = "sha256:24297e6f5733e187a517f08dde6df7b2147e14f7de4d343148f410dffebb5381", size = 38270, upload-time = "2026-04-02T04:22:00.125Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a4/ea/17fa935aa62d45cb9f67947e93c3c0c1ed97a76d579b12e1623cd348b68a/types_gevent-26.4.0.20260409.tar.gz", hash = "sha256:6b029c599fe4ec0efce8cd2bf5e5ae958d9808aa5b2f7bdfcb9b9eb42d91cc6a", size = 38333, upload-time = "2026-04-09T04:22:42.334Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/9e/2f/995920b5cc58bc9041ded8ea2fda32719f6c513bc6e43a0c5234780936db/types_gevent-25.9.0.20260402-py3-none-any.whl", hash = "sha256:178ba12e426c987dd69ef0b8ce9f1095a965103a0d673294831f49f7127bc5ba", size = 55494, upload-time = "2026-04-02T04:21:59.144Z" }, + { url = "https://files.pythonhosted.org/packages/29/17/04671a7e3de8c0fdd4c39dc43830b496ad68998d37cff38e0d9701f77a67/types_gevent-26.4.0.20260409-py3-none-any.whl", hash = "sha256:f5f5eb7365a9b8b738787a2dc93c509ee0ca919c6d4388504f2cd09e476d4066", size = 55491, upload-time = "2026-04-09T04:22:41.226Z" }, ] [[package]] name = "types-greenlet" -version = "3.3.0.20251206" +version = "3.4.0.20260409" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/fc/d3/23f4ab29a5ce239935bb3c157defcf50df8648c16c65965fae03980d67f3/types_greenlet-3.3.0.20251206.tar.gz", hash = "sha256:3e1ab312ab7154c08edc2e8110fbf00d9920323edc1144ad459b7b0052063055", size = 8901, upload-time = "2025-12-06T03:01:38.634Z" } +sdist = { url = "https://files.pythonhosted.org/packages/27/a6/668751bc864efe820e1eb12c2a77f9e62537f433cc002e483ad01badb04b/types_greenlet-3.4.0.20260409.tar.gz", hash = "sha256:81d2cf628934a16856bb9e54136def8de5356e934f0ad5d5474f219a0c5cb205", size = 8976, upload-time = "2026-04-09T04:22:31.693Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/7c/8f/aabde1b6e49b25a6804c12a707829e44ba0f5520563c09271f05d3196142/types_greenlet-3.3.0.20251206-py3-none-any.whl", hash = "sha256:8d11041c0b0db545619e8c8a1266aa4aaa4ebeae8ae6b4b7049917a6045a5590", size = 8809, upload-time = "2025-12-06T03:01:37.651Z" }, + { url = "https://files.pythonhosted.org/packages/4f/3f/c8a4d8782f78fccb4b5fe91c5eae2efce6648072754bc7096b1e3b5407ad/types_greenlet-3.4.0.20260409-py3-none-any.whl", hash = "sha256:cbceadb4594eccd95b57b3f7fa8a9b851488f5e6c05026f4a3db9aac02ec8333", size = 8812, upload-time = "2026-04-09T04:22:30.734Z" }, ] [[package]] name = "types-html5lib" -version = "1.1.11.20260402" +version = "1.1.11.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "types-webencodings" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/13/95/74eabb3bd0bb2f2b3a8ba56a55e87ee4b76f2b39e2a690eca399deffc837/types_html5lib-1.1.11.20260402.tar.gz", hash = "sha256:a167a30b9619a6eea82ec8b8948044859e033966a4721db34187d647c3a6c1f3", size = 18268, upload-time = "2026-04-02T04:21:56.528Z" } +sdist = { url = "https://files.pythonhosted.org/packages/16/59/914d00107c770e49fa57d4c4572e0371bbce14321385fd2ea3e06691b62d/types_html5lib-1.1.11.20260408.tar.gz", hash = "sha256:8a281aa367bc77dbc758358cd9bef79530f2d154eeed9b33705bb035a0dab9e4", size = 18316, upload-time = "2026-04-08T04:35:49.581Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/79/a9/fac9d4313b1851620610f46d086ba288482c0d5384ebf6feafb5bc4bdd15/types_html5lib-1.1.11.20260402-py3-none-any.whl", hash = "sha256:245d02cf53ef62d7342268c53dbc2af2d200849feec03f77f5909655cb54ab0d", size = 24314, upload-time = "2026-04-02T04:21:55.659Z" }, + { url = "https://files.pythonhosted.org/packages/61/19/12d95e98e42e120522665ec6850b38df8d2c1cca94e21c4d7f8578acb64e/types_html5lib-1.1.11.20260408-py3-none-any.whl", hash = "sha256:d18dc4b90d6d6745585790b920db13ede43e1f8ff6ee1ac0ceb0dec4223a06fa", size = 24313, upload-time = "2026-04-08T04:35:48.679Z" }, ] [[package]] name = "types-jmespath" -version = "1.1.0.20260124" +version = "1.1.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2b/ca/c8d7fc6e450c2f8fc6f510cb194754c43b17f933f2dcabcfc6985cbb97a8/types_jmespath-1.1.0.20260124.tar.gz", hash = "sha256:29d86868e72c0820914577077b27d167dcab08b1fc92157a29d537ff7153fdfe", size = 10709, upload-time = "2026-01-24T03:18:46.557Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a1/5e/33881ff525fbaa71cb6192d81fd4039607006ff48f85c40ef1e20d72d1d3/types_jmespath-1.1.0.20260408.tar.gz", hash = "sha256:42483cfc3d16bdd88c1150a7419d59ef59b8bdc4db3eec8ebf6971a0dad1a425", size = 10733, upload-time = "2026-04-08T04:29:22.923Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/61/91/915c4a6e6e9bd2bca3ec0c21c1771b175c59e204b85e57f3f572370fe753/types_jmespath-1.1.0.20260124-py3-none-any.whl", hash = "sha256:ec387666d446b15624215aa9cbd2867ffd885b6c74246d357c65e830c7a138b3", size = 11509, upload-time = "2026-01-24T03:18:45.536Z" }, + { url = "https://files.pythonhosted.org/packages/e4/f8/4c34097ce72dc8ea533db26a0162c53837398b26d4a0645ca3c7df74370b/types_jmespath-1.1.0.20260408-py3-none-any.whl", hash = "sha256:58a29fe039e5d3f9d0d42f1b067b9efa7c3e29c7e6df9c6830cbe5fa44ffb943", size = 11512, upload-time = "2026-04-08T04:29:22.133Z" }, ] [[package]] name = "types-markdown" -version = "3.10.2.20260211" +version = "3.10.2.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/6d/2e/35b30a09f6ee8a69142408d3ceb248c4454aa638c0a414d8704a3ef79563/types_markdown-3.10.2.20260211.tar.gz", hash = "sha256:66164310f88c11a58c6c706094c6f8c537c418e3525d33b76276a5fbd66b01ce", size = 19768, upload-time = "2026-02-11T04:19:29.497Z" } +sdist = { url = "https://files.pythonhosted.org/packages/dd/0e/a690840934c459aa50e0470e7550d7f151632eafa4a8e3c21d18009ad15c/types_markdown-3.10.2.20260408.tar.gz", hash = "sha256:d5cba15ed65a1420e80e31c17e3d4a2ad7208a3f3a4da97fd2c5f093caf523cd", size = 19784, upload-time = "2026-04-08T04:33:07.644Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/54/c9/659fa2df04b232b0bfcd05d2418e683080e91ec68f636f3c0a5a267350e7/types_markdown-3.10.2.20260211-py3-none-any.whl", hash = "sha256:2d94d08587e3738203b3c4479c449845112b171abe8b5cadc9b0c12fcf3e99da", size = 25854, upload-time = "2026-02-11T04:19:28.647Z" }, + { url = "https://files.pythonhosted.org/packages/75/7e/265a8df257c8dced6ea89295f793a19f0a49ccbfeae1ed562368b2caf7a3/types_markdown-3.10.2.20260408-py3-none-any.whl", hash = "sha256:b0bbe8b7a8174db732067b86e391262898f5f536589ea81efec6d35ceb829331", size = 25857, upload-time = "2026-04-08T04:33:06.769Z" }, ] [[package]] name = "types-oauthlib" -version = "3.3.0.20260324" +version = "3.3.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/91/38/543938f86d81bd6a78b8c355fe81bb8da0a26e4c28addfe3443e38a683d2/types_oauthlib-3.3.0.20260324.tar.gz", hash = "sha256:3c4cc07fa33886f881682237c1e445c5f1778b44efea118f4c1e4ede82cb52f2", size = 26030, upload-time = "2026-03-24T04:06:30.898Z" } +sdist = { url = "https://files.pythonhosted.org/packages/79/7e/4cf7b08b4c6b266d9967c02ebdba8c5390029d5750def924b23679a730a0/types_oauthlib-3.3.0.20260408.tar.gz", hash = "sha256:deaeccbc33634f5efa7ef320924bce743495f5a1520073ce4fa0fea441bf063d", size = 26066, upload-time = "2026-04-08T04:28:45.636Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/0e/60/26f0ddade4b2bb17b3d8f3ebaac436e5487caec28831da3d7ea309fe93b9/types_oauthlib-3.3.0.20260324-py3-none-any.whl", hash = "sha256:d24662033b04f4d50a2f1fed04c1b43ff2554aa037c1dafa0424f87100a46ccd", size = 48984, upload-time = "2026-03-24T04:06:29.696Z" }, + { url = "https://files.pythonhosted.org/packages/d7/77/6866665af7b414bbffd37028a92618d3771402ae587e9cd2d70efcb6d8f6/types_oauthlib-3.3.0.20260408-py3-none-any.whl", hash = "sha256:1c305d18a05636fac4953800aa0982e54c258562838dafeaa6a3d05b7f4669fe", size = 48987, upload-time = "2026-04-08T04:28:44.719Z" }, ] [[package]] name = "types-objgraph" -version = "3.6.0.20240907" +version = "3.6.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/22/48/ba0ec63d392904eee34ef1cbde2d8798f79a3663950e42fbbc25fd1bd6f7/types-objgraph-3.6.0.20240907.tar.gz", hash = "sha256:2e3dee675843ae387889731550b0ddfed06e9420946cf78a4bca565b5fc53634", size = 2928, upload-time = "2024-09-07T02:35:21.214Z" } +sdist = { url = "https://files.pythonhosted.org/packages/82/4b/e43381191b1d9d1a0d8b1d7da12ee28ea63f97f38bc6694231dde066b3c8/types_objgraph-3.6.0.20260408.tar.gz", hash = "sha256:9937aae5ad5bb625a2091b33e2f67e979f61e3719078d318b2261c4e7f13ac9a", size = 7606, upload-time = "2026-04-08T04:30:41.072Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/16/c9/6d647a947f3937b19bcc6d52262921ddad60d90060ff66511a4bd7e990c5/types_objgraph-3.6.0.20240907-py3-none-any.whl", hash = "sha256:67207633a9b5789ee1911d740b269c3371081b79c0d8f68b00e7b8539f5c43f5", size = 3314, upload-time = "2024-09-07T02:35:19.865Z" }, + { url = "https://files.pythonhosted.org/packages/38/99/6f618e0931367814b2ab9ad2b946f0f0ca4b8b02405a3552bcb90acc1b5c/types_objgraph-3.6.0.20260408-py3-none-any.whl", hash = "sha256:fd4ae0c6c10e260a3dfdd778f3f79f081c804f9d48b5bb6c2299d8645b287c5f", size = 8059, upload-time = "2026-04-08T04:30:40.301Z" }, ] [[package]] name = "types-olefile" -version = "0.47.0.20240806" +version = "0.47.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/49/18/9d87a1bc394323ce22690308c751680c4301fc3fbe47cd58e16d760b563a/types-olefile-0.47.0.20240806.tar.gz", hash = "sha256:96490f208cbb449a52283855319d73688ba9167ae58858ef8c506bf7ca2c6b67", size = 4369, upload-time = "2024-08-06T02:30:01.966Z" } +sdist = { url = "https://files.pythonhosted.org/packages/84/d0/1c7e058666a4e5364463ad0a7bfd7a0bbc2180df427016cc7ebeeddb0b29/types_olefile-0.47.0.20260408.tar.gz", hash = "sha256:01fbcb6332152e88486634460c8d59a8c75da9a50d85e0ff6f754c02db3fc23e", size = 9037, upload-time = "2026-04-08T04:30:13.973Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a9/4d/f8acae53dd95353f8a789a06ea27423ae41f2067eb6ce92946fdc6a1f7a7/types_olefile-0.47.0.20240806-py3-none-any.whl", hash = "sha256:c760a3deab7adb87a80d33b0e4edbbfbab865204a18d5121746022d7f8555118", size = 4758, upload-time = "2024-08-06T02:30:01.15Z" }, + { url = "https://files.pythonhosted.org/packages/37/20/b88f8f1336fd3772813c21dd536e50d10c4416294539b8b623e769e9b4a2/types_olefile-0.47.0.20260408-py3-none-any.whl", hash = "sha256:2499f110beb659504173dcd66c069a297b210f715ad45a713e097ecd3a265992", size = 9493, upload-time = "2026-04-08T04:30:13.173Z" }, ] [[package]] name = "types-openpyxl" -version = "3.1.5.20260402" +version = "3.1.5.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/6b/8f/d9daf094e0bb468b26e74c1bf9e0170e58c3f16e583d244e9f32078b6bcc/types_openpyxl-3.1.5.20260402.tar.gz", hash = "sha256:855ad28d47c0965048082dfca424d6ebd54d8861d72abcee9106ba5868899e7f", size = 101310, upload-time = "2026-04-02T04:17:37.6Z" } +sdist = { url = "https://files.pythonhosted.org/packages/74/c9/24f03f9d9fedd164de699c1418869bef9b819f59f75e7f647f5788c02d98/types_openpyxl-3.1.5.20260408.tar.gz", hash = "sha256:b49274d086fbb6e6bcd2a67d161dd1161d4d380488e4cce546d647d45eadcac2", size = 101361, upload-time = "2026-04-08T04:30:37.809Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/58/ee/a0b22012076cf23b73fbb82d9c40843cbf6b1d228d7a2dc883da0a905a16/types_openpyxl-3.1.5.20260402-py3-none-any.whl", hash = "sha256:1d149989f0aad4e2074e96b87a045136399e27bc2a33cfefcd0eb4cad8ea5b4c", size = 166046, upload-time = "2026-04-02T04:17:36.162Z" }, + { url = "https://files.pythonhosted.org/packages/5f/e8/64db0c32c6fda4b198aff84329ceeea00a93d16c28f05e9fe0404ddb8b8c/types_openpyxl-3.1.5.20260408-py3-none-any.whl", hash = "sha256:7ab7586796aed017cde50b81bd67ba024120c39b99c102320b57dd91390c317f", size = 166044, upload-time = "2026-04-08T04:30:36.449Z" }, ] [[package]] name = "types-pexpect" -version = "4.9.0.20260127" +version = "4.9.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2e/32/7e03a07e16f79a404d6200ed6bdfcc320d0fb833436a5c6895a1403dedb7/types_pexpect-4.9.0.20260127.tar.gz", hash = "sha256:f8d43efc24251a8e533c71ea9be03d19bb5d08af096d561611697af9720cba7f", size = 13461, upload-time = "2026-01-27T03:28:30.923Z" } +sdist = { url = "https://files.pythonhosted.org/packages/9a/0f/5e9aa68e4595264e968ffaa3358afb2a8d60093f460aaa7e0398c0d9bfd0/types_pexpect-4.9.0.20260408.tar.gz", hash = "sha256:faedd97fc8086b224bc1966770c486ac6ec96bef07dc47cc2724fe4ae62f8f4a", size = 13471, upload-time = "2026-04-08T04:32:30.624Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8a/d9/7ac5c9aa5a89a1a64cd835ae348227f4939406d826e461b85b690a8ba1c2/types_pexpect-4.9.0.20260127-py3-none-any.whl", hash = "sha256:69216c0ebf0fe45ad2900823133959b027e9471e24fc3f2e4c7b00605555da5f", size = 17078, upload-time = "2026-01-27T03:28:29.848Z" }, + { url = "https://files.pythonhosted.org/packages/c2/13/96004a3e5dd6c6e7de4edc421d5a2926e062d22be7b006edab747ed42830/types_pexpect-4.9.0.20260408-py3-none-any.whl", hash = "sha256:ba6699609bb6593f00ef7204efc390fd10bc14a8d632f22c8dea13f263b16fcc", size = 17083, upload-time = "2026-04-08T04:32:29.75Z" }, ] [[package]] name = "types-protobuf" -version = "7.34.1.20260403" +version = "7.34.1.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ae/b3/c2e407ea36e0e4355c135127cee1b88a2cc9a2c92eafca50a360ab9f2708/types_protobuf-7.34.1.20260403.tar.gz", hash = "sha256:8d7881867888e667eb9563c08a916fccdc12bdb5f9f34c31d217cce876e36765", size = 68782, upload-time = "2026-04-03T04:18:09.428Z" } +sdist = { url = "https://files.pythonhosted.org/packages/5b/b1/4521e68c2cc17703d80eb42796751345376dd4c706f84007ef5e7c707774/types_protobuf-7.34.1.20260408.tar.gz", hash = "sha256:e2c0a0430e08c75b52671a6f0035abfdcc791aad12af16274282de1b721758ab", size = 68835, upload-time = "2026-04-08T04:26:43.613Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/7d/95/24fb0f6fe37b41cf94f9b9912712645e17d8048d4becaf37c1607ddd8e32/types_protobuf-7.34.1.20260403-py3-none-any.whl", hash = "sha256:16d9bbca52ab0f306279958878567df2520f3f5579059419b0ce149a0ad1e332", size = 86011, upload-time = "2026-04-03T04:18:08.245Z" }, + { url = "https://files.pythonhosted.org/packages/ef/b5/0bc9874d89c58fb0ce851e150055ce732d254dbb10b06becbc7635d0d635/types_protobuf-7.34.1.20260408-py3-none-any.whl", hash = "sha256:ebbcd4e27b145aef6a59bc0cb6c013b3528151c1ba5e7f7337aeee355d276a5e", size = 86012, upload-time = "2026-04-08T04:26:42.566Z" }, ] [[package]] name = "types-psutil" -version = "7.2.2.20260402" +version = "7.2.2.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/31/a2/a608db0caf0d71bd231305dc3ab3f5d65624d77761003696a3ca8c6fad40/types_psutil-7.2.2.20260402.tar.gz", hash = "sha256:9f36eebf15ad8487f8004ed67c8e008b84b63ba00cfb709a3f60275058217329", size = 26522, upload-time = "2026-04-02T04:18:47.916Z" } +sdist = { url = "https://files.pythonhosted.org/packages/44/14/279fd5defebbd560ede04aecd38f7651cccee7336f2264d0889d8c9a9d43/types_psutil-7.2.2.20260408.tar.gz", hash = "sha256:e8053450685965b8cd52afb62569073d00ea9967ae78bb45dff5f606847f97f2", size = 26556, upload-time = "2026-04-08T04:27:44.349Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/81/8a/f4b3ca3154e8a77df91eb7a28c208af721d48f8a4aca667f582523a0beff/types_psutil-7.2.2.20260402-py3-none-any.whl", hash = "sha256:653d1fd908e68cc0666754b16a0cee28efbded0c401caa5314d2aeea67f227cd", size = 32860, upload-time = "2026-04-02T04:18:46.671Z" }, + { url = "https://files.pythonhosted.org/packages/af/40/2fd92a4a1ee088c4dbcc44c977908d9869838d9cd2a2fa2e001352f56694/types_psutil-7.2.2.20260408-py3-none-any.whl", hash = "sha256:0c334f6f6bc9e9c24fca5c7d1f0b6971c961a0a2e3956dc5ce704722c01f9762", size = 32861, upload-time = "2026-04-08T04:27:42.929Z" }, ] [[package]] name = "types-psycopg2" -version = "2.9.21.20260223" +version = "2.9.21.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/55/1f/4daff0ce5e8e191844e65aaa793ed1b9cb40027dc2700906ecf2b6bcc0ed/types_psycopg2-2.9.21.20260223.tar.gz", hash = "sha256:78ed70de2e56bc6b5c26c8c1da8e9af54e49fdc3c94d1504609f3519e2b84f02", size = 27090, upload-time = "2026-02-23T04:11:18.177Z" } +sdist = { url = "https://files.pythonhosted.org/packages/cd/24/d8ae11a0c056535557aaabeb7d7838423abdfdcf1e5f8dfb2c04d316c65d/types_psycopg2-2.9.21.20260408.tar.gz", hash = "sha256:bb65cd12f53b6633077fd782607a33065e1f3bf585219c9f786b61ad2b72211c", size = 27078, upload-time = "2026-04-08T04:26:15.848Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8d/e7/c566df58410bc0728348b514e718f0b38fa0d248b5c10599a11494ba25d2/types_psycopg2-2.9.21.20260223-py3-none-any.whl", hash = "sha256:c6228ade72d813b0624f4c03feeb89471950ac27cd0506b5debed6f053086bc8", size = 24919, upload-time = "2026-02-23T04:11:17.214Z" }, + { url = "https://files.pythonhosted.org/packages/1a/fe/9aab9239640107b6e46afddcee578a916b8b98bfee36e03da5b0d2c95124/types_psycopg2-2.9.21.20260408-py3-none-any.whl", hash = "sha256:49b086bfc9e0ce901c6537403ead1c19c75275571040b037af0248a8e48c322f", size = 24921, upload-time = "2026-04-08T04:26:14.715Z" }, ] [[package]] name = "types-pygments" -version = "2.20.0.20260406" +version = "2.20.0.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "types-docutils" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/08/bd/d17c28a4c65c556bc4c4bc8f363aa2fbfc91b397e3c0019839d74d9ead31/types_pygments-2.20.0.20260406.tar.gz", hash = "sha256:d3ed7ecd7c34a382459d28ce624b87e1dee03d6844e43aa7590ef4b8c7c9dfce", size = 19486, upload-time = "2026-04-06T04:33:59.632Z" } +sdist = { url = "https://files.pythonhosted.org/packages/fe/89/4b443128fa540c54a8f7ecdeec225aab4818534167c4a2d133099dc00fa6/types_pygments-2.20.0.20260408.tar.gz", hash = "sha256:e8a56a3ab1aee7f4ed8f1876d2f62c96e0f41ede52405a7d30c888f3989d8f00", size = 21115, upload-time = "2026-04-08T04:34:24.29Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/eb/00/dca7518e6f99ce0f235ec1c6512593ee4bd25109ae1c912bf9ee836a26e1/types_pygments-2.20.0.20260406-py3-none-any.whl", hash = "sha256:6bb0c79874c304977e1c097f7007140e16fe78c443329154db803d7910d945b3", size = 27278, upload-time = "2026-04-06T04:33:58.744Z" }, + { url = "https://files.pythonhosted.org/packages/5e/d8/30924b38eef70caef6b05af5440c84d7673cea2a042e206f404c8100a88d/types_pygments-2.20.0.20260408-py3-none-any.whl", hash = "sha256:6d347d5967b5f0654b659a8b8461a870b207b7e60cd4d646bbc047f6a8db8e1e", size = 29055, upload-time = "2026-04-08T04:34:23.412Z" }, ] [[package]] name = "types-pymysql" -version = "1.1.0.20251220" +version = "1.1.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d3/59/e959dd6d2f8e3b3c3f058d79ac9ece328922a5a8770c707fe9c3a757481c/types_pymysql-1.1.0.20251220.tar.gz", hash = "sha256:ae1c3df32a777489431e2e9963880a0df48f6591e0aa2fd3a6fabd9dee6eca54", size = 22184, upload-time = "2025-12-20T03:07:38.689Z" } +sdist = { url = "https://files.pythonhosted.org/packages/b3/04/c3570f05ebab083f28698c829dddf754ffefc30aae4e29915610848e44db/types_pymysql-1.1.0.20260408.tar.gz", hash = "sha256:b784dc37908479e3767e2d794ab507b3674adb1c686ca3d13fc9e2960dbcb9ec", size = 22344, upload-time = "2026-04-08T04:27:47.651Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8b/fa/4f4d3bfca9ef6dd17d69ed18b96564c53b32d3ce774132308d0bee849f10/types_pymysql-1.1.0.20251220-py3-none-any.whl", hash = "sha256:fa1082af7dea6c53b6caa5784241924b1296ea3a8d3bd060417352c5e10c0618", size = 23067, upload-time = "2025-12-20T03:07:37.766Z" }, + { url = "https://files.pythonhosted.org/packages/70/b3/15dee33878709705a4cc83bcc1bb30e00e95bbe038b472cb1207a15b50a1/types_pymysql-1.1.0.20260408-py3-none-any.whl", hash = "sha256:da630647eaaa7a926a3907794f4067f269cd245b2c202c74aa3c6a3bd660a9db", size = 23071, upload-time = "2026-04-08T04:27:46.735Z" }, ] [[package]] @@ -6234,38 +6605,38 @@ wheels = [ [[package]] name = "types-python-dateutil" -version = "2.9.0.20260402" +version = "2.9.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/a7/30/c5d9efbff5422b20c9551dc5af237d1ab0c3d33729a9b3239a876ca47dd4/types_python_dateutil-2.9.0.20260402.tar.gz", hash = "sha256:a980142b9966713acb382c467e35c5cc4208a2f91b10b8d785a0ae6765df6c0b", size = 16941, upload-time = "2026-04-02T04:18:35.834Z" } +sdist = { url = "https://files.pythonhosted.org/packages/88/f3/2427775f80cd5e19a0a71ba8e5ab7645a01a852f43a5fd0ffc24f66338e0/types_python_dateutil-2.9.0.20260408.tar.gz", hash = "sha256:8b056ec01568674235f64ecbcef928972a5fac412f5aab09c516dfa2acfbb582", size = 16981, upload-time = "2026-04-08T04:28:10.995Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/e6/d7/fe753bf8329c8c3c1addcba1d2bf716c33898216757abb24f8b80f82d040/types_python_dateutil-2.9.0.20260402-py3-none-any.whl", hash = "sha256:7827e6a9c93587cc18e766944254d1351a2396262e4abe1510cbbd7601c5e01f", size = 18436, upload-time = "2026-04-02T04:18:34.806Z" }, + { url = "https://files.pythonhosted.org/packages/fd/c6/eeba37bfee282a6a97f889faef9352d6172c6a5088eb9a4daf570d9d748d/types_python_dateutil-2.9.0.20260408-py3-none-any.whl", hash = "sha256:473139d514a71c9d1fbd8bb328974bedcb1cc3dba57aad04ffa4157f483c216f", size = 18437, upload-time = "2026-04-08T04:28:10.095Z" }, ] [[package]] name = "types-python-http-client" -version = "3.3.7.20250708" +version = "3.3.7.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/55/a0/0ad93698a3ebc6846ca23aca20ff6f6f8ebe7b4f0c1de7f19e87c03dbe8f/types_python_http_client-3.3.7.20250708.tar.gz", hash = "sha256:5f85b32dc64671a4e5e016142169aa187c5abed0b196680944e4efd3d5ce3322", size = 7707, upload-time = "2025-07-08T03:14:36.197Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2a/30/f741f5edce6b02a838a30064360f5480510d5f2861561f44c5e33bc1dd96/types_python_http_client-3.3.7.20260408.tar.gz", hash = "sha256:ae84aadeec645ede7e602714090e6c8ebca97dbf28af509ac5eeccfc300174a2", size = 7684, upload-time = "2026-04-08T04:27:09.67Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/85/4f/b88274658cf489e35175be8571c970e9a1219713bafd8fc9e166d7351ecb/types_python_http_client-3.3.7.20250708-py3-none-any.whl", hash = "sha256:e2fc253859decab36713d82fc7f205868c3ddeaee79dbb55956ad9ca77abe12b", size = 8890, upload-time = "2025-07-08T03:14:35.506Z" }, + { url = "https://files.pythonhosted.org/packages/f1/d8/45c6c04924086e8856e7f9a33a38ee713992d9ae9cd6d449de97badcba3c/types_python_http_client-3.3.7.20260408-py3-none-any.whl", hash = "sha256:3f310282e0fe2a18c5291f935538e1f97b9f80d2c5571aad155e66806719017c", size = 8851, upload-time = "2026-04-08T04:27:08.877Z" }, ] [[package]] name = "types-pywin32" -version = "311.0.0.20260402" +version = "311.0.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/b3/f0/fc3c923b5d7822f3a93c7b242a69de0e1945e7c153cc5367074621a6509f/types_pywin32-311.0.0.20260402.tar.gz", hash = "sha256:637f041065f02fb49cbaba530ae8cf2e483b5d2c145a9bf97fd084c3e913c7e3", size = 332312, upload-time = "2026-04-02T04:18:52.748Z" } +sdist = { url = "https://files.pythonhosted.org/packages/30/40/0d182fbf578f30f7ff2b07b8fe494cc42178992d0087a739c70990adca8c/types_pywin32-311.0.0.20260408.tar.gz", hash = "sha256:cb86c6beae20195165e770a65c3ee707746dc777ca8e03e4f06a66d4013a4bd0", size = 332341, upload-time = "2026-04-08T04:33:29.824Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/80/0c/a2ee20785df4ebcda6d6ec62d58b7c08a37072f9d00cda4f9548e9c8e5aa/types_pywin32-311.0.0.20260402-py3-none-any.whl", hash = "sha256:4db644fcf40ee85a3ee2551f110d009e427c01569ed4670bb53cfe999df0929f", size = 395413, upload-time = "2026-04-02T04:18:51.529Z" }, + { url = "https://files.pythonhosted.org/packages/0d/b5/3cc67baf622805270d84e2252dfa130daf7ccd49795f80b51350abb91bd9/types_pywin32-311.0.0.20260408-py3-none-any.whl", hash = "sha256:0b691da60aaed0ee7169a69268bad1e2051eb52f4acc10248c103aadcd1f2451", size = 395413, upload-time = "2026-04-08T04:33:28.476Z" }, ] [[package]] name = "types-pyyaml" -version = "6.0.12.20250915" +version = "6.0.12.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/7e/69/3c51b36d04da19b92f9e815be12753125bd8bc247ba0470a982e6979e71c/types_pyyaml-6.0.12.20250915.tar.gz", hash = "sha256:0f8b54a528c303f0e6f7165687dd33fafa81c807fcac23f632b63aa624ced1d3", size = 17522, upload-time = "2025-09-15T03:01:00.728Z" } +sdist = { url = "https://files.pythonhosted.org/packages/74/73/b759b1e413c31034cc01ecdfb96b38115d0ab4db55a752a3929f0cd449fd/types_pyyaml-6.0.12.20260408.tar.gz", hash = "sha256:92a73f2b8d7f39ef392a38131f76b970f8c66e4c42b3125ae872b7c93b556307", size = 17735, upload-time = "2026-04-08T04:30:50.974Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/bd/e0/1eed384f02555dde685fff1a1ac805c1c7dcb6dd019c916fe659b1c1f9ec/types_pyyaml-6.0.12.20250915-py3-none-any.whl", hash = "sha256:e7d4d9e064e89a3b3cae120b4990cd370874d2bf12fa5f46c97018dd5d3c9ab6", size = 20338, upload-time = "2025-09-15T03:00:59.218Z" }, + { url = "https://files.pythonhosted.org/packages/1c/f0/c391068b86abb708882c6d75a08cd7d25b2c7227dab527b3a3685a3c635b/types_pyyaml-6.0.12.20260408-py3-none-any.whl", hash = "sha256:fbc42037d12159d9c801ebfcc79ebd28335a7c13b08a4cfbc6916df78fee9384", size = 20339, upload-time = "2026-04-08T04:30:50.113Z" }, ] [[package]] @@ -6283,11 +6654,11 @@ wheels = [ [[package]] name = "types-regex" -version = "2026.4.4.20260405" +version = "2026.4.4.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/74/9c/dd7b36fe87902a161a69c4a6959e3a6afae09c2c600916beb1aecd300870/types_regex-2026.4.4.20260405.tar.gz", hash = "sha256:993b76a255d9b83fd68eed2fc52b2746be51a93b833796be4fcf9412efa0da51", size = 13143, upload-time = "2026-04-05T04:26:56.614Z" } +sdist = { url = "https://files.pythonhosted.org/packages/92/42/d7c691fc5a8a8ecfba3f23c1c4c087a089af0767610d88c29201193d8f60/types_regex-2026.4.4.20260408.tar.gz", hash = "sha256:86b2975ff11b06e7f538839821510daea2566d9cb18bb8acde47834315409cf9", size = 13182, upload-time = "2026-04-08T04:31:11.887Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/51/83/5dbae203616699890efcdb2a2670d62baf5ed93634f75d793157f1edefb3/types_regex-2026.4.4.20260405-py3-none-any.whl", hash = "sha256:40443cb88c43b9940dd4c904e251be7e65dab3798b2cf6f5ff19501ae99b2ab5", size = 11119, upload-time = "2026-04-05T04:26:55.636Z" }, + { url = "https://files.pythonhosted.org/packages/e1/92/e109654a804d11d9b60d67c7b29d64b2beac6b2e3209ea075e268e5a1021/types_regex-2026.4.4.20260408-py3-none-any.whl", hash = "sha256:d436bcc409abf9b06747b7e038014afc6d40ef7b72329655c353a1955534068f", size = 11116, upload-time = "2026-04-08T04:31:11.01Z" }, ] [[package]] @@ -6313,67 +6684,67 @@ wheels = [ [[package]] name = "types-setuptools" -version = "82.0.0.20260402" +version = "82.0.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/e9/f8/74f8a76b4311e70772c0df8f2d432040a3b0facd7bcce6b72b0b26e1746b/types_setuptools-82.0.0.20260402.tar.gz", hash = "sha256:63d2b10ba7958396ad79bbc24d2f6311484e452daad4637ffd40407983a27069", size = 44805, upload-time = "2026-04-02T04:17:49.229Z" } +sdist = { url = "https://files.pythonhosted.org/packages/c3/12/3464b410c50420dd4674fa5fe9d3880711c1dbe1a06f5fe4960ee9067b9e/types_setuptools-82.0.0.20260408.tar.gz", hash = "sha256:036c68caf7e672a699f5ebbf914708d40644c14e05298bc49f7272be91cf43d3", size = 44861, upload-time = "2026-04-08T04:29:33.292Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/0e/e9/22451997f70ac2c5f18dc5f988750c986011fb049d9021767277119e63fa/types_setuptools-82.0.0.20260402-py3-none-any.whl", hash = "sha256:4b9a9f6c3c4c65107a3956ad6a6acbccec38e398ff6d5f78d5df7f103dadb8d6", size = 68429, upload-time = "2026-04-02T04:17:48.11Z" }, + { url = "https://files.pythonhosted.org/packages/3d/e1/46a4fc3ef03aabf5d18bac9df5cf37c6b02c3bddf3e05c3533f4b4588331/types_setuptools-82.0.0.20260408-py3-none-any.whl", hash = "sha256:ece0a215cdfa6463a65fd6f68bd940f39e455729300ddfe61cab1147ed1d2462", size = 68428, upload-time = "2026-04-08T04:29:32.175Z" }, ] [[package]] name = "types-shapely" -version = "2.1.0.20260402" +version = "2.1.0.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "numpy" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a3/f7/46e95b09434105d7b772d05657495f2900bae8e108fdf4e6d8b5902aa28c/types_shapely-2.1.0.20260402.tar.gz", hash = "sha256:0eb592328170433b4724430a64c309bf07ba69d5d11489d3dba21382d78f5297", size = 26481, upload-time = "2026-04-02T04:20:03.104Z" } +sdist = { url = "https://files.pythonhosted.org/packages/10/8d/bf9e3eb51249601e22d797481999a06fb34998c4db5c76804394f8a3fa28/types_shapely-2.1.0.20260408.tar.gz", hash = "sha256:8552549d9429baa52ec4331e43b5db3b334fc3a7f30da48663010b7454b1451c", size = 26529, upload-time = "2026-04-08T04:34:42.111Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/14/3a/1aa3a62f5b85d4a9e649e7b42842a9e5503fef7eb50c480137a6b94f8bb1/types_shapely-2.1.0.20260402-py3-none-any.whl", hash = "sha256:8d70a16f615a104fd8abdd73e684d4e83b9dedf31d6432ecf86945b5ef0e35de", size = 37817, upload-time = "2026-04-02T04:20:02.17Z" }, + { url = "https://files.pythonhosted.org/packages/8e/3d/cbec691f56e71636192a07bf6809f598bed06d869b03b4e2b1ad2f7df032/types_shapely-2.1.0.20260408-py3-none-any.whl", hash = "sha256:8a31e2b074342a363f0c9d0c7d6e1e6c0dcce302a92ef94d64d0ca2a2b94a1d1", size = 37818, upload-time = "2026-04-08T04:34:41.243Z" }, ] [[package]] name = "types-simplejson" -version = "3.20.0.20260402" +version = "3.20.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/94/93/2ff2f4b8ccd942ee3a4b62c013d2c1779e416d303950060ed8b3f1a4fc11/types_simplejson-3.20.0.20260402.tar.gz", hash = "sha256:ee2bbf65830fe93270a1c0406f3474c952fe1232532c7b6f3eb9500edb308c5a", size = 10650, upload-time = "2026-04-02T04:19:26.266Z" } +sdist = { url = "https://files.pythonhosted.org/packages/9f/36/e319fd0f6d906dbf7c2c03eef17db77ef461197a75b253fccd9c7c695d3e/types_simplejson-3.20.0.20260408.tar.gz", hash = "sha256:0b0e1bf61e70f81dfe6ef4c2b9c02e39403848c0652df334e7a430c3a26c06b3", size = 10693, upload-time = "2026-04-08T04:28:07.8Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2c/2a/7ba2bede9c2b25fb338d0bda9925a23b73a5ac99fd97304ebe067c090e33/types_simplejson-3.20.0.20260402-py3-none-any.whl", hash = "sha256:b3bdef21bc24fee26b80385ffea5163b6b10381089aa619fe2f8f8d3790e6148", size = 10419, upload-time = "2026-04-02T04:19:25.464Z" }, + { url = "https://files.pythonhosted.org/packages/22/c0/01a5a4c3948c2269cf9d727e5e66a8b404e03beb4f9522680a3f71097011/types_simplejson-3.20.0.20260408-py3-none-any.whl", hash = "sha256:f9e542199cb159ed34ad54b6ceb3dc9af890c256b810ad1bd7c69c61db7d2236", size = 10415, upload-time = "2026-04-08T04:28:06.984Z" }, ] [[package]] name = "types-six" -version = "1.17.0.20251009" +version = "1.17.0.20260408" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ee/f7/448215bc7695cfa0c8a7e0dcfa54fe31b1d52fb87004fed32e659dd85c80/types_six-1.17.0.20251009.tar.gz", hash = "sha256:efe03064ecd0ffb0f7afe133990a2398d8493d8d1c1cc10ff3dfe476d57ba44f", size = 15552, upload-time = "2025-10-09T02:54:26.02Z" } +sdist = { url = "https://files.pythonhosted.org/packages/9d/95/14bb40b2fa8f19234d60b370bfa1ff64b42509b6d2dee070132949ce4f80/types_six-1.17.0.20260408.tar.gz", hash = "sha256:b28579aedb204d07abac52e49c87e2b4c03cb6171bd764bd9b7775ba58fffaba", size = 15766, upload-time = "2026-04-08T04:26:23.54Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/b8/2f/94baa623421940e3eb5d2fc63570ebb046f2bb4d9573b8787edab3ed2526/types_six-1.17.0.20251009-py3-none-any.whl", hash = "sha256:2494f4c2a58ada0edfe01ea84b58468732e43394c572d9cf5b1dd06d86c487a3", size = 19935, upload-time = "2025-10-09T02:54:25.096Z" }, + { url = "https://files.pythonhosted.org/packages/83/04/3e9c382043579b5170c3bc38d13154d48e8ef2c89c4473748a33e3c9bccd/types_six-1.17.0.20260408-py3-none-any.whl", hash = "sha256:02208fa1099944ed0c8f8de42f065ffd63c55cd7b59f49be49802626b8d58318", size = 19937, upload-time = "2026-04-08T04:26:22.259Z" }, ] [[package]] name = "types-tensorflow" -version = "2.18.0.20260402" +version = "2.18.0.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "numpy" }, { name = "types-protobuf" }, { name = "types-requests" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/b9/d9/1ca68336ce7ad8c4a19001fce85f47ffae9d7ac335e5ddd73497b6bfbca4/types_tensorflow-2.18.0.20260402.tar.gz", hash = "sha256:607c4a5895d44c88c7c465410093ee050aa760c3cedab5b9662f475c5e2137d3", size = 259058, upload-time = "2026-04-02T04:22:39.113Z" } +sdist = { url = "https://files.pythonhosted.org/packages/cb/15/d9f1a54e75008fde3dc48f333b4d3c86f0d27b822e3a9c109214f8957ae6/types_tensorflow-2.18.0.20260408.tar.gz", hash = "sha256:68bfbcc76dd9e314eae0a91964edf463c52fc0e3d60189542efbf67006e71015", size = 259103, upload-time = "2026-04-08T04:36:45.263Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/c1/6c/0ad58c7246a5369ceb2ae16c146ac0684a0827f499a8141fc3d13743c38b/types_tensorflow-2.18.0.20260402-py3-none-any.whl", hash = "sha256:0d4a74921c457ade8f46eb09cf728a1732156678e497ce15a88b9c0c16dc2fe5", size = 329776, upload-time = "2026-04-02T04:22:37.903Z" }, + { url = "https://files.pythonhosted.org/packages/11/64/4005df91e916f586d9f80c3f052f2ae41afbcd9c9a54d33005fabeefcaab/types_tensorflow-2.18.0.20260408-py3-none-any.whl", hash = "sha256:01cff182dd6c38c300b27b9d1a26791f04607d914fa9429e5f85766c3bc0d71d", size = 329775, upload-time = "2026-04-08T04:36:43.863Z" }, ] [[package]] name = "types-tqdm" -version = "4.67.3.20260402" +version = "4.67.3.20260408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "types-requests" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/54/42/e9e6688891d8db77b5795ec02b329524170892ff81bec63c4c4ca7425b30/types_tqdm-4.67.3.20260402.tar.gz", hash = "sha256:e0739f3bc5d1c801999a202f0537280aa1bc2e669c49f5be91bfb99376690624", size = 18077, upload-time = "2026-04-02T04:22:23.049Z" } +sdist = { url = "https://files.pythonhosted.org/packages/43/42/2e2968e68a694d3dac3a47aa0df06e46be1a6eef498e5bd15f4c54674eb9/types_tqdm-4.67.3.20260408.tar.gz", hash = "sha256:fd849a79891ae7136ed47541aface15c35bd9a13160fa8a93e42e10f60cf4c8d", size = 18119, upload-time = "2026-04-08T04:36:52.488Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/4f/73/a6cf75de5be376d7b57ce6c934ae9bc90aa5be6ada4ac50a99ecbdf9763e/types_tqdm-4.67.3.20260402-py3-none-any.whl", hash = "sha256:b5d1a65fe3286e1a855e51ddebf63d3641daf9bad285afd1ec56808eb59df76e", size = 24562, upload-time = "2026-04-02T04:22:22.114Z" }, + { url = "https://files.pythonhosted.org/packages/14/5d/7dedddc32ab7bc2344ece772b5e0f03ec63a1d47ad259696689713c1cf50/types_tqdm-4.67.3.20260408-py3-none-any.whl", hash = "sha256:3b9ed74ebef04df8f53d470ffdc84348e93496d8acafa08bf79fafce0f2f5b5d", size = 24561, upload-time = "2026-04-08T04:36:51.538Z" }, ] [[package]] @@ -6760,13 +7131,13 @@ wheels = [ [[package]] name = "weave" -version = "0.52.17" +version = "0.52.36" source = { registry = "https://pypi.org/simple" } dependencies = [ + { name = "cachetools" }, { name = "click" }, - { name = "diskcache" }, - { name = "eval-type-backport" }, - { name = "gql", extra = ["aiohttp", "requests"] }, + { name = "diskcache-weave" }, + { name = "gql", extra = ["httpx"] }, { name = "jsonschema" }, { name = "packaging" }, { name = "polyfile-weave" }, @@ -6776,27 +7147,26 @@ dependencies = [ { name = "tzdata", marker = "sys_platform == 'win32'" }, { name = "wandb" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/09/95/27e05d954972a83372a3ceb6b5db6136bc4f649fa69d8009b27c144ca111/weave-0.52.17.tar.gz", hash = "sha256:940aaf892b65c72c67cb893e97ed5339136a4b33a7ea85d52ed36671111826ef", size = 609149, upload-time = "2025-11-13T22:09:51.045Z" } +sdist = { url = "https://files.pythonhosted.org/packages/69/ee/63064875e0d1d7bf261484da7a8ef1d447d38f739b3e93e4d8673d51c882/weave-0.52.36.tar.gz", hash = "sha256:f58b37786de5444914e408e64026c3131b5c4417e6889d5a61fdcbec12c8e8dd", size = 793026, upload-time = "2026-04-01T17:23:50.671Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ed/0b/ae7860d2b0c02e7efab26815a9a5286d3b0f9f4e0356446f2896351bf770/weave-0.52.17-py3-none-any.whl", hash = "sha256:5772ef82521a033829c921115c5779399581a7ae06d81dfd527126e2115d16d4", size = 765887, upload-time = "2025-11-13T22:09:49.161Z" }, + { url = "https://files.pythonhosted.org/packages/78/34/57a0843b3016e3dc63660cd06bfa52322f3333225ae7e7cfa6db97a59f8c/weave-0.52.36-py3-none-any.whl", hash = "sha256:4ff5b53323d20cc321aec665a4b4da746d6d85d432eda2ccca0e85bc8891649d", size = 983539, upload-time = "2026-04-01T17:23:48.819Z" }, ] [[package]] name = "weaviate-client" -version = "4.20.4" +version = "4.20.5" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "authlib" }, - { name = "deprecation" }, { name = "grpcio" }, { name = "httpx" }, { name = "protobuf" }, { name = "pydantic" }, { name = "validators" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/c9/1c/82b560254f612f95b644849d86e092da6407f17965d61e22b583b30b72cf/weaviate_client-4.20.4.tar.gz", hash = "sha256:08703234b59e4e03739f39e740e9e88cb50cd0aa147d9408b88ea6ce995c37b6", size = 809529, upload-time = "2026-03-10T15:08:13.845Z" } +sdist = { url = "https://files.pythonhosted.org/packages/81/c8/aa47cfa0a2b1e260846eaf04ce4cc2ab1bb03f29d793e7b009bc3e3babc7/weaviate_client-4.20.5.tar.gz", hash = "sha256:c07c688f0e6b78723dfecbcfeebf897cefa75f1a89c63ebd84aab88c662e4394", size = 811866, upload-time = "2026-04-09T20:08:45.268Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/1d/d7/9461c3e7d8c44080d2307078e33dc7fefefa3171c8f930f2b83a5cbf67f2/weaviate_client-4.20.4-py3-none-any.whl", hash = "sha256:7af3a213bebcb30dcf456b0db8b6225d8926106b835d7b883276de9dc1c301fe", size = 619517, upload-time = "2026-03-10T15:08:12.047Z" }, + { url = "https://files.pythonhosted.org/packages/58/ba/d55f1a665802f736436d09198afc0d00806a405aadb9977193a2f009cfcb/weaviate_client-4.20.5-py3-none-any.whl", hash = "sha256:3f508e3dc08257f85230f9d2ea0562443ed0715e7e89156f22b7e950d6c08cdb", size = 620766, upload-time = "2026-04-09T20:08:43.215Z" }, ] [[package]] @@ -6877,6 +7247,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/ff/21/abdedb4cdf6ff41ebf01a74087740a709e2edb146490e4d9beea054b0b7a/wrapt-1.16.0-py3-none-any.whl", hash = "sha256:6906c4100a8fcbf2fa735f6059214bb13b97f75b1a61777fcf6432121ef12ef1", size = 23362, upload-time = "2023-11-09T06:33:28.271Z" }, ] +[[package]] +name = "wsproto" +version = "1.3.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "h11" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/c7/79/12135bdf8b9c9367b8701c2c19a14c913c120b882d50b014ca0d38083c2c/wsproto-1.3.2.tar.gz", hash = "sha256:b86885dcf294e15204919950f666e06ffc6c7c114ca900b060d6e16293528294", size = 50116, upload-time = "2025-11-20T18:18:01.871Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a4/f5/10b68b7b1544245097b2a1b8238f66f2fc6dcaeb24ba5d917f52bd2eed4f/wsproto-1.3.2-py3-none-any.whl", hash = "sha256:61eea322cdf56e8cc904bd3ad7573359a242ba65688716b0710a5eb12beab584", size = 24405, upload-time = "2025-11-20T18:18:00.454Z" }, +] + [[package]] name = "xinference-client" version = "2.4.0" diff --git a/dev/pytest/pytest_unit_tests.sh b/dev/pytest/pytest_unit_tests.sh index 1d4ff4d86f..962532de81 100755 --- a/dev/pytest/pytest_unit_tests.sh +++ b/dev/pytest/pytest_unit_tests.sh @@ -10,7 +10,11 @@ PYTEST_XDIST_ARGS="${PYTEST_XDIST_ARGS:--n auto}" # Run most tests in parallel (excluding controllers which have import conflicts with xdist) # Controller tests have module-level side effects (Flask route registration) that cause # race conditions when imported concurrently by multiple pytest-xdist workers. -pytest --timeout "${PYTEST_TIMEOUT}" ${PYTEST_XDIST_ARGS} api/tests/unit_tests --ignore=api/tests/unit_tests/controllers +pytest --timeout "${PYTEST_TIMEOUT}" ${PYTEST_XDIST_ARGS} \ + api/tests/unit_tests \ + api/providers/vdb/*/tests/unit_tests \ + --ignore=api/tests/unit_tests/controllers # Run controller tests sequentially to avoid import race conditions pytest --timeout "${PYTEST_TIMEOUT}" --cov-append api/tests/unit_tests/controllers + diff --git a/dev/pytest/pytest_vdb.sh b/dev/pytest/pytest_vdb.sh index 126aebf7bd..c1f129bee0 100755 --- a/dev/pytest/pytest_vdb.sh +++ b/dev/pytest/pytest_vdb.sh @@ -6,19 +6,7 @@ cd "$SCRIPT_DIR/../.." PYTEST_TIMEOUT="${PYTEST_TIMEOUT:-180}" -pytest --timeout "${PYTEST_TIMEOUT}" api/tests/integration_tests/vdb/chroma \ - api/tests/integration_tests/vdb/milvus \ - api/tests/integration_tests/vdb/pgvecto_rs \ - api/tests/integration_tests/vdb/pgvector \ - api/tests/integration_tests/vdb/qdrant \ - api/tests/integration_tests/vdb/weaviate \ - api/tests/integration_tests/vdb/elasticsearch \ - api/tests/integration_tests/vdb/vikingdb \ - api/tests/integration_tests/vdb/baidu \ - api/tests/integration_tests/vdb/tcvectordb \ - api/tests/integration_tests/vdb/upstash \ - api/tests/integration_tests/vdb/couchbase \ - api/tests/integration_tests/vdb/oceanbase \ - api/tests/integration_tests/vdb/tidb_vector \ - api/tests/integration_tests/vdb/huawei \ - api/tests/integration_tests/vdb/hologres \ +uv sync --project api --group dev + +uv run --project api pytest --timeout "${PYTEST_TIMEOUT}" \ + api/providers/vdb/*/tests/integration_tests \ diff --git a/dev/start-docker-compose b/dev/start-docker-compose index aa4f66a6cf..1321c3210f 100755 --- a/dev/start-docker-compose +++ b/dev/start-docker-compose @@ -1,8 +1,8 @@ -#!/usr/bin/env bash -set -euo pipefail - -SCRIPT_DIR="$(dirname "$(realpath "$0")")" -ROOT="$(dirname "$SCRIPT_DIR")" - -cd "$ROOT/docker" -docker compose --env-file middleware.env -f docker-compose.middleware.yaml -p dify up -d +#!/usr/bin/env bash +set -euo pipefail + +SCRIPT_DIR="$(dirname "$(realpath "$0")")" +ROOT="$(dirname "$SCRIPT_DIR")" + +cd "$ROOT/docker" +docker compose --env-file middleware.env -f docker-compose.middleware.yaml -p dify up -d diff --git a/docker/.env.example b/docker/.env.example index f20d57c71a..8176155698 100644 --- a/docker/.env.example +++ b/docker/.env.example @@ -132,6 +132,10 @@ MIGRATION_ENABLED=true # The default value is 300 seconds. FILES_ACCESS_TIMEOUT=300 +# Collaboration mode toggle +# To open collaboration features, you also need to set SERVER_WORKER_CLASS=geventwebsocket.gunicorn.workers.GeventWebSocketWorker +ENABLE_COLLABORATION_MODE=false + # Access token expiration time in minutes ACCESS_TOKEN_EXPIRE_MINUTES=60 @@ -167,6 +171,7 @@ SERVER_WORKER_AMOUNT=1 # Modifying it may also decrease throughput. # # It is strongly discouraged to change this parameter. +# If enable collaboration mode, it must be set to geventwebsocket.gunicorn.workers.GeventWebSocketWorker SERVER_WORKER_CLASS=gevent # Default number of worker connections, the default is 10. @@ -351,6 +356,9 @@ REDIS_SSL_CERTFILE= REDIS_SSL_KEYFILE= # Path to client private key file for SSL authentication REDIS_DB=0 +# Optional global prefix for Redis keys, topics, streams, and Celery Redis transport artifacts. +# Leave empty to preserve current unprefixed behavior. +REDIS_KEY_PREFIX= # Optional: limit total Redis connections used by API/Worker (unset for default) # Align with API's REDIS_MAX_CONNECTIONS in configs REDIS_MAX_CONNECTIONS= @@ -373,6 +381,20 @@ REDIS_USE_CLUSTERS=false REDIS_CLUSTERS= REDIS_CLUSTERS_PASSWORD= +# Redis connection and retry configuration +# max redis retry +REDIS_RETRY_RETRIES=3 +# Base delay (in seconds) for exponential backoff on retries +REDIS_RETRY_BACKOFF_BASE=1.0 +# Cap (in seconds) for exponential backoff on retries +REDIS_RETRY_BACKOFF_CAP=10.0 +# Timeout (in seconds) for Redis socket operations +REDIS_SOCKET_TIMEOUT=5.0 +# Timeout (in seconds) for establishing a Redis connection +REDIS_SOCKET_CONNECT_TIMEOUT=5.0 +# Interval (in seconds) for Redis health checks +REDIS_HEALTH_CHECK_INTERVAL=30 + # ------------------------------ # Celery Configuration # ------------------------------ @@ -411,6 +433,8 @@ CONSOLE_CORS_ALLOW_ORIGINS=* COOKIE_DOMAIN= # When the frontend and backend run on different subdomains, set NEXT_PUBLIC_COOKIE_DOMAIN=1. NEXT_PUBLIC_COOKIE_DOMAIN= +# WebSocket server URL. +NEXT_PUBLIC_SOCKET_URL=ws://localhost NEXT_PUBLIC_BATCH_CONCURRENCY=5 # ------------------------------ @@ -455,6 +479,7 @@ S3_REGION=us-east-1 S3_BUCKET_NAME=difyai S3_ACCESS_KEY= S3_SECRET_KEY= +S3_ADDRESS_STYLE=auto # Whether to use AWS managed IAM roles for authenticating with the S3 service. # If set to false, the access key and secret key must be provided. S3_USE_AWS_MANAGED_IAM=false @@ -1173,6 +1198,14 @@ MAX_ITERATIONS_NUM=99 # The timeout for the text generation in millisecond TEXT_GENERATION_TIMEOUT_MS=60000 +# Enable the experimental vinext runtime shipped in the image. +EXPERIMENTAL_ENABLE_VINEXT=false + +# Allow inline style attributes in Markdown rendering. +# Enable this if your workflows use Jinja2 templates with styled HTML. +# Only recommended for self-hosted deployments with trusted content. +ALLOW_INLINE_STYLES=false + # Allow rendering unsafe URLs which have "data:" scheme. ALLOW_UNSAFE_DATA_SCHEME=false diff --git a/docker/README.md b/docker/README.md index 4c40317f37..3130fa9886 100644 --- a/docker/README.md +++ b/docker/README.md @@ -88,6 +88,7 @@ The `.env.example` file provided in the Docker setup is extensive and covers a w 1. **Redis Configuration**: - `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`: Redis server connection settings. + - `REDIS_KEY_PREFIX`: Optional global namespace prefix for Redis keys, topics, streams, and Celery Redis transport artifacts. 1. **Celery Configuration**: diff --git a/docker/dify-env-sync.py b/docker/dify-env-sync.py index d7c762748c..afa39d8451 100755 --- a/docker/dify-env-sync.py +++ b/docker/dify-env-sync.py @@ -172,7 +172,10 @@ def analyze_value_change(current: str, recommended: str) -> str | None: return None # Boolean comparison - if current.lower() in {"true", "false"} and recommended.lower() in {"true", "false"}: + if current.lower() in {"true", "false"} and recommended.lower() in { + "true", + "false", + }: if current.lower() != recommended.lower(): return colorize(BLUE, f" -> Boolean value change ({current} -> {recommended})") return None @@ -187,7 +190,10 @@ def analyze_value_change(current: str, recommended: str) -> str | None: # String length if len(current) != len(recommended): - return colorize(YELLOW, f" -> String length change ({len(current)} -> {len(recommended)} characters)") + return colorize( + YELLOW, + f" -> String length change ({len(current)} -> {len(recommended)} characters)", + ) return None @@ -311,7 +317,10 @@ def sync_env_file(work_dir: Path, env_vars: dict[str, str], diffs: dict[str, tup env_var_pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)\s*=") - with example_file.open(encoding="utf-8") as src, new_env_file.open("w", encoding="utf-8") as dst: + with ( + example_file.open(encoding="utf-8") as src, + new_env_file.open("w", encoding="utf-8") as dst, + ): for line in src: raw_line = line.rstrip("\n") match = env_var_pattern.match(raw_line) diff --git a/docker/docker-compose-template.yaml b/docker/docker-compose-template.yaml index 5234202a62..888f96332c 100644 --- a/docker/docker-compose-template.yaml +++ b/docker/docker-compose-template.yaml @@ -159,11 +159,14 @@ services: APP_API_URL: ${APP_API_URL:-} AMPLITUDE_API_KEY: ${AMPLITUDE_API_KEY:-} NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN:-} + NEXT_PUBLIC_SOCKET_URL: ${NEXT_PUBLIC_SOCKET_URL:-ws://localhost} SENTRY_DSN: ${WEB_SENTRY_DSN:-} NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0} + EXPERIMENTAL_ENABLE_VINEXT: ${EXPERIMENTAL_ENABLE_VINEXT:-false} TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000} CSP_WHITELIST: ${CSP_WHITELIST:-} ALLOW_EMBED: ${ALLOW_EMBED:-false} + ALLOW_INLINE_STYLES: ${ALLOW_INLINE_STYLES:-false} ALLOW_UNSAFE_DATA_SCHEME: ${ALLOW_UNSAFE_DATA_SCHEME:-false} MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai} MARKETPLACE_URL: ${MARKETPLACE_URL:-https://marketplace.dify.ai} diff --git a/docker/docker-compose.yaml b/docker/docker-compose.yaml index d03835e2b0..a10fdf77c6 100644 --- a/docker/docker-compose.yaml +++ b/docker/docker-compose.yaml @@ -34,6 +34,7 @@ x-shared-env: &shared-api-worker-env OPENAI_API_BASE: ${OPENAI_API_BASE:-https://api.openai.com/v1} MIGRATION_ENABLED: ${MIGRATION_ENABLED:-true} FILES_ACCESS_TIMEOUT: ${FILES_ACCESS_TIMEOUT:-300} + ENABLE_COLLABORATION_MODE: ${ENABLE_COLLABORATION_MODE:-false} ACCESS_TOKEN_EXPIRE_MINUTES: ${ACCESS_TOKEN_EXPIRE_MINUTES:-60} REFRESH_TOKEN_EXPIRE_DAYS: ${REFRESH_TOKEN_EXPIRE_DAYS:-30} APP_DEFAULT_ACTIVE_REQUESTS: ${APP_DEFAULT_ACTIVE_REQUESTS:-0} @@ -90,6 +91,7 @@ x-shared-env: &shared-api-worker-env REDIS_SSL_CERTFILE: ${REDIS_SSL_CERTFILE:-} REDIS_SSL_KEYFILE: ${REDIS_SSL_KEYFILE:-} REDIS_DB: ${REDIS_DB:-0} + REDIS_KEY_PREFIX: ${REDIS_KEY_PREFIX:-} REDIS_MAX_CONNECTIONS: ${REDIS_MAX_CONNECTIONS:-} REDIS_USE_SENTINEL: ${REDIS_USE_SENTINEL:-false} REDIS_SENTINELS: ${REDIS_SENTINELS:-} @@ -100,6 +102,12 @@ x-shared-env: &shared-api-worker-env REDIS_USE_CLUSTERS: ${REDIS_USE_CLUSTERS:-false} REDIS_CLUSTERS: ${REDIS_CLUSTERS:-} REDIS_CLUSTERS_PASSWORD: ${REDIS_CLUSTERS_PASSWORD:-} + REDIS_RETRY_RETRIES: ${REDIS_RETRY_RETRIES:-3} + REDIS_RETRY_BACKOFF_BASE: ${REDIS_RETRY_BACKOFF_BASE:-1.0} + REDIS_RETRY_BACKOFF_CAP: ${REDIS_RETRY_BACKOFF_CAP:-10.0} + REDIS_SOCKET_TIMEOUT: ${REDIS_SOCKET_TIMEOUT:-5.0} + REDIS_SOCKET_CONNECT_TIMEOUT: ${REDIS_SOCKET_CONNECT_TIMEOUT:-5.0} + REDIS_HEALTH_CHECK_INTERVAL: ${REDIS_HEALTH_CHECK_INTERVAL:-30} CELERY_BROKER_URL: ${CELERY_BROKER_URL:-redis://:difyai123456@redis:6379/1} CELERY_BACKEND: ${CELERY_BACKEND:-redis} BROKER_USE_SSL: ${BROKER_USE_SSL:-false} @@ -112,6 +120,7 @@ x-shared-env: &shared-api-worker-env CONSOLE_CORS_ALLOW_ORIGINS: ${CONSOLE_CORS_ALLOW_ORIGINS:-*} COOKIE_DOMAIN: ${COOKIE_DOMAIN:-} NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN:-} + NEXT_PUBLIC_SOCKET_URL: ${NEXT_PUBLIC_SOCKET_URL:-ws://localhost} NEXT_PUBLIC_BATCH_CONCURRENCY: ${NEXT_PUBLIC_BATCH_CONCURRENCY:-5} STORAGE_TYPE: ${STORAGE_TYPE:-opendal} OPENDAL_SCHEME: ${OPENDAL_SCHEME:-fs} @@ -125,6 +134,7 @@ x-shared-env: &shared-api-worker-env S3_BUCKET_NAME: ${S3_BUCKET_NAME:-difyai} S3_ACCESS_KEY: ${S3_ACCESS_KEY:-} S3_SECRET_KEY: ${S3_SECRET_KEY:-} + S3_ADDRESS_STYLE: ${S3_ADDRESS_STYLE:-auto} S3_USE_AWS_MANAGED_IAM: ${S3_USE_AWS_MANAGED_IAM:-false} ARCHIVE_STORAGE_ENABLED: ${ARCHIVE_STORAGE_ENABLED:-false} ARCHIVE_STORAGE_ENDPOINT: ${ARCHIVE_STORAGE_ENDPOINT:-} @@ -509,6 +519,8 @@ x-shared-env: &shared-api-worker-env MAX_PARALLEL_LIMIT: ${MAX_PARALLEL_LIMIT:-10} MAX_ITERATIONS_NUM: ${MAX_ITERATIONS_NUM:-99} TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000} + EXPERIMENTAL_ENABLE_VINEXT: ${EXPERIMENTAL_ENABLE_VINEXT:-false} + ALLOW_INLINE_STYLES: ${ALLOW_INLINE_STYLES:-false} ALLOW_UNSAFE_DATA_SCHEME: ${ALLOW_UNSAFE_DATA_SCHEME:-false} MAX_TREE_DEPTH: ${MAX_TREE_DEPTH:-50} PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata} @@ -868,11 +880,14 @@ services: APP_API_URL: ${APP_API_URL:-} AMPLITUDE_API_KEY: ${AMPLITUDE_API_KEY:-} NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN:-} + NEXT_PUBLIC_SOCKET_URL: ${NEXT_PUBLIC_SOCKET_URL:-ws://localhost} SENTRY_DSN: ${WEB_SENTRY_DSN:-} NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0} + EXPERIMENTAL_ENABLE_VINEXT: ${EXPERIMENTAL_ENABLE_VINEXT:-false} TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000} CSP_WHITELIST: ${CSP_WHITELIST:-} ALLOW_EMBED: ${ALLOW_EMBED:-false} + ALLOW_INLINE_STYLES: ${ALLOW_INLINE_STYLES:-false} ALLOW_UNSAFE_DATA_SCHEME: ${ALLOW_UNSAFE_DATA_SCHEME:-false} MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai} MARKETPLACE_URL: ${MARKETPLACE_URL:-https://marketplace.dify.ai} diff --git a/docker/nginx/conf.d/default.conf.template b/docker/nginx/conf.d/default.conf.template index 1d63c1b97d..94a748290f 100644 --- a/docker/nginx/conf.d/default.conf.template +++ b/docker/nginx/conf.d/default.conf.template @@ -14,6 +14,14 @@ server { include proxy.conf; } + location /socket.io/ { + proxy_pass http://api:5001; + include proxy.conf; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_cache_bypass $http_upgrade; + } + location /v1 { proxy_pass http://api:5001; include proxy.conf; diff --git a/e2e/AGENTS.md b/e2e/AGENTS.md index ae642768f5..e56aab20a7 100644 --- a/e2e/AGENTS.md +++ b/e2e/AGENTS.md @@ -165,3 +165,132 @@ Open the HTML report locally with: ```bash open cucumber-report/report.html ``` + +## Writing new scenarios + +### Workflow + +1. Create a `.feature` file under `features//` +1. Add step definitions under `features/step-definitions//` +1. Reuse existing steps from `common/` and other definition files before writing new ones +1. Run with `pnpm -C e2e e2e -- --tags @your-tag` to verify +1. Run `pnpm -C e2e check` before committing + +### Feature file conventions + +Tag every feature or scenario with a capability tag. Add auth tags only when they clarify intent or change the browser session behavior: + +```gherkin +@datasets @authenticated +Feature: Create dataset + Scenario: Create a new empty dataset + Given I am signed in as the default E2E admin + When I open the datasets page + ... +``` + +- Capability tags (`@apps`, `@auth`, `@datasets`, …) group related scenarios for selective runs +- Auth/session tags: + - default behavior — scenarios run with the shared authenticated storageState unless marked otherwise + - `@unauthenticated` — uses a clean BrowserContext with no cookies or storage + - `@authenticated` — optional intent tag for readability or selective runs; it does not currently change hook behavior on its own +- `@fresh` — only runs in `e2e:full` mode (requires uninitialized instance) +- `@skip` — excluded from all runs + +Keep scenarios short and declarative. Each step should describe **what** the user does, not **how** the UI works. + +### Step definition conventions + +```typescript +import type { DifyWorld } from '../../support/world' +import { Then, When } from '@cucumber/cucumber' +import { expect } from '@playwright/test' + +When('I open the datasets page', async function (this: DifyWorld) { + await this.getPage().goto('/datasets') +}) +``` + +Rules: + +- Always type `this` as `DifyWorld` for proper context access +- Use `async function` (not arrow functions — Cucumber binds `this`) +- One step = one user-visible action or one assertion +- Keep steps stateless across scenarios; use `DifyWorld` properties for in-scenario state + +### Locator priority + +Follow the Playwright recommended locator strategy, in order of preference: + +| Priority | Locator | Example | When to use | +| -------- | ------------------ | ----------------------------------------- | ----------------------------------------- | +| 1 | `getByRole` | `getByRole('button', { name: 'Create' })` | Default choice — accessible and resilient | +| 2 | `getByLabel` | `getByLabel('App name')` | Form inputs with visible labels | +| 3 | `getByPlaceholder` | `getByPlaceholder('Enter name')` | Inputs without visible labels | +| 4 | `getByText` | `getByText('Welcome')` | Static text content | +| 5 | `getByTestId` | `getByTestId('workflow-canvas')` | Only when no semantic locator works | + +Avoid raw CSS/XPath selectors. They break when the DOM structure changes. + +### Assertions + +Use `@playwright/test` `expect` — it auto-waits and retries until the condition is met or the timeout expires: + +```typescript +// URL assertion +await expect(page).toHaveURL(/\/datasets\/[a-f0-9-]+\/documents/) + +// Element visibility +await expect(page.getByRole('button', { name: 'Save' })).toBeVisible() + +// Element state +await expect(page.getByRole('button', { name: 'Submit' })).toBeEnabled() + +// Negation +await expect(page.getByText('Loading')).not.toBeVisible() +``` + +Do not use manual `waitForTimeout` or polling loops. If you need a longer wait for a specific assertion, pass `{ timeout: 30_000 }` to the assertion. + +### Cucumber expressions + +Use Cucumber expression parameter types to extract values from Gherkin steps: + +| Type | Pattern | Example step | +| ---------- | ------------- | ---------------------------------- | +| `{string}` | Quoted string | `I select the "Workflow" app type` | +| `{int}` | Integer | `I should see {int} items` | +| `{float}` | Decimal | `the progress is {float} percent` | +| `{word}` | Single word | `I click the {word} tab` | + +Prefer `{string}` for UI labels, names, and text content — it maps naturally to Gherkin's quoted values. + +### Scoping locators + +When the page has multiple similar elements, scope locators to a container: + +```typescript +When('I fill in the app name in the dialog', async function (this: DifyWorld) { + const dialog = this.getPage().getByRole('dialog') + await dialog.getByPlaceholder('Give your app a name').fill('My App') +}) +``` + +### Failure diagnostics + +The `After` hook automatically captures on failure: + +- Full-page screenshot (PNG) +- Page HTML dump +- Console errors and page errors + +Artifacts are saved to `cucumber-report/artifacts/` and attached to the HTML report. No extra code needed in step definitions. + +## Reusing existing steps + +Before writing a new step definition, inspect the existing step definition files first. Reuse a matching step when the wording and behavior already fit, and only add a new step when the scenario needs a genuinely new user action or assertion. Steps in `common/` are designed for broad reuse across all features. + +Or browse the step definition files directly: + +- `features/step-definitions/common/` — auth guards and navigation assertions shared by all features +- `features/step-definitions//` — domain-specific steps scoped to a single feature area diff --git a/e2e/README.md b/e2e/README.md index 9b4046eaff..feca0cb419 100644 --- a/e2e/README.md +++ b/e2e/README.md @@ -1,3 +1,5 @@ # E2E -Canonical documentation for this package lives in [AGENTS.md](./AGENTS.md). +Canonical documentation for this package lives in [AGENTS.md]. + +[AGENTS.md]: ./AGENTS.md diff --git a/e2e/features/apps/create-chatbot-app.feature b/e2e/features/apps/create-chatbot-app.feature new file mode 100644 index 0000000000..4f506e4f40 --- /dev/null +++ b/e2e/features/apps/create-chatbot-app.feature @@ -0,0 +1,11 @@ +@apps @authenticated +Feature: Create Chatbot app + Scenario: Create a new Chatbot app and redirect to the configuration page + Given I am signed in as the default E2E admin + When I open the apps console + And I start creating a blank app + And I expand the beginner app types + And I select the "Chatbot" app type + And I enter a unique E2E app name + And I confirm app creation + Then I should land on the app configuration page diff --git a/e2e/features/apps/create-workflow-app.feature b/e2e/features/apps/create-workflow-app.feature new file mode 100644 index 0000000000..b88d94d899 --- /dev/null +++ b/e2e/features/apps/create-workflow-app.feature @@ -0,0 +1,10 @@ +@apps @authenticated +Feature: Create Workflow app + Scenario: Create a new Workflow app and redirect to the workflow editor + Given I am signed in as the default E2E admin + When I open the apps console + And I start creating a blank app + And I select the "Workflow" app type + And I enter a unique E2E app name + And I confirm app creation + Then I should land on the workflow editor diff --git a/e2e/features/auth/sign-out.feature b/e2e/features/auth/sign-out.feature new file mode 100644 index 0000000000..9112f1220a --- /dev/null +++ b/e2e/features/auth/sign-out.feature @@ -0,0 +1,18 @@ +@auth @authenticated +Feature: Sign out + Scenario: Sign out from the apps console + Given I am signed in as the default E2E admin + When I open the apps console + And I open the account menu + And I sign out + Then I should be on the sign-in page + + Scenario: Redirect back to sign-in when reopening the apps console after signing out + Given I am signed in as the default E2E admin + When I open the apps console + And I open the account menu + And I sign out + Then I should be on the sign-in page + When I open the apps console + Then I should be redirected to the signin page + And I should see the "Sign in" button diff --git a/e2e/features/smoke/unauthenticated-entry.feature b/e2e/features/smoke/unauthenticated-entry.feature new file mode 100644 index 0000000000..a2783c1cba --- /dev/null +++ b/e2e/features/smoke/unauthenticated-entry.feature @@ -0,0 +1,7 @@ +@smoke @unauthenticated +Feature: Unauthenticated app console entry + Scenario: Redirect to the sign-in page when opening the apps console without logging in + Given I am not signed in + When I open the apps console + Then I should be redirected to the signin page + And I should see the "Sign in" button diff --git a/e2e/features/step-definitions/apps/create-app.steps.ts b/e2e/features/step-definitions/apps/create-app.steps.ts index b8e76c6f06..e444b97dc8 100644 --- a/e2e/features/step-definitions/apps/create-app.steps.ts +++ b/e2e/features/step-definitions/apps/create-app.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Then, When } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' When('I start creating a blank app', async function (this: DifyWorld) { const page = this.getPage() @@ -24,6 +24,30 @@ When('I confirm app creation', async function (this: DifyWorld) { await createButton.click() }) +When('I select the {string} app type', async function (this: DifyWorld, appType: string) { + const dialog = this.getPage().getByRole('dialog') + const appTypeTitle = dialog.getByText(appType, { exact: true }) + + await expect(appTypeTitle).toBeVisible() + await appTypeTitle.click() +}) + +When('I expand the beginner app types', async function (this: DifyWorld) { + const page = this.getPage() + const toggle = page.getByRole('button', { name: 'More basic app types' }) + + await expect(toggle).toBeVisible() + await toggle.click() +}) + Then('I should land on the app editor', async function (this: DifyWorld) { await expect(this.getPage()).toHaveURL(/\/app\/[^/]+\/(workflow|configuration)(?:\?.*)?$/) }) + +Then('I should land on the workflow editor', async function (this: DifyWorld) { + await expect(this.getPage()).toHaveURL(/\/app\/[^/]+\/workflow(?:\?.*)?$/) +}) + +Then('I should land on the app configuration page', async function (this: DifyWorld) { + await expect(this.getPage()).toHaveURL(/\/app\/[^/]+\/configuration(?:\?.*)?$/) +}) diff --git a/e2e/features/step-definitions/auth/sign-out.steps.ts b/e2e/features/step-definitions/auth/sign-out.steps.ts new file mode 100644 index 0000000000..0cc5f76ccc --- /dev/null +++ b/e2e/features/step-definitions/auth/sign-out.steps.ts @@ -0,0 +1,25 @@ +import type { DifyWorld } from '../../support/world' +import { Then, When } from '@cucumber/cucumber' +import { expect } from '@playwright/test' + +When('I open the account menu', async function (this: DifyWorld) { + const page = this.getPage() + const trigger = page.getByRole('button', { name: 'Account' }) + + await expect(trigger).toBeVisible() + await trigger.click() +}) + +When('I sign out', async function (this: DifyWorld) { + const page = this.getPage() + + await expect(page.getByText('Log out')).toBeVisible() + await page.getByText('Log out').click() +}) + +Then('I should be on the sign-in page', async function (this: DifyWorld) { + await expect(this.getPage()).toHaveURL(/\/signin/) + await expect(this.getPage().getByRole('button', { name: /^Sign in$/i })).toBeVisible({ + timeout: 30_000, + }) +}) diff --git a/e2e/features/step-definitions/common/auth.steps.ts b/e2e/features/step-definitions/common/auth.steps.ts index bf03c2d8f4..67c18dfe6c 100644 --- a/e2e/features/step-definitions/common/auth.steps.ts +++ b/e2e/features/step-definitions/common/auth.steps.ts @@ -1,5 +1,5 @@ -import { Given } from '@cucumber/cucumber' import type { DifyWorld } from '../../support/world' +import { Given } from '@cucumber/cucumber' Given('I am signed in as the default E2E admin', async function (this: DifyWorld) { const session = await this.getAuthSession() @@ -9,3 +9,10 @@ Given('I am signed in as the default E2E admin', async function (this: DifyWorld 'text/plain', ) }) + +Given('I am not signed in', async function (this: DifyWorld) { + this.attach( + 'Using a clean browser context without the shared authenticated storage state.', + 'text/plain', + ) +}) diff --git a/e2e/features/step-definitions/common/navigation.steps.ts b/e2e/features/step-definitions/common/navigation.steps.ts index b18ff035fa..9bec34c224 100644 --- a/e2e/features/step-definitions/common/navigation.steps.ts +++ b/e2e/features/step-definitions/common/navigation.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Then, When } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' When('I open the apps console', async function (this: DifyWorld) { await this.getPage().goto('/apps') @@ -10,6 +10,10 @@ Then('I should stay on the apps console', async function (this: DifyWorld) { await expect(this.getPage()).toHaveURL(/\/apps(?:\?.*)?$/) }) +Then('I should be redirected to the signin page', async function (this: DifyWorld) { + await expect(this.getPage()).toHaveURL(/\/signin(?:\?.*)?$/) +}) + Then('I should see the {string} button', async function (this: DifyWorld, label: string) { await expect(this.getPage().getByRole('button', { name: label })).toBeVisible() }) diff --git a/e2e/features/step-definitions/smoke/install.steps.ts b/e2e/features/step-definitions/smoke/install.steps.ts index 857e01a971..3f2f8b5199 100644 --- a/e2e/features/step-definitions/smoke/install.steps.ts +++ b/e2e/features/step-definitions/smoke/install.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Given } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' Given( 'the last authentication bootstrap came from a fresh install', diff --git a/e2e/features/support/hooks.ts b/e2e/features/support/hooks.ts index a6862d79f5..33b337fb93 100644 --- a/e2e/features/support/hooks.ts +++ b/e2e/features/support/hooks.ts @@ -1,11 +1,12 @@ -import { After, AfterAll, Before, BeforeAll, Status, setDefaultTimeout } from '@cucumber/cucumber' -import { chromium, type Browser } from '@playwright/test' +import type { Browser } from '@playwright/test' +import type { DifyWorld } from './world' import { mkdir, writeFile } from 'node:fs/promises' import path from 'node:path' import { fileURLToPath } from 'node:url' -import { ensureAuthenticatedState } from '../../fixtures/auth' +import { After, AfterAll, Before, BeforeAll, setDefaultTimeout, Status } from '@cucumber/cucumber' +import { chromium } from '@playwright/test' +import { AUTH_BOOTSTRAP_TIMEOUT_MS, ensureAuthenticatedState } from '../../fixtures/auth' import { baseURL, cucumberHeadless, cucumberSlowMo } from '../../test-env' -import type { DifyWorld } from './world' const e2eRoot = fileURLToPath(new URL('../..', import.meta.url)) const artifactsDir = path.join(e2eRoot, 'cucumber-report', 'artifacts') @@ -15,7 +16,7 @@ let browser: Browser | undefined setDefaultTimeout(60_000) const sanitizeForPath = (value: string) => - value.replaceAll(/[^a-zA-Z0-9_-]+/g, '-').replaceAll(/^-+|-+$/g, '') + value.replaceAll(/[^\w-]+/g, '-').replaceAll(/^-+|-+$/g, '') const writeArtifact = async ( scenarioName: string, @@ -31,7 +32,7 @@ const writeArtifact = async ( return artifactPath } -BeforeAll(async () => { +BeforeAll({ timeout: AUTH_BOOTSTRAP_TIMEOUT_MS }, async () => { await mkdir(artifactsDir, { recursive: true }) browser = await chromium.launch({ @@ -44,12 +45,18 @@ BeforeAll(async () => { }) Before(async function (this: DifyWorld, { pickle }) { - if (!browser) throw new Error('Shared Playwright browser is not available.') + if (!browser) + throw new Error('Shared Playwright browser is not available.') + + const isUnauthenticatedScenario = pickle.tags.some(tag => tag.name === '@unauthenticated') + + if (isUnauthenticatedScenario) + await this.startUnauthenticatedSession(browser) + else await this.startAuthenticatedSession(browser) - await this.startAuthenticatedSession(browser) this.scenarioStartedAt = Date.now() - const tags = pickle.tags.map((tag) => tag.name).join(' ') + const tags = pickle.tags.map(tag => tag.name).join(' ') console.log(`[e2e] start ${pickle.name}${tags ? ` ${tags}` : ''}`) }) diff --git a/e2e/features/support/world.ts b/e2e/features/support/world.ts index 15ab8daf16..0e9c4b9c84 100644 --- a/e2e/features/support/world.ts +++ b/e2e/features/support/world.ts @@ -1,9 +1,11 @@ -import { type IWorldOptions, World, setWorldConstructor } from '@cucumber/cucumber' +import type { IWorldOptions } from '@cucumber/cucumber' import type { Browser, BrowserContext, ConsoleMessage, Page } from '@playwright/test' +import type { AuthSessionMetadata } from '../../fixtures/auth' +import { setWorldConstructor, World } from '@cucumber/cucumber' import { + authStatePath, readAuthSessionMetadata, - type AuthSessionMetadata, } from '../../fixtures/auth' import { baseURL, defaultLocale } from '../../test-env' @@ -25,27 +27,37 @@ export class DifyWorld extends World { this.pageErrors = [] } - async startAuthenticatedSession(browser: Browser) { + async startSession(browser: Browser, authenticated: boolean) { this.resetScenarioState() this.context = await browser.newContext({ baseURL, locale: defaultLocale, - storageState: authStatePath, + ...(authenticated ? { storageState: authStatePath } : {}), }) this.context.setDefaultTimeout(30_000) this.page = await this.context.newPage() this.page.setDefaultTimeout(30_000) this.page.on('console', (message: ConsoleMessage) => { - if (message.type() === 'error') this.consoleErrors.push(message.text()) + if (message.type() === 'error') + this.consoleErrors.push(message.text()) }) this.page.on('pageerror', (error) => { this.pageErrors.push(error.message) }) } + async startAuthenticatedSession(browser: Browser) { + await this.startSession(browser, true) + } + + async startUnauthenticatedSession(browser: Browser) { + await this.startSession(browser, false) + } + getPage() { - if (!this.page) throw new Error('Playwright page has not been initialized for this scenario.') + if (!this.page) + throw new Error('Playwright page has not been initialized for this scenario.') return this.page } diff --git a/e2e/fixtures/auth.ts b/e2e/fixtures/auth.ts index 853bfff5ed..cc54a6d47b 100644 --- a/e2e/fixtures/auth.ts +++ b/e2e/fixtures/auth.ts @@ -1,8 +1,8 @@ import type { Browser, Page } from '@playwright/test' -import { expect } from '@playwright/test' import { mkdir, readFile, writeFile } from 'node:fs/promises' import path from 'node:path' import { fileURLToPath } from 'node:url' +import { expect } from '@playwright/test' import { defaultBaseURL, defaultLocale } from '../test-env' export type AuthSessionMetadata = { @@ -12,7 +12,7 @@ export type AuthSessionMetadata = { usedInitPassword: boolean } -const WAIT_TIMEOUT_MS = 120_000 +export const AUTH_BOOTSTRAP_TIMEOUT_MS = 120_000 const e2eRoot = fileURLToPath(new URL('..', import.meta.url)) export const authDir = path.join(e2eRoot, '.auth') @@ -39,40 +39,56 @@ const escapeRegex = (value: string) => value.replaceAll(/[.*+?^${}()|[\]\\]/g, ' const appURL = (baseURL: string, pathname: string) => new URL(pathname, baseURL).toString() -const waitForPageState = async (page: Page) => { +type AuthPageState = 'install' | 'login' | 'init' + +const getRemainingTimeout = (deadline: number) => Math.max(deadline - Date.now(), 1) + +const waitForPageState = async (page: Page, deadline: number): Promise => { const installHeading = page.getByRole('heading', { name: 'Setting up an admin account' }) const signInButton = page.getByRole('button', { name: 'Sign in' }) const initPasswordField = page.getByLabel('Admin initialization password') - const deadline = Date.now() + WAIT_TIMEOUT_MS - - while (Date.now() < deadline) { - if (await installHeading.isVisible().catch(() => false)) return 'install' as const - if (await signInButton.isVisible().catch(() => false)) return 'login' as const - if (await initPasswordField.isVisible().catch(() => false)) return 'init' as const - - await page.waitForTimeout(1_000) + try { + return await Promise.any([ + installHeading + .waitFor({ state: 'visible', timeout: getRemainingTimeout(deadline) }) + .then(() => 'install'), + signInButton + .waitFor({ state: 'visible', timeout: getRemainingTimeout(deadline) }) + .then(() => 'login'), + initPasswordField + .waitFor({ state: 'visible', timeout: getRemainingTimeout(deadline) }) + .then(() => 'init'), + ]) + } + catch { + throw new Error(`Unable to determine auth page state for ${page.url()}`) } - - throw new Error(`Unable to determine auth page state for ${page.url()}`) } -const completeInitPasswordIfNeeded = async (page: Page) => { +const completeInitPasswordIfNeeded = async (page: Page, deadline: number) => { const initPasswordField = page.getByLabel('Admin initialization password') - if (!(await initPasswordField.isVisible({ timeout: 3_000 }).catch(() => false))) return false + + const needsInitPassword = await initPasswordField + .waitFor({ state: 'visible', timeout: Math.min(getRemainingTimeout(deadline), 3_000) }) + .then(() => true) + .catch(() => false) + + if (!needsInitPassword) + return false await initPasswordField.fill(initPassword) await page.getByRole('button', { name: 'Validate' }).click() await expect(page.getByRole('heading', { name: 'Setting up an admin account' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) return true } -const completeInstall = async (page: Page, baseURL: string) => { +const completeInstall = async (page: Page, baseURL: string, deadline: number) => { await expect(page.getByRole('heading', { name: 'Setting up an admin account' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) await page.getByLabel('Email address').fill(adminCredentials.email) @@ -81,13 +97,13 @@ const completeInstall = async (page: Page, baseURL: string) => { await page.getByRole('button', { name: 'Set up' }).click() await expect(page).toHaveURL(new RegExp(`^${escapeRegex(baseURL)}/apps(?:\\?.*)?$`), { - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) } -const completeLogin = async (page: Page, baseURL: string) => { +const completeLogin = async (page: Page, baseURL: string, deadline: number) => { await expect(page.getByRole('button', { name: 'Sign in' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) await page.getByLabel('Email address').fill(adminCredentials.email) @@ -95,12 +111,13 @@ const completeLogin = async (page: Page, baseURL: string) => { await page.getByRole('button', { name: 'Sign in' }).click() await expect(page).toHaveURL(new RegExp(`^${escapeRegex(baseURL)}/apps(?:\\?.*)?$`), { - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) } export const ensureAuthenticatedState = async (browser: Browser, configuredBaseURL?: string) => { const baseURL = resolveBaseURL(configuredBaseURL) + const deadline = Date.now() + AUTH_BOOTSTRAP_TIMEOUT_MS await mkdir(authDir, { recursive: true }) @@ -111,25 +128,29 @@ export const ensureAuthenticatedState = async (browser: Browser, configuredBaseU const page = await context.newPage() try { - await page.goto(appURL(baseURL, '/install'), { waitUntil: 'networkidle' }) + await page.goto(appURL(baseURL, '/install'), { + timeout: getRemainingTimeout(deadline), + waitUntil: 'domcontentloaded', + }) - let usedInitPassword = await completeInitPasswordIfNeeded(page) - let pageState = await waitForPageState(page) + let usedInitPassword = await completeInitPasswordIfNeeded(page, deadline) + let pageState = await waitForPageState(page, deadline) while (pageState === 'init') { - const completedInitPassword = await completeInitPasswordIfNeeded(page) + const completedInitPassword = await completeInitPasswordIfNeeded(page, deadline) if (!completedInitPassword) throw new Error(`Unable to validate initialization password for ${page.url()}`) usedInitPassword = true - pageState = await waitForPageState(page) + pageState = await waitForPageState(page, deadline) } - if (pageState === 'install') await completeInstall(page, baseURL) - else await completeLogin(page, baseURL) + if (pageState === 'install') + await completeInstall(page, baseURL, deadline) + else await completeLogin(page, baseURL, deadline) await expect(page.getByRole('button', { name: 'Create from Blank' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) await context.storageState({ path: authStatePath }) @@ -142,7 +163,8 @@ export const ensureAuthenticatedState = async (browser: Browser, configuredBaseU } await writeFile(authMetadataPath, `${JSON.stringify(metadata, null, 2)}\n`, 'utf8') - } finally { + } + finally { await context.close() } } diff --git a/e2e/package.json b/e2e/package.json index 0ee2afff7f..94fc857c0b 100644 --- a/e2e/package.json +++ b/e2e/package.json @@ -1,7 +1,7 @@ { "name": "dify-e2e", - "private": true, "type": "module", + "private": true, "scripts": { "check": "vp check --fix", "e2e": "tsx ./scripts/run-cucumber.ts", @@ -11,14 +11,17 @@ "e2e:install": "playwright install --with-deps chromium", "e2e:middleware:down": "tsx ./scripts/setup.ts middleware-down", "e2e:middleware:up": "tsx ./scripts/setup.ts middleware-up", - "e2e:reset": "tsx ./scripts/setup.ts reset" + "e2e:reset": "tsx ./scripts/setup.ts reset", + "type-check": "tsc" }, "devDependencies": { "@cucumber/cucumber": "catalog:", + "@dify/tsconfig": "workspace:*", "@playwright/test": "catalog:", "@types/node": "catalog:", "tsx": "catalog:", "typescript": "catalog:", + "vite": "catalog:", "vite-plus": "catalog:" } } diff --git a/e2e/scripts/common.ts b/e2e/scripts/common.ts index bb82121079..ea6c897b2d 100644 --- a/e2e/scripts/common.ts +++ b/e2e/scripts/common.ts @@ -1,4 +1,6 @@ -import { spawn, type ChildProcess } from 'node:child_process' +import type { ChildProcess } from 'node:child_process' +import { spawn } from 'node:child_process' +import { createHash } from 'node:crypto' import { access, copyFile, readFile, writeFile } from 'node:fs/promises' import net from 'node:net' import path from 'node:path' @@ -38,12 +40,17 @@ export const middlewareEnvExampleFile = path.join(dockerDir, 'middleware.env.exa export const webEnvLocalFile = path.join(webDir, '.env.local') export const webEnvExampleFile = path.join(webDir, '.env.example') export const apiEnvExampleFile = path.join(apiDir, 'tests', 'integration_tests', '.env.example') +export const e2eWebEnvOverrides = { + NEXT_PUBLIC_API_PREFIX: 'http://127.0.0.1:5001/console/api', + NEXT_PUBLIC_PUBLIC_API_PREFIX: 'http://127.0.0.1:5001/api', +} satisfies Record const formatCommand = (command: string, args: string[]) => [command, ...args].join(' ') export const isMainModule = (metaUrl: string) => { const entrypoint = process.argv[1] - if (!entrypoint) return false + if (!entrypoint) + return false return pathToFileURL(entrypoint).href === metaUrl } @@ -102,7 +109,8 @@ export const runCommandOrThrow = async (options: RunCommandOptions) => { const forwardSignalsToChild = (childProcess: ChildProcess) => { const handleSignal = (signal: NodeJS.Signals) => { - if (childProcess.exitCode === null) childProcess.kill(signal) + if (childProcess.exitCode === null) + childProcess.kill(signal) } const onSigint = () => handleSignal('SIGINT') @@ -147,7 +155,8 @@ export const runForegroundProcess = async ({ export const ensureFileExists = async (filePath: string, exampleFilePath: string) => { try { await access(filePath) - } catch { + } + catch { await copyFile(exampleFilePath, filePath) } } @@ -157,38 +166,42 @@ export const ensureLineInFile = async (filePath: string, line: string) => { const lines = fileContent.split(/\r?\n/) const assignmentPrefix = line.includes('=') ? `${line.slice(0, line.indexOf('='))}=` : null - if (lines.includes(line)) return + if (lines.includes(line)) + return - if (assignmentPrefix && lines.some((existingLine) => existingLine.startsWith(assignmentPrefix))) + if (assignmentPrefix && lines.some(existingLine => existingLine.startsWith(assignmentPrefix))) return const normalizedContent = fileContent.endsWith('\n') ? fileContent : `${fileContent}\n` await writeFile(filePath, `${normalizedContent}${line}\n`, 'utf8') } -export const ensureWebEnvLocal = async () => { - await ensureFileExists(webEnvLocalFile, webEnvExampleFile) - - const fileContent = await readFile(webEnvLocalFile, 'utf8') - const nextContent = fileContent.replaceAll('http://localhost:5001', 'http://127.0.0.1:5001') - - if (nextContent !== fileContent) await writeFile(webEnvLocalFile, nextContent, 'utf8') +export const getWebEnvLocalHash = async () => { + const fileContent = await readFile(webEnvLocalFile, 'utf8').catch(() => '') + return createHash('sha256') + .update( + JSON.stringify({ + envLocal: fileContent, + overrides: e2eWebEnvOverrides, + }), + ) + .digest('hex') } export const readSimpleDotenv = async (filePath: string) => { const fileContent = await readFile(filePath, 'utf8') const entries = fileContent .split(/\r?\n/) - .map((line) => line.trim()) - .filter((line) => line && !line.startsWith('#')) + .map(line => line.trim()) + .filter(line => line && !line.startsWith('#')) .map<[string, string]>((line) => { const separatorIndex = line.indexOf('=') const key = separatorIndex === -1 ? line : line.slice(0, separatorIndex).trim() const rawValue = separatorIndex === -1 ? '' : line.slice(separatorIndex + 1).trim() if ( - (rawValue.startsWith('"') && rawValue.endsWith('"')) || - (rawValue.startsWith("'") && rawValue.endsWith("'")) + (rawValue.startsWith('"') && rawValue.endsWith('"')) + || (rawValue.startsWith('\'') && rawValue.endsWith('\'')) ) { return [key, rawValue.slice(1, -1)] } @@ -213,7 +226,8 @@ export const waitForCondition = async ({ const deadline = Date.now() + timeoutMs while (Date.now() < deadline) { - if (await check()) return + if (await check()) + return await sleep(intervalMs) } diff --git a/e2e/scripts/run-cucumber.ts b/e2e/scripts/run-cucumber.ts index 39e9157916..d7778e65e2 100644 --- a/e2e/scripts/run-cucumber.ts +++ b/e2e/scripts/run-cucumber.ts @@ -1,7 +1,7 @@ import { mkdir, rm } from 'node:fs/promises' import path from 'node:path' +import { startLoggedProcess, stopManagedProcess, waitForUrl } from '../support/process' import { startWebServer, stopWebServer } from '../support/web-server' -import { waitForUrl, startLoggedProcess, stopManagedProcess } from '../support/process' import { apiURL, baseURL, reuseExistingWebServer } from '../test-env' import { e2eDir, isMainModule, runCommand } from './common' import { resetState, startMiddleware, stopMiddleware } from './setup' @@ -17,12 +17,10 @@ const parseArgs = (argv: string[]): RunOptions => { let headed = false const forwardArgs: string[] = [] - for (let index = 0; index < argv.length; index += 1) { - const arg = argv[index] - + for (const [index, arg] of argv.entries()) { if (arg === '--') { forwardArgs.push(...argv.slice(index + 1)) - break + return { forwardArgs, full, headed } } if (arg === '--full') { @@ -38,24 +36,22 @@ const parseArgs = (argv: string[]): RunOptions => { forwardArgs.push(arg) } - return { - forwardArgs, - full, - headed, - } + return { forwardArgs, full, headed } } const hasCustomTags = (forwardArgs: string[]) => - forwardArgs.some((arg) => arg === '--tags' || arg.startsWith('--tags=')) + forwardArgs.some(arg => arg === '--tags' || arg.startsWith('--tags=')) const main = async () => { const { forwardArgs, full, headed } = parseArgs(process.argv.slice(2)) const startMiddlewareForRun = full const resetStateForRun = full - if (resetStateForRun) await resetState() + if (resetStateForRun) + await resetState() - if (startMiddlewareForRun) await startMiddleware() + if (startMiddlewareForRun) + await startMiddleware() const cucumberReportDir = path.join(e2eDir, 'cucumber-report') const logDir = path.join(e2eDir, '.logs') @@ -81,7 +77,8 @@ const main = async () => { if (startMiddlewareForRun) { try { await stopMiddleware() - } catch { + } + catch { // Cleanup should continue even if middleware shutdown fails. } } @@ -103,7 +100,8 @@ const main = async () => { try { try { await waitForUrl(`${apiURL}/health`, 180_000, 1_000) - } catch { + } + catch { throw new Error(`API did not become ready at ${apiURL}/health.`) } @@ -139,7 +137,8 @@ const main = async () => { }) process.exitCode = result.exitCode - } finally { + } + finally { process.off('SIGINT', onTerminate) process.off('SIGTERM', onTerminate) await cleanup() diff --git a/e2e/scripts/setup.ts b/e2e/scripts/setup.ts index 6f38598df4..ba4c011b04 100644 --- a/e2e/scripts/setup.ts +++ b/e2e/scripts/setup.ts @@ -1,4 +1,4 @@ -import { access, mkdir, rm } from 'node:fs/promises' +import { access, mkdir, readFile, rm, writeFile } from 'node:fs/promises' import path from 'node:path' import { waitForUrl } from '../support/process' import { @@ -6,9 +6,10 @@ import { apiEnvExampleFile, dockerDir, e2eDir, + e2eWebEnvOverrides, ensureFileExists, ensureLineInFile, - ensureWebEnvLocal, + getWebEnvLocalHash, isMainModule, isTcpPortReachable, middlewareComposeFile, @@ -23,6 +24,7 @@ import { } from './common' const buildIdPath = path.join(webDir, '.next', 'BUILD_ID') +const webBuildEnvStampPath = path.join(webDir, '.next', 'e2e-web-env.sha256') const middlewareDataPaths = [ path.join(dockerDir, 'volumes', 'db', 'data'), @@ -77,7 +79,8 @@ const getContainerHealth = async (containerId: string) => { stdio: 'pipe', }) - if (result.exitCode !== 0) return '' + if (result.exitCode !== 0) + return '' return result.stdout.trim() } @@ -103,34 +106,56 @@ const waitForDependency = async ({ try { await wait() - } catch (error) { + } + catch (error) { await printComposeLogs(services) throw error } } export const ensureWebBuild = async () => { - await ensureWebEnvLocal() + const envHash = await getWebEnvLocalHash() + const buildEnv = { + ...e2eWebEnvOverrides, + } if (process.env.E2E_FORCE_WEB_BUILD === '1') { await runCommandOrThrow({ command: 'pnpm', args: ['run', 'build'], cwd: webDir, + env: buildEnv, }) + await writeFile(webBuildEnvStampPath, `${envHash}\n`, 'utf8') return } try { - await access(buildIdPath) - console.log('Reusing existing web build artifact.') - } catch { - await runCommandOrThrow({ - command: 'pnpm', - args: ['run', 'build'], - cwd: webDir, - }) + const [buildExists, previousEnvHash] = await Promise.all([ + access(buildIdPath) + .then(() => true) + .catch(() => false), + readFile(webBuildEnvStampPath, 'utf8') + .then(value => value.trim()) + .catch(() => ''), + ]) + + if (buildExists && previousEnvHash === envHash) { + console.log('Reusing existing web build artifact.') + return + } } + catch { + // Fall through to rebuild when the existing build cannot be verified. + } + + await runCommandOrThrow({ + command: 'pnpm', + args: ['run', 'build'], + cwd: webDir, + env: buildEnv, + }) + await writeFile(webBuildEnvStampPath, `${envHash}\n`, 'utf8') } export const startWeb = async () => { @@ -141,6 +166,7 @@ export const startWeb = async () => { args: ['run', 'start'], cwd: webDir, env: { + ...e2eWebEnvOverrides, HOSTNAME: '127.0.0.1', PORT: '3000', }, @@ -152,14 +178,25 @@ export const startApi = async () => { await runCommandOrThrow({ command: 'uv', - args: ['run', '--project', '.', 'flask', 'upgrade-db'], + args: ['run', '--project', '.', '--no-sync', 'flask', 'upgrade-db'], cwd: apiDir, env, }) await runForegroundProcess({ command: 'uv', - args: ['run', '--project', '.', 'flask', 'run', '--host', '127.0.0.1', '--port', '5001'], + args: [ + 'run', + '--project', + '.', + '--no-sync', + 'flask', + 'run', + '--host', + '127.0.0.1', + '--port', + '5001', + ], cwd: apiDir, env, }) @@ -177,7 +214,8 @@ export const resetState = async () => { console.log('Stopping middleware services...') try { await stopMiddleware() - } catch { + } + catch { // Reset should continue even if middleware is already stopped. } @@ -191,7 +229,7 @@ export const resetState = async () => { console.log('Removing E2E local state...') await Promise.all( - e2eStatePaths.map((targetPath) => rm(targetPath, { force: true, recursive: true })), + e2eStatePaths.map(targetPath => rm(targetPath, { force: true, recursive: true })), ) console.log('E2E state reset complete.') diff --git a/e2e/support/process.ts b/e2e/support/process.ts index 96273ef931..4de1161b08 100644 --- a/e2e/support/process.ts +++ b/e2e/support/process.ts @@ -1,6 +1,7 @@ import type { ChildProcess } from 'node:child_process' +import type { WriteStream } from 'node:fs' import { spawn } from 'node:child_process' -import { createWriteStream, type WriteStream } from 'node:fs' +import { createWriteStream } from 'node:fs' import { mkdir } from 'node:fs/promises' import net from 'node:net' import { dirname } from 'node:path' @@ -63,11 +64,14 @@ export const waitForUrl = async ( const response = await fetch(url, { signal: controller.signal, }) - if (response.ok) return - } finally { + if (response.ok) + return + } + finally { clearTimeout(timeout) } - } catch { + } + catch { // Keep polling until timeout. } @@ -138,7 +142,8 @@ const waitForProcessExit = (childProcess: ChildProcess, timeoutMs: number) => const signalManagedProcess = (childProcess: ChildProcess, signal: NodeJS.Signals) => { const { pid } = childProcess - if (!pid) return + if (!pid) + return try { if (process.platform !== 'win32') { @@ -147,13 +152,15 @@ const signalManagedProcess = (childProcess: ChildProcess, signal: NodeJS.Signals } childProcess.kill(signal) - } catch { + } + catch { // Best-effort shutdown. Cleanup continues even when the process is already gone. } } export const stopManagedProcess = async (managedProcess?: ManagedProcess) => { - if (!managedProcess) return + if (!managedProcess) + return const { childProcess, logStream } = managedProcess diff --git a/e2e/support/web-server.ts b/e2e/support/web-server.ts index ad5d5d916a..819f7effe3 100644 --- a/e2e/support/web-server.ts +++ b/e2e/support/web-server.ts @@ -34,7 +34,8 @@ export const startWebServer = async ({ }: WebServerStartOptions) => { const { host, port } = getUrlHostAndPort(baseURL) - if (reuseExistingServer && (await isPortReachable(host, port))) return + if (reuseExistingServer && (await isPortReachable(host, port))) + return activeProcess = await startLoggedProcess({ command, @@ -49,7 +50,8 @@ export const startWebServer = async ({ startupError = error }) activeProcess.childProcess.once('exit', (code, signal) => { - if (startupError) return + if (startupError) + return startupError = new Error( `Web server exited before readiness (code: ${code ?? 'unknown'}, signal: ${signal ?? 'none'}).`, @@ -67,7 +69,8 @@ export const startWebServer = async ({ try { await waitForUrl(baseURL, 1_000, 250, 1_000) return - } catch { + } + catch { // Continue polling until timeout or child exit. } } diff --git a/e2e/tsconfig.json b/e2e/tsconfig.json index 3976c12b66..3e72e790cf 100644 --- a/e2e/tsconfig.json +++ b/e2e/tsconfig.json @@ -1,16 +1,9 @@ { + "extends": "@dify/tsconfig/node.json", "compilerOptions": { - "target": "ES2023", "lib": ["ES2023", "DOM"], - "module": "ESNext", - "moduleResolution": "Bundler", - "allowJs": false, - "resolveJsonModule": true, - "noEmit": true, - "strict": true, - "skipLibCheck": true, "types": ["node", "@playwright/test", "@cucumber/cucumber"], - "isolatedModules": true, + "allowJs": false, "verbatimModuleSyntax": true }, "include": ["./**/*.ts"], diff --git a/e2e/vite.config.ts b/e2e/vite.config.ts index 98400d5b9b..2329b534b4 100644 --- a/e2e/vite.config.ts +++ b/e2e/vite.config.ts @@ -1,15 +1,5 @@ import { defineConfig } from 'vite-plus' export default defineConfig({ - lint: { - options: { - typeAware: true, - typeCheck: true, - denyWarnings: true, - }, - }, - fmt: { - singleQuote: true, - semi: false, - }, + }) diff --git a/eslint-suppressions.json b/eslint-suppressions.json new file mode 100644 index 0000000000..e4831c4e98 --- /dev/null +++ b/eslint-suppressions.json @@ -0,0 +1,7000 @@ +{ + "e2e/features/support/hooks.ts": { + "no-console": { + "count": 3 + }, + "node/prefer-global/buffer": { + "count": 1 + } + }, + "e2e/scripts/common.ts": { + "node/prefer-global/buffer": { + "count": 2 + } + }, + "e2e/support/process.ts": { + "ts/no-use-before-define": { + "count": 2 + } + }, + "packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts": { + "no-console": { + "count": 11 + } + }, + "packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts": { + "no-console": { + "count": 1 + } + }, + "packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts": { + "no-console": { + "count": 9 + } + }, + "web/.storybook/main.ts": { + "storybook/no-uninstalled-addons": { + "count": 3 + } + }, + "web/__mocks__/zustand.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/__tests__/document-detail-navigation-fix.test.tsx": { + "no-console": { + "count": 10 + } + }, + "web/__tests__/document-list-sorting.test.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/__tests__/embedded-user-id-auth.test.tsx": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/__tests__/embedded-user-id-store.test.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/__tests__/goto-anything/command-selector.test.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/__tests__/i18n-upload-features.test.ts": { + "no-console": { + "count": 3 + } + }, + "web/__tests__/navigation-utils.test.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/__tests__/plugin-tool-workflow-error.test.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/__tests__/real-browser-flicker.test.tsx": { + "no-console": { + "count": 16 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/__tests__/unified-tags-logic.test.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/__tests__/workflow-onboarding-integration.test.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/layout-main.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/long-time-range-picker.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/time-range-picker/range-selector.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/__tests__/svg-attribute-error-reproduction.spec.tsx": { + "no-console": { + "count": 19 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/config-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/config-popup.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-config-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/type.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/(commonLayout)/datasets/(datasetDetailLayout)/[datasetId]/layout-main.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(humanInputLayout)/form/[token]/form.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/(shareLayout)/components/splash.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/(shareLayout)/webapp-reset-password/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(shareLayout)/webapp-signin/components/mail-and-password-auth.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/account/(commonLayout)/account-page/email-change-modal.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/account/(commonLayout)/account-page/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/account/(commonLayout)/delete-account/components/feed-back.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/account/(commonLayout)/delete-account/components/verify-email.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/account/(commonLayout)/delete-account/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/account/oauth/authorize/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/account/oauth/authorize/page.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app-sidebar/app-info/app-operations.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/app-sidebar/app-sidebar-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app-sidebar/basic.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app-sidebar/dataset-info/dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app-sidebar/dataset-sidebar-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app-sidebar/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app-sidebar/toggle-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/annotation/add-annotation-modal/edit-item/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/annotation/batch-add-annotation-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/annotation/edit-annotation-modal/edit-item/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/annotation/header-opts/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/annotation/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/annotation/type.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/app/annotation/view-annotation-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 5 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/app-access-control/specific-groups-or-members.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/app-publisher/features-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/app/app-publisher/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/app-publisher/publish-with-multiple-model.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/app-publisher/version-info-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/base/var-highlight/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-prompt/advanced-prompt-input.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config-prompt/conversation-history/edit-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-prompt/simple-prompt-input.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/configuration/config-var/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config-var/config-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/app/configuration/config-var/config-modal/type-select.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-var/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-var/select-var-type.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-vision/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-vision/param-config-content.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/agent/agent-setting/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/agent/agent-setting/item-panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/agent/agent-tools/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/app/configuration/config/agent/agent-tools/setting-built-in-tool.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/app/configuration/config/assistant-type-picker/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/get-automatic-res.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/instruction-editor.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config/automatic/prompt-res-in-workflow.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/prompt-res.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/version-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/code-generator/get-code-generator-res.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config/config-audio.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/config-document.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/context-var/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/context-var/var-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/params-config/__tests__/config-content.spec.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/params-config/config-content.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/params-config/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/select-dataset/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/settings-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/configuration/dataset-config/settings-modal/retrieval-section.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/chat-user-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/chat-item.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/debug-item.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/model-parameter-trigger.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/text-generation-item.tsx": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/app/configuration/debug/debug-with-single-model/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/configuration/debug/hooks.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/configuration/debug/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/app/configuration/debug/types.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/prompt-value-panel/index.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/app/configuration/prompt-value-panel/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/create-app-dialog/app-list/sidebar.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/create-app-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/create-from-dsl-modal/dsl-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/create-from-dsl-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/app/duplicate-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/log/filter.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/log/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/log/list.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 6 + }, + "style/multiline-ternary": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/log/model-info.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/overview/app-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/overview/customize/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/overview/embedded/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/overview/settings/index.tsx": { + "no-restricted-imports": { + "count": 3 + }, + "react/set-state-in-effect": { + "count": 3 + }, + "regexp/no-unused-capturing-group": { + "count": 1 + } + }, + "web/app/components/app/overview/trigger-card.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/switch-app-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/text-generate/item/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/text-generate/item/result-tab.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/workflow-log/detail.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/workflow-log/filter.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/workflow-log/list.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/app/workflow-log/trigger-by-display.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/apps/app-card.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/apps/list.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + }, + "react/unsupported-syntax": { + "count": 2 + } + }, + "web/app/components/apps/new-app-card.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/action-button/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/detail.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/result.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/agent-log-modal/tool-call.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/amplitude/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/amplitude/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/app-icon-picker/ImageInput.tsx": { + "react/no-create-ref": { + "count": 1 + } + }, + "web/app/components/base/audio-btn/audio.ts": { + "node/prefer-global/buffer": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/audio-btn/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/audio-gallery/AudioPlayer.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/auto-height-textarea/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/auto-height-textarea/index.tsx": { + "react-hooks/rules-of-hooks": { + "count": 1 + }, + "react/rules-of-hooks": { + "count": 1 + } + }, + "web/app/components/base/badge/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/base/block-input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/block-input/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "react/component-hook-factories": { + "count": 1 + }, + "react/no-nested-component-definitions": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/carousel/index.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/base/chat/chat-with-history/chat-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/chat-with-history/context.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/chat-with-history/header-in-mobile.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/chat/chat-with-history/header/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/chat/chat-with-history/hooks.tsx": { + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 18 + } + }, + "web/app/components/base/chat/chat-with-history/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat-with-history/inputs-form/content.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/chat-with-history/sidebar/operation.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat-with-history/sidebar/rename-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/agent-content.tsx": { + "style/multiline-ternary": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/human-input-content/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/index.tsx": { + "react/set-state-in-effect": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/operation.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/base/chat/chat/answer/workflow-process.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/chat-input-area/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/chat/check-input-forms-hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/citation/index.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/hooks.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 17 + } + }, + "web/app/components/base/chat/chat/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/type.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/chat/chat/utils.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/embedded-chatbot/chat-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/embedded-chatbot/context.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/embedded-chatbot/header/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/chat/embedded-chatbot/hooks.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 3 + }, + "react/set-state-in-effect": { + "count": 6 + } + }, + "web/app/components/base/chat/embedded-chatbot/inputs-form/content.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/types.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/chat/utils.ts": { + "ts/no-explicit-any": { + "count": 10 + } + }, + "web/app/components/base/checkbox/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chip/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/content-dialog/index.stories.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/copy-feedback/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/date-and-time-picker/date-picker/index.tsx": { + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/base/date-and-time-picker/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + } + }, + "web/app/components/base/date-and-time-picker/time-picker/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/date-and-time-picker/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/base/date-and-time-picker/utils/dayjs.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/dialog/index.stories.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/drawer-plus/index.stories.tsx": { + "react/component-hook-factories": { + "count": 1 + } + }, + "web/app/components/base/emoji-picker/Inner.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/emoji-picker/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/error-boundary/index.tsx": { + "react-refresh/only-export-components": { + "count": 3 + }, + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/features/context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/features/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/annotation-ctrl-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/config-param-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/config-param.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/type.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/use-annotation-config.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/features/new-feature-panel/conversation-opener/modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/feature-bar.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/feature-card.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/features/new-feature-panel/file-upload/setting-modal.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/moderation/form-generation.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/moderation/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/moderation/moderation-setting-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/features/new-feature-panel/text-to-speech/param-config-content.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/base/features/new-feature-panel/text-to-speech/voice-settings.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/features/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/dynamic-pdf-preview.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/file-list-in-log.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/no-missing-key": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/file-uploader/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/base/file-uploader/pdf-highlighter-adapter.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/file-uploader/pdf-preview.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/store.tsx": { + "react-refresh/only-export-components": { + "count": 4 + } + }, + "web/app/components/base/file-uploader/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/utils.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/components/base/base-field.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/components/base/base-form.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/base/form/components/base/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/form/components/field/mixed-variable-text-input/placeholder.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/components/field/variable-or-constant-input.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/components/field/variable-selector.tsx": { + "no-console": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/base/field.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/base/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/form-scenarios/demo/index.tsx": { + "no-console": { + "count": 2 + } + }, + "web/app/components/base/form/form-scenarios/input-field/field.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/input-field/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/form-scenarios/node-panel/field.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/node-panel/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/form/hooks/use-check-validated.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/hooks/use-get-validators.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 15 + } + }, + "web/app/components/base/form/utils/secret-input/index.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/ga/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/icons/src/public/avatar/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/public/billing/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/public/common/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/public/education/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/public/files/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 11 + } + }, + "web/app/components/base/icons/src/public/knowledge/dataset-card/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/public/knowledge/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/public/knowledge/online-drive/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/public/llm/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 9 + } + }, + "web/app/components/base/icons/src/public/other/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/public/thought/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/public/tracing/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 21 + } + }, + "web/app/components/base/icons/src/vender/features/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 10 + } + }, + "web/app/components/base/icons/src/vender/knowledge/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 15 + } + }, + "web/app/components/base/icons/src/vender/line/alertsAndFeedback/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/line/arrows/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/vender/line/communication/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/line/development/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/line/editor/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/education/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/files/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/vender/line/financeAndECommerce/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/vender/line/general/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 12 + } + }, + "web/app/components/base/icons/src/vender/line/images/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/layout/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/mediaAndDevices/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/line/others/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 7 + } + }, + "web/app/components/base/icons/src/vender/line/time/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/other/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 8 + } + }, + "web/app/components/base/icons/src/vender/pipeline/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/plugin/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/FinanceAndECommerce/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/alertsAndFeedback/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/arrows/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/communication/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/vender/solid/development/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/vender/solid/editor/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/education/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/files/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/general/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 11 + } + }, + "web/app/components/base/icons/src/vender/solid/mediaAndDevices/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/solid/security/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/shapes/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/users/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/system/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/workflow/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 30 + } + }, + "web/app/components/base/icons/utils.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/image-uploader/__tests__/image-preview.spec.tsx": { + "erasable-syntax-only/parameter-properties": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/__tests__/utils.spec.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/image-uploader/hooks.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/image-uploader/image-link-input.tsx": { + "regexp/no-unused-capturing-group": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/image-list.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/image-preview.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/inline-delete-confirm/index.stories.tsx": { + "no-console": { + "count": 2 + } + }, + "web/app/components/base/input-with-copy/index.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/base/input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/input/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/logo/dify-logo.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/audio-block.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/markdown-blocks/button.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/markdown-blocks/code-block.tsx": { + "react/set-state-in-effect": { + "count": 7 + }, + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/base/markdown-blocks/form.tsx": { + "erasable-syntax-only/enums": { + "count": 3 + } + }, + "web/app/components/base/markdown-blocks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 10 + } + }, + "web/app/components/base/markdown-blocks/link.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/markdown-blocks/paragraph.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/plugin-img.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/plugin-paragraph.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/think-block.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/markdown-blocks/video-block.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/markdown/error-boundary.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/markdown/markdown-utils.ts": { + "regexp/no-unused-capturing-group": { + "count": 1 + } + }, + "web/app/components/base/mermaid/index.tsx": { + "react/set-state-in-effect": { + "count": 7 + }, + "regexp/no-super-linear-backtracking": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/mermaid/utils.ts": { + "regexp/no-unused-capturing-group": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/message-log-modal/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/message-log-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/modal-like-wrap/index.stories.tsx": { + "no-console": { + "count": 3 + } + }, + "web/app/components/base/modal/index.stories.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/modal/modal.stories.tsx": { + "no-console": { + "count": 4 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/new-audio-button/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/node-status/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/notion-connector/index.stories.tsx": { + "no-console": { + "count": 1 + } + }, + "web/app/components/base/notion-page-selector/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/pagination/index.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/base/pagination/type.ts": { + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/base/param-item/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/portal-to-follow-elem/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/index.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/prompt-editor/plugins/component-picker-block/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/component-picker-block/menu.tsx": { + "erasable-syntax-only/parameter-properties": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/context-block/component.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/context-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/current-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/draggable-plugin/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/error-message-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/history-block/component.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/history-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/component-ui.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/hitl-input-block-replacement-block.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/variable-block.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/last-run-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/query-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/request-url-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/shortcuts-popup-plugin/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/update-block.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/variable-block/index.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/workflow-variable-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/app/components/base/prompt-editor/plugins/workflow-variable-block/node.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/prompt-log-modal/index.stories.tsx": { + "no-console": { + "count": 1 + } + }, + "web/app/components/base/prompt-log-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/qrcode/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/radio-card/index.stories.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/radio/component/group/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/radio/context/index.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/radio/index.stories.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/search-input/index.stories.tsx": { + "no-console": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/select/index.stories.tsx": { + "no-console": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/select/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "style/multiline-ternary": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/sort/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/svg-gallery/index.tsx": { + "node/prefer-global/buffer": { + "count": 1 + } + }, + "web/app/components/base/switch/index.stories.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/tab-slider/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/tag-input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/tag-management/__tests__/panel.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/tag-management/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/tag-management/tag-item-editor.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/tag-management/tag-remove-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/text-generation/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/text-generation/types.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/textarea/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/textarea/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/video-gallery/VideoPlayer.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/voice-input/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/voice-input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/voice-input/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/with-input-validation/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "react/component-hook-factories": { + "count": 1 + } + }, + "web/app/components/base/with-input-validation/index.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/zendesk/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/billing/annotation-full/modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/billing/billing-page/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/billing/plan-upgrade-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/billing/plan/assets/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/billing/plan/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/billing/pricing/assets/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 12 + } + }, + "web/app/components/billing/pricing/plan-switcher/plan-range-switcher.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/billing/pricing/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/billing/priority-label/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/billing/type.ts": { + "erasable-syntax-only/enums": { + "count": 4 + } + }, + "web/app/components/billing/upgrade-btn/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/billing/usage-info/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/document-picker/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/document-picker/preview-document-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-previewer/index.tsx": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-uploader/__tests__/store.spec.tsx": { + "react/error-boundaries": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-uploader/hooks/use-upload.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/common/image-uploader/image-uploader-in-retrieval-testing/image-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-uploader/store.tsx": { + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/app/components/datasets/common/retrieval-method-info/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/common/retrieval-param-config/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/create-options/create-from-dsl-modal/dsl-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/create-options/create-from-dsl-modal/hooks/use-dsl-import.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/create-options/create-from-dsl-modal/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/list/template-card/details/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/list/template-card/details/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/list/template-card/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/embedding-process/indexing-progress-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/empty-dataset-creation-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/file-preview/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/create/notion-page-preview/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-one/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/datasets/create/step-one/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/components/general-chunking-options.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/datasets/create/step-two/components/indexing-mode-section.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/create/step-two/components/inputs.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/datasets/create/step-two/hooks/use-indexing-config.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/datasets/create/step-two/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "react-hooks/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/preview-item/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/create/stop-embedding-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/base/checkbox-with-label.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/base/field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/firecrawl/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-console": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/datasets/create/website/firecrawl/options.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/jina-reader/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-console": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/create/website/jina-reader/options.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/watercrawl/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-console": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/datasets/create/website/watercrawl/options.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/document-list/components/document-table-row.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/document-list/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/datasets/documents/components/document-list/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/datasets/documents/components/documents-header.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/operations.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/rename-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/base/credential-selector/__tests__/index.spec.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/base/credential-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/base/header.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-documents/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/file-list/header/breadcrumbs/bucket.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/file-list/header/breadcrumbs/dropdown/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/file-list/list/item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/index.tsx": { + "react/set-state-in-effect": { + "count": 5 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/store/provider.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/store/slices/online-drive.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/website-crawl/base/checkbox-with-label.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/website-crawl/base/options/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/website-crawl/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/process-documents/form.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/process-documents/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/processing/embedding-process/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/steps/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/types.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/batch-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/common/chunk-content.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/common/regeneration-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/common/summary-status.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/datasets/documents/detail/completed/components/menu-bar.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/documents/detail/completed/components/segment-list-content.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/display-toggle.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/datasets/documents/detail/completed/hooks/use-search-filter.ts": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/use-memo": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/segment-card/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/status-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/context.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/embedding/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/datasets/documents/detail/embedding/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/components/doc-type-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/components/field-info.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/components/metadata-field-list.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/hooks/use-metadata-state.ts": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 4 + }, + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/datasets/documents/detail/metadata/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/segment-add/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/settings/pipeline-settings/index.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/datasets/documents/detail/settings/pipeline-settings/process-documents/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/documents/status-item/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/external-api/external-api-modal/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/external-knowledge-base/create/ExternalApiSelect.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/external-knowledge-base/create/ExternalApiSelection.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/extra-info/statistics.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/formatted-text/flavours/type.ts": { + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/chunk-detail-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/query-input/textarea.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/result-item-external.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/score.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/index.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/datasets/list/dataset-card/components/dataset-card-footer.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/list/dataset-card/hooks/use-dataset-card-state.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/edit-metadata-batch/edited-beacon.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/edit-metadata-batch/input-combined.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/edit-metadata-batch/modal.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/hooks/use-edit-dataset-metadata.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/hooks/use-metadata-document.ts": { + "ts/no-explicit-any": { + "count": 1 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/create-content.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/create-metadata-modal.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/dataset-metadata-drawer.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/select-metadata-modal.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/metadata-document/info-group.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/datasets/rename-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/settings/chunk-structure/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/settings/index-method/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/settings/index-method/keyword-number.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/settings/permission-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/no-missing-key": { + "count": 1 + } + }, + "web/app/components/datasets/settings/summary-index-setting.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/develop/code.tsx": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/develop/md.tsx": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/develop/secret-key/input-copy.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/develop/secret-key/secret-key-generate.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/develop/secret-key/secret-key-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/explore/banner/banner-item.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/explore/banner/indicator-button.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/explore/create-app-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + }, + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/explore/item-operation/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/explore/try-app/app/chat.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/explore/try-app/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/explore/try-app/tab.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/goto-anything/actions/commands/command-bus.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/goto-anything/actions/commands/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/goto-anything/actions/commands/registry.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/goto-anything/actions/commands/slash.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/goto-anything/actions/commands/types.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/goto-anything/actions/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/goto-anything/actions/types.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/goto-anything/command-selector.tsx": { + "react/unsupported-syntax": { + "count": 2 + } + }, + "web/app/components/goto-anything/components/footer.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/goto-anything/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/goto-anything/context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/goto-anything/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/goto-anything/hooks/use-goto-anything-results.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/header/account-about/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-dropdown/compliance.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/header/account-setting/api-based-extension-page/modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/api-based-extension-page/selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/card.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/data-source-page-new/configure.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/header/account-setting/data-source-page-new/hooks/use-marketplace-all-plugins.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/install-from-marketplace.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/operator.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/header/account-setting/data-source-page-new/types.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/key-validator/declarations.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/language-page/index.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/header/account-setting/members-page/invite-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/header/account-setting/members-page/operation/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/members-page/transfer-ownership-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/header/account-setting/members-page/transfer-ownership-modal/member-selector.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/declarations.ts": { + "erasable-syntax-only/enums": { + "count": 11 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/header/account-setting/model-provider-page/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/add-credential-in-load-balancing.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/add-custom-model.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/authorized/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/config-provider.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/credential-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/use-auth.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/use-custom-models.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/use-model-form-schemas.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 7 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/switch-credential-in-load-balancing.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-modal/Form.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-modal/Input.tsx": { + "unicorn/prefer-number-properties": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-modal/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-parameter-modal/configuration-button.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-parameter-modal/model-display.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-parameter-modal/status-indicators.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-selector/feature-icon.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/cooldown-timer.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-auth-dropdown/__tests__/use-activate-credential.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-list-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-load-balancing-configs.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-load-balancing-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/priority-use-tip.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/utils.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/header/account-setting/plugin-page/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/header/app-nav/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/header-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/base/badges/icon-with-tooltip.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/base/key-value-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/card/index.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/hooks.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/plugins/install-plugin/hooks/use-fold-anim-into.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-bundle/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-bundle/item/github-item.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-bundle/steps/hooks/use-install-multi-state.ts": { + "react-hooks/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-from-github/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/plugins/install-plugin/install-from-github/steps/selectPackage.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-from-local-package/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-from-local-package/steps/uploading.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/install-plugin/install-from-marketplace/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/marketplace/hooks.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/plugins/marketplace/search-box/tags-filter.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/marketplace/sort-dropdown/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/authorize/add-oauth-button.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorize/api-key-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorize/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/authorize/oauth-client-settings.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorized-in-node.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/authorized/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorized/item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/hooks/use-get-api.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 12 + }, + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-auth/plugin-auth-in-agent.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "no-barrel-files/no-barrel-files": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/agent-strategy-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/app-selector/app-inputs-form.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/plugins/plugin-detail-panel/app-selector/app-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/app-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/datasource-action-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header/components/plugin-source-badge.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-detail-panel/endpoint-card.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/endpoint-list.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/endpoint-modal.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/plugins/plugin-detail-panel/model-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/model-selector/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-detail-panel/model-selector/tts-params-panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/multiple-tool-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/strategy-detail.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/common-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/hooks/use-common-modal-state.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/hooks/use-oauth-client-state.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "no-restricted-imports": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/oauth-client.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/edit/apikey-edit-modal.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/edit/manual-edit-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/edit/oauth-edit-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/list-view.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/log-viewer.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/selector-entry.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/selector-view.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/subscription-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 7 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/reasoning-config-form.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/schema-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/tool-item.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/trigger/event-detail-drawer.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/plugins/plugin-item/action.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-item/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-mutation-model/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/context.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/debug-info.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/empty/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-page/filter-management/category-filter.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/filter-management/tag-filter.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/install-plugin-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-page/plugin-info.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/plugin-tasks/components/task-status-indicator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/plugin-tasks/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/readme-panel/index.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/plugins/readme-panel/store.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/reference-setting-modal/auto-update-setting/strategy-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/reference-setting-modal/auto-update-setting/tool-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/reference-setting-modal/auto-update-setting/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/plugins/reference-setting-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/types.ts": { + "erasable-syntax-only/enums": { + "count": 7 + }, + "ts/no-explicit-any": { + "count": 25 + } + }, + "web/app/components/plugins/update-plugin/from-market-place.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/hidden-fields.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/hooks.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/initial-fields.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/show-all-settings.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/types.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/hooks.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/label-right-content/global-inputs.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/preparation/document-processing/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/preparation/document-processing/options.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/preparation/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/result/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/result/result-preview/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/result/result-preview/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/rag-pipeline/components/publish-as-knowledge-pipeline-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-children.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-header/publisher/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-header/run-mode.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-main.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/update-dsl-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/version-mismatch-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 9 + } + }, + "web/app/components/rag-pipeline/hooks/use-DSL.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/hooks/use-input-fields.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/hooks/use-nodes-sync-draft.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/hooks/use-pipeline-config.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/rag-pipeline/hooks/use-pipeline-init.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/hooks/use-pipeline-run.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/store/index.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/utils/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/utils/nodes.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/share/text-generation/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/share/text-generation/info-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/share/text-generation/menu-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/share/text-generation/no-data/index.tsx": { + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/share/text-generation/result/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/share/text-generation/run-batch/csv-reader/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/share/text-generation/run-once/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/share/text-generation/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/share/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/tools/edit-custom-collection-modal/config-credentials.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/edit-custom-collection-modal/get-schema.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/edit-custom-collection-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/tools/edit-custom-collection-modal/test-api.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/labels/selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/create-card.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/mcp/detail/content.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/tools/mcp/detail/operation-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/detail/tool-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/mcp-server-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/tools/mcp/mcp-server-param-item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/mcp/mcp-service-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/provider-card.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/tools/provider-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/provider/empty.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/setting/build-in/config-credentials.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/tools/types.ts": { + "erasable-syntax-only/enums": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/tools/workflow-tool/confirm-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/workflow-tool/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/workflow-tool/method-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow-app/components/workflow-children.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow-app/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 13 + } + }, + "web/app/components/workflow-app/hooks/use-DSL.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow-app/hooks/use-nodes-sync-draft.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-init.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-refresh-draft.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-run.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-template.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow-app/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow-app/store/workflow/workflow-slice.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/all-start-blocks.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/blocks.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/featured-tools.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/featured-triggers.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/hooks.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/index-bar.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/main.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/market-place-plugin/action.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/market-place-plugin/item.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/rag-tool-recommendations/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/start-blocks.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tabs.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool/action-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool/tool-list-flat-view/list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool/tool.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/trigger-plugin/action-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/trigger-plugin/item.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/types.ts": { + "erasable-syntax-only/enums": { + "count": 4 + } + }, + "web/app/components/workflow/block-selector/use-check-vertical-scrollbar.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/use-sticky-scroll.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/view-type-select.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/candidate-node-main.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/datasets-detail-store/provider.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/dsl-export-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/header/run-mode.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/header/test-run-menu.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/header/version-history-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/header/view-history.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/workflow/header/view-workflow-history.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks-store/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/hooks-store/provider.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/hooks-store/store.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/hooks/__tests__/use-checklist.spec.ts": { + "react/error-boundaries": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 27 + } + }, + "web/app/components/workflow/hooks/use-checklist.ts": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/hooks/use-dynamic-test-run-options.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/hooks/use-helpline.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-inspect-vars-crud-common.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-nodes-interactions.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/hooks/use-serial-async-callback.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-workflow-interactions.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/workflow/hooks/use-workflow-run-event/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 19 + } + }, + "web/app/components/workflow/hooks/use-workflow-run-event/use-workflow-agent-log.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-workflow-run-event/use-workflow-finished.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-workflow-search.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/hooks/use-workflow-variables.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/add-variable-popup-with-position.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/components/agent-strategy-selector.tsx": { + "no-restricted-imports": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/_base/components/agent-strategy.tsx": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/_base/components/before-run-form/form-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/workflow/nodes/_base/components/before-run-form/form.tsx": { + "ts/no-explicit-any": { + "count": 3 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/before-run-form/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/_base/components/collapse/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/config-vision.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/editor/code-editor/editor-support-vars.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/_base/components/editor/code-editor/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/_base/components/entry-node-container.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/default-value.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/error-handle-on-panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/error-handle-type-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/form-input-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/_base/components/form-input-type-switch.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/help-link.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/input-support-select-var.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/input-var-type-icon.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/layout/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 7 + }, + "react-refresh/only-export-components": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/_base/components/mcp-tool-availability.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/mcp-tool-not-support-tooltip.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/memory-config.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/mixed-variable-text-input/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/mixed-variable-text-input/placeholder.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/next-step/operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/node-control.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/node-handle.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/option-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/panel-operator/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/prompt/editor.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/_base/components/readonly-input-with-select-var.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/selector.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/components/setting-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/switch-plugin-version.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/constant-field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/match-schema-type.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/object-child-tree-panel/picker/field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/output-var-list.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/utils.ts": { + "ts/no-explicit-any": { + "count": 32 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-list.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-reference-picker.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-reference-vars.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-type-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/variable-label/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/variable-label/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/last-run/index.tsx": { + "react/set-state-in-effect": { + "count": 7 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/last-run/use-last-run.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/tab.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-one-step-run.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 22 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-output-var-list.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-toggle-expend.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-var-list.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/node.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/components/model-bar.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/components/tool-icon.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/default.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/agent/node.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/agent/panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/use-config.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/agent/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/answer/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/components/operation-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/assigner/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/dependency-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "regexp/no-useless-assertions": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/code/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/components.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/data-source-empty/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/data-source/default.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/data-source/hooks/use-before-run-form.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/data-source/hooks/use-config.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/data-source/panel.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/data-source/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/data-source/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/document-extractor/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/document-extractor/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/end/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/end/node.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/components/authorization/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/components/curl-panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/components/key-value/key-value-edit/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/http/components/key-value/key-value-edit/item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/types.ts": { + "erasable-syntax-only/enums": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/http/use-config.ts": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/human-input/components/button-style-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/email-configure-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/method-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/method-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/recipient/email-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/recipient/member-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/test-email-sender.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/upgrade-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/form-content-preview.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/form-content.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "react/no-nested-component-definitions": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/human-input/components/variable-in-markdown.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/human-input/panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-add.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-input.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-var-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-number-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-wrap.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/if-else/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/iteration-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/node.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/use-config.ts": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/chunk-structure/selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/index-method.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/hooks.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/search-method-option.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/top-k-and-score-threshold.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/type.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/knowledge-base/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-barrel-files/no-barrel-files": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/knowledge-base/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/knowledge-base/utils.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/add-condition.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-common-variable-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-value-method.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-variable-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/metadata-filter/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/metadata-filter/metadata-filter-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/metadata-trigger.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/retrieval-config.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/node.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/types.ts": { + "erasable-syntax-only/enums": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/list-operator/components/filter-condition.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/list-operator/components/sub-variable-picker.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/list-operator/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/list-operator/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/config-prompt-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/llm/components/config-prompt.tsx": { + "react/unsupported-syntax": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/code-editor.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-importer.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-config.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/assets/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/generated-result.tsx": { + "style/multiline-ternary": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/prompt-editor.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/context.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/edit-card/actions.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/edit-card/auto-width-input.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/edit-card/type-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/structure-output.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/workflow/nodes/llm/utils.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/loop-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-add.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-input.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-var-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-number-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-wrap.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/form-item.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/input-mode-selec.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/item.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/variable-type-select.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/use-config.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/loop/use-single-run-form-params.helpers.ts": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/components/extract-parameter/__tests__/list.spec.tsx": { + "unused-imports/no-unused-vars": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/components/extract-parameter/update.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/workflow/nodes/question-classifier/components/advanced-setting.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/components/class-item.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/components/class-list.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "react/unsupported-syntax": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/question-classifier/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/node.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/question-classifier/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/start/panel.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/start/use-config.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/start/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/template-transform/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/template-transform/use-config.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/template-transform/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/tool/components/copy-id.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/input-var-list.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/tool/components/mixed-variable-text-input/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/mixed-variable-text-input/placeholder.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/tool-form/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/tool-form/item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/default.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/tool/hooks/use-config.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/tool/hooks/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/tool/output-schema-utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/tool/panel.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/tool/types.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/components/trigger-form/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/components/trigger-form/item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/default.ts": { + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/node.tsx": { + "react/unsupported-syntax": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/panel.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/types.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/use-config.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/utils/form-helpers.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/trigger-schedule/components/frequency-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-schedule/components/monthly-days-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-schedule/default.ts": { + "regexp/no-unused-capturing-group": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 10 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/components/generic-table.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/components/parameter-table.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/panel.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/variable-assigner/components/var-group-item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/variable-assigner/default.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/variable-assigner/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/note-node/note-editor/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/workflow/note-node/note-editor/plugins/link-editor-plugin/component.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/color-picker.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/command.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/font-size-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/utils.ts": { + "regexp/no-useless-quantifier": { + "count": 1 + } + }, + "web/app/components/workflow/operator/add-block.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/operator/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/operator/more-actions.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/operator/tip-popup.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/operator/zoom-in-out.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/chat-record/index.tsx": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/panel/chat-record/user-input.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/array-bool-list.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/array-value-list.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/object-value-item.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + }, + "unicorn/prefer-number-properties": { + "count": 2 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/object-value-list.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/variable-type-select.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/type.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/panel/debug-and-preview/chat-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/panel/debug-and-preview/conversation-variable-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/panel/debug-and-preview/hooks.ts": { + "ts/no-explicit-any": { + "count": 12 + } + }, + "web/app/components/workflow/panel/debug-and-preview/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/env-panel/variable-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/panel/human-input-form-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/panel/inputs-panel.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/panel/version-history-panel/context-menu/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/version-history-panel/delete-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/version-history-panel/filter/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/version-history-panel/restore-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/workflow-preview.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/run/agent-log/agent-log-nav-more.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/run/agent-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/run/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/run/iteration-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/iteration-log/iteration-log-trigger.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/workflow/run/loop-log/__tests__/loop-log-trigger.spec.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/run/loop-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/loop-log/loop-log-trigger.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/workflow/run/node.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/run/output-panel.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/run/result-panel.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/run/result-text.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/run/retry-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/utils/format-log/agent/index.ts": { + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/workflow/run/utils/format-log/graph-to-log-struct.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/run/utils/format-log/index.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/run/utils/format-log/iteration/index.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/run/utils/format-log/loop/index.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/run/utils/format-log/parallel/index.ts": { + "no-console": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/store/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/store/workflow/debug/inspect-vars-slice.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/store/workflow/workflow-draft-slice.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/store/workflow/workflow-slice.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/types.ts": { + "erasable-syntax-only/enums": { + "count": 16 + }, + "ts/no-empty-object-type": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/update-dsl-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/utils/__tests__/node-navigation.spec.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/utils/data-source.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/utils/debug.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/utils/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 9 + } + }, + "web/app/components/workflow/utils/node-navigation.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/utils/node.ts": { + "regexp/no-super-linear-backtracking": { + "count": 1 + } + }, + "web/app/components/workflow/utils/tool.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/utils/workflow-init.ts": { + "ts/no-explicit-any": { + "count": 12 + } + }, + "web/app/components/workflow/utils/workflow.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/display-content.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/group.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/variable-inspect/left.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/listening.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/variable-inspect/panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/right.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/variable-inspect/trigger.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/utils.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/variable-inspect/value-content.tsx": { + "react/set-state-in-effect": { + "count": 5 + }, + "regexp/no-super-linear-backtracking": { + "count": 1 + }, + "regexp/no-unused-capturing-group": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/workflow-history-store.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/base.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/constants.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/iteration-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/loop-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/zoom-in-out.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/education-apply/expire-notice-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/education-apply/hooks.ts": { + "react/set-state-in-effect": { + "count": 5 + } + }, + "web/app/education-apply/search-input.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/education-apply/verify-state-modal.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/forgot-password/ForgotPasswordForm.spec.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/init/InitPasswordPopup.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/install/installForm.spec.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/layout.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/reset-password/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signin/_header.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/signin/components/mail-and-password-auth.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signin/invite-settings/page.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/signin/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signin/one-more-step.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signup/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/context/external-api-panel-context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/context/external-knowledge-api-context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/context/global-public-context.tsx": { + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/context/hooks/use-trigger-events-limit-modal.ts": { + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/context/modal-context-provider.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/context/modal-context.test.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/context/modal-context.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/context/provider-context-provider.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/context/web-app-context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/hooks/use-async-window-open.spec.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/hooks/use-format-time-from-now.spec.ts": { + "regexp/no-dupe-disjunctions": { + "count": 5 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/hooks/use-metadata.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/hooks/use-mitt.ts": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/hooks/use-oauth.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/hooks/use-pay.tsx": { + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/i18n-config/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/i18n-config/lib.client.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/i18n/de-DE/billing.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/en-US/app-debug.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/app-debug.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/app.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/plugin-trigger.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/tools.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/workflow.json": { + "no-irregular-whitespace": { + "count": 2 + } + }, + "web/i18n/pt-BR/common.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/ru-RU/common.json": { + "no-irregular-whitespace": { + "count": 2 + } + }, + "web/i18n/uk-UA/app-debug.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/models/access-control.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/models/app.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/models/common.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/models/datasets.ts": { + "erasable-syntax-only/enums": { + "count": 8 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/models/debug.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/models/log.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/models/pipeline.ts": { + "erasable-syntax-only/enums": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/models/share.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/plugins/dev-proxy/server.spec.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/scripts/component-analyzer.js": { + "regexp/no-unused-capturing-group": { + "count": 6 + } + }, + "web/service/access-control.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/service/annotation.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/service/apps.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/service/base.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/service/client.spec.ts": { + "next/no-assign-module-variable": { + "count": 1 + } + }, + "web/service/common.ts": { + "ts/no-explicit-any": { + "count": 29 + } + }, + "web/service/datasets.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/service/debug.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/service/fetch.ts": { + "regexp/no-unused-capturing-group": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/service/knowledge/use-dataset.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/service/share.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/service/try-app.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/service/use-apps.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/service/use-common.ts": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/service/use-endpoints.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/service/use-flow.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + } + }, + "web/service/use-pipeline.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/service/use-plugins-auth.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/service/use-plugins.ts": { + "react/set-state-in-effect": { + "count": 1 + }, + "regexp/no-unused-capturing-group": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/service/use-tools.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/service/use-workflow.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/service/utils.spec.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/types/app.ts": { + "erasable-syntax-only/enums": { + "count": 9 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/types/assets.d.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/types/common.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/types/feature.ts": { + "erasable-syntax-only/enums": { + "count": 3 + } + }, + "web/types/lamejs.d.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/types/pipeline.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/types/react-18-input-autosize.d.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/types/workflow.ts": { + "ts/no-explicit-any": { + "count": 17 + } + }, + "web/utils/clipboard.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/completion-params.spec.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/utils/completion-params.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/context.ts": { + "react/component-hook-factories": { + "count": 1 + } + }, + "web/utils/error-parser.ts": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/get-icon.spec.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/utils/gtag.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/utils/index.spec.ts": { + "test/no-identical-title": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/utils/index.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/utils/mcp.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/model-config.spec.ts": { + "ts/no-explicit-any": { + "count": 13 + } + }, + "web/utils/model-config.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/utils/navigation.spec.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/utils/tool-call.spec.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/validators.ts": { + "ts/no-explicit-any": { + "count": 2 + } + } +} \ No newline at end of file diff --git a/eslint.config.mjs b/eslint.config.mjs new file mode 100644 index 0000000000..5e81e95f2f --- /dev/null +++ b/eslint.config.mjs @@ -0,0 +1,65 @@ +// @ts-check + +import antfu, { GLOB_MARKDOWN } from '@antfu/eslint-config' +import md from 'eslint-markdown' +import markdownPreferences from 'eslint-plugin-markdown-preferences' + +export default antfu( + { + ignores: original => [ + '**', + '!packages/**', + '!web/**', + '!e2e/**', + '!eslint.config.mjs', + '!package.json', + '!vite.config.ts', + ...original, + ], + typescript: { + overrides: { + 'ts/consistent-type-definitions': ['error', 'type'], + 'ts/no-explicit-any': 'error', + 'ts/no-redeclare': 'off', + }, + erasableOnly: true, + }, + test: { + overrides: { + 'test/prefer-lowercase-title': 'off', + }, + }, + stylistic: { + overrides: { + 'antfu/top-level-function': 'off', + }, + }, + e18e: false, + pnpm: false, + }, + markdownPreferences.configs.standard, + { + files: [GLOB_MARKDOWN], + plugins: { md }, + rules: { + 'md/no-url-trailing-slash': 'error', + 'markdown-preferences/prefer-link-reference-definitions': [ + 'error', + { + minLinks: 1, + }, + ], + 'markdown-preferences/ordered-list-marker-sequence': [ + 'error', + { increment: 'never' }, + ], + 'markdown-preferences/definitions-last': 'error', + 'markdown-preferences/sort-definitions': 'error', + }, + }, + { + rules: { + 'node/prefer-global/process': 'off', + }, + }, +) diff --git a/package.json b/package.json index ce3180214b..5a67b66a9c 100644 --- a/package.json +++ b/package.json @@ -1,14 +1,26 @@ { "name": "dify", + "type": "module", "private": true, - "scripts": { - "prepare": "vp config" - }, - "devDependencies": { - "vite-plus": "catalog:" - }, + "packageManager": "pnpm@10.33.0", "engines": { "node": "^22.22.1" }, - "packageManager": "pnpm@10.33.0" + "scripts": { + "prepare": "vp config", + "type-check": "vp run -r type-check", + "lint": "eslint --cache --concurrency=auto", + "lint:ci": "eslint --cache --cache-strategy content --concurrency 2", + "lint:fix": "vp run lint --fix", + "lint:quiet": "vp run lint --quiet" + }, + "devDependencies": { + "@antfu/eslint-config": "catalog:", + "eslint": "catalog:", + "eslint-markdown": "catalog:", + "eslint-plugin-markdown-preferences": "catalog:", + "eslint-plugin-no-barrel-files": "catalog:", + "vite": "catalog:", + "vite-plus": "catalog:" + } } diff --git a/packages/dify-ui/AGENTS.md b/packages/dify-ui/AGENTS.md new file mode 100644 index 0000000000..ecc968e130 --- /dev/null +++ b/packages/dify-ui/AGENTS.md @@ -0,0 +1,27 @@ +# @langgenius/dify-ui + +This package provides shared design tokens (colors, shadows, typography), the `cn()` utility, and a Tailwind CSS preset consumed by `web/`. + +## Border Radius: Figma Token → Tailwind Class Mapping + +The Figma design system uses `--radius/*` tokens whose scale is **offset by one step** from Tailwind CSS v4 defaults. When translating Figma specs to code, always use this mapping — never use `radius-*` as a CSS class, and never extend `borderRadius` in the preset. + +| Figma Token | Value | Tailwind Class | +| --------------- | ----- | ---------------- | +| `--radius/2xs` | 2px | `rounded-xs` | +| `--radius/xs` | 4px | `rounded-sm` | +| `--radius/sm` | 6px | `rounded-md` | +| `--radius/md` | 8px | `rounded-lg` | +| `--radius/lg` | 10px | `rounded-[10px]` | +| `--radius/xl` | 12px | `rounded-xl` | +| `--radius/2xl` | 16px | `rounded-2xl` | +| `--radius/3xl` | 20px | `rounded-[20px]` | +| `--radius/6xl` | 28px | `rounded-[28px]` | +| `--radius/full` | 999px | `rounded-full` | + +### Rules + +- **Do not** add custom `borderRadius` values to `tailwind-preset.ts`. We use Tailwind v4 defaults and arbitrary values (`rounded-[Npx]`) for sizes without a standard equivalent. +- **Do not** use `radius-*` as CSS class names. The old `@utility radius-*` definitions have been removed. +- When the Figma MCP returns `rounded-[var(--radius/sm, 6px)]`, convert it to the standard Tailwind class from the table above (e.g. `rounded-md`). +- For values without a standard Tailwind equivalent (10px, 20px, 28px), use arbitrary values like `rounded-[10px]`. diff --git a/packages/dify-ui/package.json b/packages/dify-ui/package.json new file mode 100644 index 0000000000..b54fde9b89 --- /dev/null +++ b/packages/dify-ui/package.json @@ -0,0 +1,29 @@ +{ + "name": "@langgenius/dify-ui", + "type": "module", + "version": "0.0.1", + "private": true, + "exports": { + "./styles.css": "./src/styles/styles.css", + "./tailwind-preset": { + "types": "./src/tailwind-preset.ts", + "import": "./src/tailwind-preset.ts" + }, + "./cn": { + "types": "./src/cn.ts", + "import": "./src/cn.ts" + } + }, + "scripts": { + "type-check": "tsc" + }, + "dependencies": { + "clsx": "catalog:", + "tailwind-merge": "catalog:" + }, + "devDependencies": { + "@dify/tsconfig": "workspace:*", + "tailwindcss": "catalog:", + "typescript": "catalog:" + } +} diff --git a/web/utils/classnames.ts b/packages/dify-ui/src/cn.ts similarity index 100% rename from web/utils/classnames.ts rename to packages/dify-ui/src/cn.ts diff --git a/packages/dify-ui/src/styles/components.css b/packages/dify-ui/src/styles/components.css new file mode 100644 index 0000000000..ab02be97fb --- /dev/null +++ b/packages/dify-ui/src/styles/components.css @@ -0,0 +1,69 @@ +[data-dify-scrollbar]::before, +[data-dify-scrollbar]::after { + content: ''; + position: absolute; + z-index: 1; + border-radius: 9999px; + pointer-events: none; + opacity: 0; + transition: opacity 150ms ease; +} + +[data-dify-scrollbar][data-orientation='vertical']::before { + left: 50%; + top: 4px; + width: 4px; + height: 12px; + transform: translateX(-50%); + background: linear-gradient(to bottom, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='vertical']::after { + left: 50%; + bottom: 4px; + width: 4px; + height: 12px; + transform: translateX(-50%); + background: linear-gradient(to top, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='horizontal']::before { + top: 50%; + left: 4px; + width: 12px; + height: 4px; + transform: translateY(-50%); + background: linear-gradient(to right, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='horizontal']::after { + top: 50%; + right: 4px; + width: 12px; + height: 4px; + transform: translateY(-50%); + background: linear-gradient(to left, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='vertical']:not([data-overflow-y-start])::before { + opacity: 1; +} + +[data-dify-scrollbar][data-orientation='vertical']:not([data-overflow-y-end])::after { + opacity: 1; +} + +[data-dify-scrollbar][data-orientation='horizontal']:not([data-overflow-x-start])::before { + opacity: 1; +} + +[data-dify-scrollbar][data-orientation='horizontal']:not([data-overflow-x-end])::after { + opacity: 1; +} + +@media (prefers-reduced-motion: reduce) { + [data-dify-scrollbar]::before, + [data-dify-scrollbar]::after { + transition: none; + } +} diff --git a/packages/dify-ui/src/styles/styles.css b/packages/dify-ui/src/styles/styles.css new file mode 100644 index 0000000000..fb410b2d5f --- /dev/null +++ b/packages/dify-ui/src/styles/styles.css @@ -0,0 +1,4 @@ +@import '../themes/light.css' layer(base); +@import '../themes/dark.css' layer(base); +@import './utilities.css'; +@import './components.css'; diff --git a/packages/dify-ui/src/styles/utilities.css b/packages/dify-ui/src/styles/utilities.css new file mode 100644 index 0000000000..69b15d4c10 --- /dev/null +++ b/packages/dify-ui/src/styles/utilities.css @@ -0,0 +1,272 @@ +@utility system-kbd { + font-size: 12px; + font-weight: 500; + line-height: 16px; +} + +@utility system-2xs-regular-uppercase { + font-size: 10px; + font-weight: 400; + text-transform: uppercase; + line-height: 12px; +} + +@utility system-2xs-regular { + font-size: 10px; + font-weight: 400; + line-height: 12px; +} + +@utility system-2xs-medium { + font-size: 10px; + font-weight: 500; + line-height: 12px; +} + +@utility system-2xs-medium-uppercase { + font-size: 10px; + font-weight: 500; + text-transform: uppercase; + line-height: 12px; +} + +@utility system-2xs-semibold-uppercase { + font-size: 10px; + font-weight: 600; + text-transform: uppercase; + line-height: 12px; +} + +@utility system-xs-regular { + font-size: 12px; + font-weight: 400; + line-height: 16px; +} + +@utility system-xs-regular-uppercase { + font-size: 12px; + font-weight: 400; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-xs-medium { + font-size: 12px; + font-weight: 500; + line-height: 16px; +} + +@utility system-xs-medium-uppercase { + font-size: 12px; + font-weight: 500; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-xs-semibold { + font-size: 12px; + font-weight: 600; + line-height: 16px; +} + +@utility system-xs-semibold-uppercase { + font-size: 12px; + font-weight: 600; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-sm-regular { + font-size: 13px; + font-weight: 400; + line-height: 16px; +} + +@utility system-sm-medium { + font-size: 13px; + font-weight: 500; + line-height: 16px; +} + +@utility system-sm-medium-uppercase { + font-size: 13px; + font-weight: 500; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-sm-semibold { + font-size: 13px; + font-weight: 600; + line-height: 16px; +} + +@utility system-sm-semibold-uppercase { + font-size: 13px; + font-weight: 600; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-md-regular { + font-size: 14px; + font-weight: 400; + line-height: 20px; +} + +@utility system-md-medium { + font-size: 14px; + font-weight: 500; + line-height: 20px; +} + +@utility system-md-semibold { + font-size: 14px; + font-weight: 600; + line-height: 20px; +} + +@utility system-md-semibold-uppercase { + font-size: 14px; + font-weight: 600; + text-transform: uppercase; + line-height: 20px; +} + +@utility system-xl-medium { + font-size: 16px; + font-weight: 500; + line-height: 24px; +} + +@utility system-xl-semibold { + font-size: 16px; + font-weight: 600; + line-height: 24px; +} + +@utility code-xs-regular { + font-size: 12px; + font-weight: 400; + line-height: 1.5; +} + +@utility code-sm-regular { + font-size: 13px; + font-weight: 400; + line-height: 1.5; +} + +@utility code-sm-semibold { + font-size: 13px; + font-weight: 600; + line-height: 1.5; +} + +@utility body-xs-regular { + font-size: 12px; + font-weight: 400; + line-height: 16px; +} + +@utility body-xs-medium { + font-size: 12px; + font-weight: 500; + line-height: 16px; +} + +@utility body-sm-regular { + font-size: 13px; + font-weight: 400; + line-height: 16px; +} + +@utility body-sm-medium { + font-size: 13px; + font-weight: 500; + line-height: 16px; +} + +@utility body-md-regular { + font-size: 14px; + font-weight: 400; + line-height: 20px; +} + +@utility body-md-medium { + font-size: 14px; + font-weight: 500; + line-height: 20px; +} + +@utility body-lg-regular { + font-size: 15px; + font-weight: 400; + line-height: 20px; +} + +@utility body-2xl-regular { + font-size: 18px; + font-weight: 400; + line-height: 1.5; +} + +@utility title-xs-semi-bold { + font-size: 12px; + font-weight: 600; + line-height: 16px; +} + +@utility title-sm-semi-bold { + font-size: 13px; + font-weight: 600; + line-height: 16px; +} + +@utility title-md-semi-bold { + font-size: 14px; + font-weight: 600; + line-height: 20px; +} + +@utility title-lg-bold { + font-size: 15px; + font-weight: 700; + line-height: 1.2; +} + +@utility title-xl-semi-bold { + font-size: 16px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-2xl-semi-bold { + font-size: 18px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-3xl-semi-bold { + font-size: 20px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-3xl-bold { + font-size: 20px; + font-weight: 700; + line-height: 1.2; +} + +@utility title-4xl-semi-bold { + font-size: 24px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-5xl-bold { + font-size: 30px; + font-weight: 700; + line-height: 1.2; +} diff --git a/packages/dify-ui/src/tailwind-preset.ts b/packages/dify-ui/src/tailwind-preset.ts new file mode 100644 index 0000000000..2dbf4781b0 --- /dev/null +++ b/packages/dify-ui/src/tailwind-preset.ts @@ -0,0 +1,87 @@ +import tailwindThemeVarDefine from './themes/tailwind-theme-var-define' + +const difyUIPreset = { + theme: { + extend: { + colors: { + gray: { + 25: '#fcfcfd', + 50: '#f9fafb', + 100: '#f2f4f7', + 200: '#eaecf0', + 300: '#d0d5dd', + 400: '#98a2b3', + 500: '#667085', + 600: '#344054', + 700: '#475467', + 800: '#1d2939', + 900: '#101828', + }, + primary: { + 25: '#f5f8ff', + 50: '#eff4ff', + 100: '#d1e0ff', + 200: '#b2ccff', + 300: '#84adff', + 400: '#528bff', + 500: '#2970ff', + 600: '#155eef', + 700: '#004eeb', + 800: '#0040c1', + 900: '#00359e', + }, + blue: { + 500: '#E1EFFE', + }, + green: { + 50: '#F3FAF7', + 100: '#DEF7EC', + 800: '#03543F', + }, + yellow: { + 100: '#FDF6B2', + 800: '#723B13', + }, + purple: { + 50: '#F6F5FF', + 200: '#DCD7FE', + }, + indigo: { + 25: '#F5F8FF', + 50: '#EEF4FF', + 100: '#E0EAFF', + 300: '#A4BCFD', + 400: '#8098F9', + 600: '#444CE7', + 800: '#2D31A6', + }, + ...tailwindThemeVarDefine, + }, + boxShadow: { + 'xs': '0px 1px 2px 0px rgba(16, 24, 40, 0.05)', + 'sm': '0px 1px 2px 0px rgba(16, 24, 40, 0.06), 0px 1px 3px 0px rgba(16, 24, 40, 0.10)', + 'sm-no-bottom': '0px -1px 2px 0px rgba(16, 24, 40, 0.06), 0px -1px 3px 0px rgba(16, 24, 40, 0.10)', + 'md': '0px 2px 4px -2px rgba(16, 24, 40, 0.06), 0px 4px 8px -2px rgba(16, 24, 40, 0.10)', + 'lg': '0px 4px 6px -2px rgba(16, 24, 40, 0.03), 0px 12px 16px -4px rgba(16, 24, 40, 0.08)', + 'xl': '0px 8px 8px -4px rgba(16, 24, 40, 0.03), 0px 20px 24px -4px rgba(16, 24, 40, 0.08)', + '2xl': '0px 24px 48px -12px rgba(16, 24, 40, 0.18)', + '3xl': '0px 32px 64px -12px rgba(16, 24, 40, 0.14)', + 'status-indicator-green-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-success-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-warning-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-warning-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-red-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-error-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-blue-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-normal-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-gray-shadow': '0px 1px 2px 0px var(--color-components-badge-status-light-disabled-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + }, + opacity: { + 2: '0.02', + 8: '0.08', + }, + fontSize: { + '2xs': '0.625rem', + }, + }, + }, + plugins: [], +} + +export default difyUIPreset diff --git a/web/themes/dark.css b/packages/dify-ui/src/themes/dark.css similarity index 100% rename from web/themes/dark.css rename to packages/dify-ui/src/themes/dark.css diff --git a/web/themes/light.css b/packages/dify-ui/src/themes/light.css similarity index 100% rename from web/themes/light.css rename to packages/dify-ui/src/themes/light.css diff --git a/web/themes/tailwind-theme-var-define.ts b/packages/dify-ui/src/themes/tailwind-theme-var-define.ts similarity index 100% rename from web/themes/tailwind-theme-var-define.ts rename to packages/dify-ui/src/themes/tailwind-theme-var-define.ts diff --git a/packages/dify-ui/tsconfig.json b/packages/dify-ui/tsconfig.json new file mode 100644 index 0000000000..b31c48ead6 --- /dev/null +++ b/packages/dify-ui/tsconfig.json @@ -0,0 +1,14 @@ +{ + "extends": "@dify/tsconfig/base.json", + "compilerOptions": { + "rootDir": "src", + "declaration": true, + "declarationMap": true, + "noEmit": false, + "outDir": "dist", + "sourceMap": true, + "isolatedModules": true, + "verbatimModuleSyntax": true + }, + "include": ["src"] +} diff --git a/packages/iconify-collections/assets/public/common/enter-key.svg b/packages/iconify-collections/assets/public/common/enter-key.svg new file mode 100644 index 0000000000..edfddfc188 --- /dev/null +++ b/packages/iconify-collections/assets/public/common/enter-key.svg @@ -0,0 +1,4 @@ + + + + diff --git a/packages/iconify-collections/assets/public/other/comment.svg b/packages/iconify-collections/assets/public/other/comment.svg new file mode 100644 index 0000000000..0f0609f0b6 --- /dev/null +++ b/packages/iconify-collections/assets/public/other/comment.svg @@ -0,0 +1,3 @@ + + + diff --git a/packages/iconify-collections/custom-public/index.d.ts b/packages/iconify-collections/custom-public/index.d.ts index ecca5633d4..be2442726c 100644 --- a/packages/iconify-collections/custom-public/index.d.ts +++ b/packages/iconify-collections/custom-public/index.d.ts @@ -1,4 +1,4 @@ -export interface IconifyJSON { +export type IconifyJSON = { prefix: string icons: Record aliases?: Record @@ -7,7 +7,7 @@ export interface IconifyJSON { lastModified?: number } -export interface IconifyIcon { +export type IconifyIcon = { body: string left?: number top?: number @@ -18,11 +18,11 @@ export interface IconifyIcon { vFlip?: boolean } -export interface IconifyAlias extends Omit { +export type IconifyAlias = { parent: string -} +} & Omit -export interface IconifyInfo { +export type IconifyInfo = { prefix: string name: string total: number @@ -40,11 +40,11 @@ export interface IconifyInfo { palette?: boolean } -export interface IconifyMetaData { +export type IconifyMetaData = { [key: string]: unknown } -export interface IconifyChars { +export type IconifyChars = { [key: string]: string } @@ -52,4 +52,3 @@ export declare const icons: IconifyJSON export declare const info: IconifyInfo export declare const metadata: IconifyMetaData export declare const chars: IconifyChars - diff --git a/packages/iconify-collections/custom-public/index.js b/packages/iconify-collections/custom-public/index.js index 81c1d0f5c4..aa7f8d5058 100644 --- a/packages/iconify-collections/custom-public/index.js +++ b/packages/iconify-collections/custom-public/index.js @@ -1,9 +1,8 @@ 'use strict' +const chars = require('./chars.json') const icons = require('./icons.json') const info = require('./info.json') const metadata = require('./metadata.json') -const chars = require('./chars.json') module.exports = { icons, info, metadata, chars } - diff --git a/packages/iconify-collections/custom-public/index.mjs b/packages/iconify-collections/custom-public/index.mjs index 6c1108a92d..8e1c022130 100644 --- a/packages/iconify-collections/custom-public/index.mjs +++ b/packages/iconify-collections/custom-public/index.mjs @@ -1,7 +1,6 @@ +import chars from './chars.json' with { type: 'json' } import icons from './icons.json' with { type: 'json' } import info from './info.json' with { type: 'json' } import metadata from './metadata.json' with { type: 'json' } -import chars from './chars.json' with { type: 'json' } - -export { icons, info, metadata, chars } +export { chars, icons, info, metadata } diff --git a/packages/iconify-collections/custom-vender/index.d.ts b/packages/iconify-collections/custom-vender/index.d.ts index ecca5633d4..be2442726c 100644 --- a/packages/iconify-collections/custom-vender/index.d.ts +++ b/packages/iconify-collections/custom-vender/index.d.ts @@ -1,4 +1,4 @@ -export interface IconifyJSON { +export type IconifyJSON = { prefix: string icons: Record aliases?: Record @@ -7,7 +7,7 @@ export interface IconifyJSON { lastModified?: number } -export interface IconifyIcon { +export type IconifyIcon = { body: string left?: number top?: number @@ -18,11 +18,11 @@ export interface IconifyIcon { vFlip?: boolean } -export interface IconifyAlias extends Omit { +export type IconifyAlias = { parent: string -} +} & Omit -export interface IconifyInfo { +export type IconifyInfo = { prefix: string name: string total: number @@ -40,11 +40,11 @@ export interface IconifyInfo { palette?: boolean } -export interface IconifyMetaData { +export type IconifyMetaData = { [key: string]: unknown } -export interface IconifyChars { +export type IconifyChars = { [key: string]: string } @@ -52,4 +52,3 @@ export declare const icons: IconifyJSON export declare const info: IconifyInfo export declare const metadata: IconifyMetaData export declare const chars: IconifyChars - diff --git a/packages/iconify-collections/custom-vender/index.js b/packages/iconify-collections/custom-vender/index.js index 81c1d0f5c4..aa7f8d5058 100644 --- a/packages/iconify-collections/custom-vender/index.js +++ b/packages/iconify-collections/custom-vender/index.js @@ -1,9 +1,8 @@ 'use strict' +const chars = require('./chars.json') const icons = require('./icons.json') const info = require('./info.json') const metadata = require('./metadata.json') -const chars = require('./chars.json') module.exports = { icons, info, metadata, chars } - diff --git a/packages/iconify-collections/custom-vender/index.mjs b/packages/iconify-collections/custom-vender/index.mjs index 6c1108a92d..8e1c022130 100644 --- a/packages/iconify-collections/custom-vender/index.mjs +++ b/packages/iconify-collections/custom-vender/index.mjs @@ -1,7 +1,6 @@ +import chars from './chars.json' with { type: 'json' } import icons from './icons.json' with { type: 'json' } import info from './info.json' with { type: 'json' } import metadata from './metadata.json' with { type: 'json' } -import chars from './chars.json' with { type: 'json' } - -export { icons, info, metadata, chars } +export { chars, icons, info, metadata } diff --git a/packages/iconify-collections/package.json b/packages/iconify-collections/package.json index 3bd7285f1a..07c29f0a07 100644 --- a/packages/iconify-collections/package.json +++ b/packages/iconify-collections/package.json @@ -1,12 +1,12 @@ { "name": "@dify/iconify-collections", - "private": true, "version": "0.0.0-private", + "private": true, "exports": { "./custom-public": { "types": "./custom-public/index.d.ts", - "require": "./custom-public/index.js", - "import": "./custom-public/index.mjs" + "import": "./custom-public/index.mjs", + "require": "./custom-public/index.js" }, "./custom-public/icons.json": "./custom-public/icons.json", "./custom-public/info.json": "./custom-public/info.json", @@ -14,8 +14,8 @@ "./custom-public/chars.json": "./custom-public/chars.json", "./custom-vender": { "types": "./custom-vender/index.d.ts", - "require": "./custom-vender/index.js", - "import": "./custom-vender/index.mjs" + "import": "./custom-vender/index.mjs", + "require": "./custom-vender/index.js" }, "./custom-vender/icons.json": "./custom-vender/icons.json", "./custom-vender/info.json": "./custom-vender/info.json", diff --git a/packages/migrate-no-unchecked-indexed-access/bin/migrate-no-unchecked-indexed-access.js b/packages/migrate-no-unchecked-indexed-access/bin/migrate-no-unchecked-indexed-access.js new file mode 100755 index 0000000000..2f2b0d72a9 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/bin/migrate-no-unchecked-indexed-access.js @@ -0,0 +1,28 @@ +#!/usr/bin/env node + +import { spawnSync } from 'node:child_process' +import fs from 'node:fs' +import path from 'node:path' +import process from 'node:process' +import { fileURLToPath } from 'node:url' + +const packageRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), '..') +const entryFile = path.join(packageRoot, 'dist', 'cli.mjs') + +if (!fs.existsSync(entryFile)) + throw new Error(`Built CLI entry not found at ${entryFile}. Run "pnpm --filter migrate-no-unchecked-indexed-access build" first.`) + +const result = spawnSync( + process.execPath, + [entryFile, ...process.argv.slice(2)], + { + cwd: process.cwd(), + env: process.env, + stdio: 'inherit', + }, +) + +if (result.error) + throw result.error + +process.exit(result.status ?? 1) diff --git a/packages/migrate-no-unchecked-indexed-access/package.json b/packages/migrate-no-unchecked-indexed-access/package.json new file mode 100644 index 0000000000..5da8d4cb50 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/package.json @@ -0,0 +1,22 @@ +{ + "name": "migrate-no-unchecked-indexed-access", + "type": "module", + "version": "0.0.0-private", + "private": true, + "bin": { + "migrate-no-unchecked-indexed-access": "./bin/migrate-no-unchecked-indexed-access.js" + }, + "scripts": { + "build": "vp pack", + "type-check": "tsc" + }, + "dependencies": { + "typescript": "catalog:" + }, + "devDependencies": { + "@dify/tsconfig": "workspace:*", + "@types/node": "catalog:", + "vite": "catalog:", + "vite-plus": "catalog:" + } +} diff --git a/packages/migrate-no-unchecked-indexed-access/src/cli.ts b/packages/migrate-no-unchecked-indexed-access/src/cli.ts new file mode 100644 index 0000000000..99142c388f --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/cli.ts @@ -0,0 +1,46 @@ +import process from 'node:process' +import { runBatchMigrationCommand } from './no-unchecked-indexed-access/run' + +function printUsage() { + console.log(`Usage: + migrate-no-unchecked-indexed-access [options] + +Options: + --project + --batch-size + --batch-iterations + --max-rounds + --verbose`) +} + +async function flushStandardStreams() { + await Promise.all([ + new Promise(resolve => process.stdout.write('', () => resolve())), + new Promise(resolve => process.stderr.write('', () => resolve())), + ]) +} + +async function main() { + const argv = process.argv.slice(2) + if (argv.includes('help') || argv.includes('--help') || argv.includes('-h')) { + printUsage() + return + } + + await runBatchMigrationCommand(argv) +} + +let exitCode = 0 + +try { + await main() + const currentExitCode = process.exitCode + exitCode = typeof currentExitCode === 'number' ? currentExitCode : 0 +} +catch (error) { + console.error(error instanceof Error ? error.message : error) + exitCode = 1 +} + +await flushStandardStreams() +process.exit(exitCode) diff --git a/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts new file mode 100644 index 0000000000..6b79e214bb --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts @@ -0,0 +1,1835 @@ +import fs from 'node:fs/promises' +import path from 'node:path' +import process from 'node:process' +import ts from 'typescript' + +const SUPPORTED_EXTENSIONS = new Set(['.ts', '.tsx', '.mts', '.cts']) +export const SUPPORTED_DIAGNOSTIC_CODES = new Set([2322, 2339, 2345, 2488, 2532, 2538, 2604, 2722, 2769, 2786, 7006, 18047, 18048]) +const DEFAULT_MAX_ITERATIONS = 10 +const ACCESS_DIAGNOSTIC_CODES = new Set([2339, 2532, 18047, 18048]) +const ASSIGNABILITY_DIAGNOSTIC_CODES = new Set([2322, 2345, 2769]) +const parsedConfigCache = new Map() + +type CliOptions = { + files: string[] + maxIterations: number + project: string + useFullProjectRoots?: boolean + verbose: boolean + write: boolean +} + +type TextEdit = { + end: number + expectedText?: string + replacement: string + start: number +} + +type EditTarget + = { expression: ts.Expression, kind: 'expression', sourceFile: ts.SourceFile } + | { end: number, kind: 'direct-edit', replacement: string, sourceFile: ts.SourceFile, start: number } + | { kind: 'shorthand-property', property: ts.ShorthandPropertyAssignment, sourceFile: ts.SourceFile } + +export function parseArgs(argv: string[]): CliOptions { + const options: CliOptions = { + files: [], + maxIterations: DEFAULT_MAX_ITERATIONS, + project: 'tsconfig.json', + verbose: false, + write: false, + } + + for (let i = 0; i < argv.length; i += 1) { + const arg = argv[i] + if (!arg) + continue + + if (arg === '--') + continue + + if (arg === '--write') { + options.write = true + continue + } + + if (arg === '--verbose') { + options.verbose = true + continue + } + + if (arg === '--project') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --project') + + options.project = value + i += 1 + continue + } + + if (arg === '--max-iterations') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --max-iterations') + + const parsed = Number(value) + if (!Number.isInteger(parsed) || parsed <= 0) + throw new Error(`Invalid --max-iterations value: ${value}`) + + options.maxIterations = parsed + i += 1 + continue + } + + if (arg === '--files') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --files') + + options.files.push(...splitFilesArgument(value)) + i += 1 + continue + } + + if (arg.startsWith('--')) + throw new Error(`Unknown option: ${arg}`) + + options.files.push(...splitFilesArgument(arg)) + } + + return options +} + +function splitFilesArgument(value: string): string[] { + return value + .split(',') + .map(item => item.trim()) + .filter(Boolean) +} + +function parseTsConfig(projectPath: string): ts.ParsedCommandLine { + const cached = parsedConfigCache.get(projectPath) + if (cached) + return cached + + const configFile = ts.readConfigFile(projectPath, ts.sys.readFile) + if (configFile.error) + throw new Error(formatDiagnostic(configFile.error)) + + const configDirectory = path.dirname(projectPath) + const parsedConfig = ts.parseJsonConfigFileContent( + configFile.config, + ts.sys, + configDirectory, + undefined, + projectPath, + ) + parsedConfigCache.set(projectPath, parsedConfig) + return parsedConfig +} + +function createMigrationProgram( + rootNames: string[], + parsedConfig: ts.ParsedCommandLine, + fileTexts: Map, + oldProgram?: ts.Program, +): ts.Program { + const compilerHost = ts.createCompilerHost(parsedConfig.options, true) + const originalGetSourceFile = compilerHost.getSourceFile.bind(compilerHost) + + compilerHost.readFile = (fileName) => { + return fileTexts.get(fileName) ?? ts.sys.readFile(fileName) + } + + compilerHost.getSourceFile = (fileName, languageVersion, onError, shouldCreateNewSourceFile) => { + const text = fileTexts.get(fileName) + if (text !== undefined) + return ts.createSourceFile(fileName, text, languageVersion, true) + + return originalGetSourceFile(fileName, languageVersion, onError, shouldCreateNewSourceFile) + } + + return ts.createProgram({ + oldProgram, + host: compilerHost, + options: parsedConfig.options, + projectReferences: parsedConfig.projectReferences, + rootNames, + }) +} + +function isTargetFile(fileName: string): boolean { + const extension = path.extname(fileName) + if (!SUPPORTED_EXTENSIONS.has(extension)) + return false + + if (fileName.endsWith('.d.ts')) + return false + + return !fileName.includes(`${path.sep}.next${path.sep}`) +} + +function normalizeFileName(fileName: string): string { + return path.resolve(fileName) +} + +function isDeclarationSupportFile(fileName: string): boolean { + return fileName.endsWith('.d.ts') +} + +function isSetupSupportFile(fileName: string): boolean { + const baseName = path.basename(fileName) + return baseName === 'vitest.setup.ts' + || baseName === 'vitest.setup.tsx' + || baseName === 'jest.setup.ts' + || baseName === 'jest.setup.tsx' + || baseName === 'setupTests.ts' + || baseName === 'setupTests.tsx' + || baseName === 'test.setup.ts' + || baseName === 'test.setup.tsx' +} + +function getMigrationRootNames( + parsedConfig: ts.ParsedCommandLine, + targetFiles: string[], +): string[] { + const rootNames = new Set(targetFiles) + + for (const fileName of parsedConfig.fileNames.map(normalizeFileName)) { + if (isDeclarationSupportFile(fileName) || isSetupSupportFile(fileName)) + rootNames.add(fileName) + } + + return Array.from(rootNames) +} + +function createFileMatcher(filePatterns: string[]): (fileName: string) => boolean { + if (filePatterns.length === 0) + return () => true + + const patterns = filePatterns.map(pattern => ({ + absolute: normalizeFileName(pattern), + raw: pattern.split(path.sep).join('/'), + })) + return (fileName: string) => { + const normalized = normalizeFileName(fileName) + const unixStyle = normalized.split(path.sep).join('/') + return patterns.some(pattern => normalized === pattern.absolute || unixStyle.endsWith(pattern.raw)) + } +} + +function formatDiagnostic(diagnostic: ts.Diagnostic): string { + const message = ts.flattenDiagnosticMessageText(diagnostic.messageText, '\n') + if (!diagnostic.file || diagnostic.start === undefined) + return message + + const position = diagnostic.file.getLineAndCharacterOfPosition(diagnostic.start) + return `${diagnostic.file.fileName}:${position.line + 1}:${position.character + 1} TS${diagnostic.code}: ${message}` +} + +function ensureTrailingNonNullAssertion(expression: string): string { + const trimmedExpression = expression.trimEnd() + return trimmedExpression.endsWith('!') + ? trimmedExpression + : `${trimmedExpression}!` +} + +function hasOptionalChainDescendant(node: ts.Node): boolean { + let found = false + + const visit = (current: ts.Node) => { + if (found) + return + + if (ts.isOptionalChain(current)) { + found = true + return + } + + current.forEachChild(visit) + } + + visit(node) + return found +} + +function shouldPrintInlineNonNullAssertion(expression: ts.Expression): boolean { + return ts.isOptionalChain(expression) + || (ts.isParenthesizedExpression(expression) && hasOptionalChainDescendant(expression.expression)) +} + +function normalizeOptionalChainNonNullContinuations(text: string): string { + const sourceFile = ts.createSourceFile('normalize.tsx', text, ts.ScriptTarget.Latest, true, ts.ScriptKind.TSX) + const edits: TextEdit[] = [] + + const visit = (node: ts.Node) => { + if ( + ts.isNonNullExpression(node) + && ts.isParenthesizedExpression(node.expression) + && hasOptionalChainDescendant(node.expression.expression) + ) { + edits.push({ + end: node.getEnd(), + replacement: `${node.expression.expression.getText(sourceFile)}!`, + start: node.getStart(sourceFile), + }) + return + } + + node.forEachChild(visit) + } + + visit(sourceFile) + + if (edits.length === 0) + return text + + return applyEdits(text, edits).text +} + +function collapseRepeatedInlineComments(text: string): string { + return text + .split('\n') + .map((line) => { + const commentIndex = line.indexOf('//') + if (commentIndex < 0) + return line + + const prefix = line.slice(0, commentIndex).trimEnd() + const comment = line.slice(commentIndex + 2).trim() + const segments = comment + .split(/\s+\/\/\s+/) + .map(item => item.trim()) + .filter(Boolean) + + if (segments.length < 2) + return line + + const lastSegment = segments[segments.length - 1]! + const stableSegments = segments.slice(0, -1) + const repeatedSameComment = stableSegments.length > 0 + && stableSegments.every(segment => segment === segments[0]) + && (lastSegment === segments[0] || segments[0]!.startsWith(lastSegment) || lastSegment.startsWith(segments[0]!)) + + if (!repeatedSameComment) + return line.replace(/!{2,}$/g, '!') + + const normalizedComment = segments[0]!.replace(/!{2,}$/g, '!') + return prefix ? `${prefix} // ${normalizedComment}` : `// ${normalizedComment}` + }) + .join('\n') +} + +export function normalizeMalformedAssertions(text: string): string { + const normalizedText = text + .replace(/\n(\s*)! (\s*\/\/[^\n]*)\n/g, '! $2\n') + .replace(/\.not!+(?=[.(])/g, '.not') + .replace(/(\(|,\s*)([A-Za-z_$][\w$]*)\s*:\s*any\s*=>/g, '$1($2: any) =>') + .replace(/([,{]\s*)([A-Z_$][\w$]*)!=\{/g, '$1$2={') + .replace(/\b([A-Z_$][\w$]*)!!,/gi, '$1: $1!,') + .replace(/\b([A-Z_$][\w$]*)!!:/gi, '$1:') + .replace(/([,{]\s*)([A-Z_$][\w$]*)!:/gi, '$1$2:') + .replace(/\b(const|let|var)\s+\{([^=\n]+)\}\s*=\s*([^\n;]+)/g, (fullMatch, keyword: string, bindings: string, expression: string) => { + if (!bindings.includes('!')) + return fullMatch + + const normalizedBindings = bindings.replace(/!([,\s}:])/g, '$1') + return `${keyword} {${normalizedBindings}} = ${ensureTrailingNonNullAssertion(expression)}` + }) + + return collapseRepeatedInlineComments(normalizeOptionalChainNonNullContinuations(normalizedText)) +} + +function isExpressionTarget(target: EditTarget): target is Extract { + return target.kind === 'expression' +} + +function createExpressionTarget(expression: ts.Expression): EditTarget { + return { + expression, + kind: 'expression', + sourceFile: expression.getSourceFile(), + } +} + +function createShorthandPropertyTarget(property: ts.ShorthandPropertyAssignment): EditTarget { + return { + kind: 'shorthand-property', + property, + sourceFile: property.getSourceFile(), + } +} + +function createDirectEditTarget( + sourceFile: ts.SourceFile, + start: number, + end: number, + replacement: string, +): EditTarget { + return { + end, + kind: 'direct-edit', + replacement, + sourceFile, + start, + } +} + +function createIterableFallbackReplacement( + expression: ts.Expression, + sourceFile: ts.SourceFile, +): string { + return `(${expression.getText(sourceFile)} ?? [])` +} + +function createIterableFallbackTarget(expression: ts.Expression): EditTarget { + return createDirectEditTarget( + expression.getSourceFile(), + expression.getStart(expression.getSourceFile()), + expression.getEnd(), + createIterableFallbackReplacement(expression, expression.getSourceFile()), + ) +} + +function createArrayLiteralIterableFallbackTarget( + arrayLiteral: ts.ArrayLiteralExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const sourceFile = arrayLiteral.getSourceFile() + const start = arrayLiteral.getStart(sourceFile) + const end = arrayLiteral.getEnd() + const originalText = sourceFile.text.slice(start, end) + const edits: TextEdit[] = [] + + for (const element of arrayLiteral.elements) { + if (!ts.isSpreadElement(element)) + continue + + if (isAlreadyNonNull(element.expression)) + continue + + if (!typeIncludesUndefined(checker.getTypeAtLocation(element.expression))) + continue + + edits.push({ + end: element.expression.getEnd() - start, + replacement: createIterableFallbackReplacement(element.expression, sourceFile), + start: element.expression.getStart(sourceFile) - start, + }) + } + + if (edits.length === 0) + return undefined + + return createDirectEditTarget( + sourceFile, + start, + end, + applyEdits(originalText, edits).text, + ) +} + +function getTokenAtPosition(sourceFile: ts.SourceFile, position: number): ts.Node { + let current: ts.Node = sourceFile + + while (true) { + let next: ts.Node | undefined + current.forEachChild((child) => { + if (!next && position >= child.getFullStart() && position < child.getEnd()) + next = child + }) + + if (!next) + return current + + current = next + } +} + +function findAncestor( + node: ts.Node | undefined, + predicate: (candidate: ts.Node) => candidate is NodeType, +): NodeType | undefined { + let current = node + + while (current) { + if (predicate(current)) + return current + + current = current.parent + } + + return undefined +} + +function findTightestExpression(sourceFile: ts.SourceFile, start: number, end: number): ts.Expression | undefined { + let node: ts.Node | undefined = getTokenAtPosition(sourceFile, start) + + while (node) { + if (ts.isExpression(node)) { + const nodeStart = node.getStart(sourceFile) + const nodeEnd = node.getEnd() + if (nodeStart <= start && end <= nodeEnd) + return node + } + + node = node.parent + } + + return undefined +} + +function isAssignmentOperator(token: ts.SyntaxKind): boolean { + return token >= ts.SyntaxKind.FirstAssignment && token <= ts.SyntaxKind.LastAssignment +} + +function typeIncludesUndefined(type: ts.Type): boolean { + if ((type.flags & ts.TypeFlags.Undefined) !== 0) + return true + + if (!type.isUnion()) + return false + + return type.types.some(typeIncludesUndefined) +} + +function skipOuterExpressions(expression: ts.Expression): ts.Expression { + let current = expression + + while (ts.isParenthesizedExpression(current) || ts.isNonNullExpression(current)) + current = current.expression + + return current +} + +function isAlreadyNonNull(expression: ts.Expression): boolean { + let current = expression + + while (ts.isParenthesizedExpression(current)) + current = current.expression + + return ts.isNonNullExpression(current) +} + +function findAssignmentLikeCandidate( + token: ts.Node, + sourceFile: ts.SourceFile, + start: number, + end: number, +): ts.Expression | undefined { + let current: ts.Node | undefined = token + + while (current) { + if (ts.isVariableDeclaration(current) && current.initializer) + return current.initializer + + if (ts.isPropertyDeclaration(current) && current.initializer) + return current.initializer + + if (ts.isPropertyAssignment(current)) + return current.initializer + + if (ts.isShorthandPropertyAssignment(current)) + return current.name + + if (ts.isParameter(current) && current.initializer) + return current.initializer + + if (ts.isReturnStatement(current) && current.expression) + return current.expression + + if (ts.isBinaryExpression(current) && isAssignmentOperator(current.operatorToken.kind)) + return current.right + + if (ts.isJsxAttribute(current) && current.initializer && ts.isJsxExpression(current.initializer) && current.initializer.expression) + return current.initializer.expression + + if (ts.isJsxSpreadAttribute(current)) + return current.expression + + current = current.parent + } + + return findTightestExpression(sourceFile, start, end) +} + +function findArgumentCandidate( + token: ts.Node, + sourceFile: ts.SourceFile, + start: number, + end: number, +): ts.Expression | undefined { + let current: ts.Node | undefined = token + + while (current) { + if ((ts.isCallExpression(current) || ts.isNewExpression(current)) && current.arguments) { + const argument = current.arguments.find((item) => { + const itemStart = item.getStart(sourceFile) + const itemEnd = item.getEnd() + return itemStart <= start && end <= itemEnd + }) + if (argument) + return argument + } + + current = current.parent + } + + return findTightestExpression(sourceFile, start, end) +} + +function getExpressionFromJsxAttribute(attribute: ts.JsxAttribute): ts.Expression | undefined { + return attribute.initializer && ts.isJsxExpression(attribute.initializer) + ? attribute.initializer.expression + : undefined +} + +function findTargetFromExpression( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const referencedDeclarationTarget = findReferencedDeclarationInitializerTarget(expression, checker) + if (referencedDeclarationTarget) + return referencedDeclarationTarget + + const nestedTarget = findNestedContainerTarget(expression, checker) + if (nestedTarget) + return nestedTarget + + const innerExpression = skipOuterExpressions(expression) + if (ts.isConditionalExpression(innerExpression)) { + return findTargetFromExpression(innerExpression.whenTrue, checker) + ?? findTargetFromExpression(innerExpression.whenFalse, checker) + } + + if ( + ts.isBinaryExpression(innerExpression) + && ( + innerExpression.operatorToken.kind === ts.SyntaxKind.BarBarToken + || innerExpression.operatorToken.kind === ts.SyntaxKind.QuestionQuestionToken + || innerExpression.operatorToken.kind === ts.SyntaxKind.AmpersandAmpersandToken + ) + ) { + return findTargetFromExpression(innerExpression.left, checker) + ?? findTargetFromExpression(innerExpression.right, checker) + } + + if (ts.isArrowFunction(innerExpression) || ts.isFunctionExpression(innerExpression)) { + const functionTarget = findFunctionLikeReturnTarget(innerExpression, checker) + if (functionTarget) + return functionTarget + } + + if (ts.isPropertyAccessExpression(innerExpression)) { + const namedPropertyTarget = findNamedPropertyTarget(innerExpression.expression, innerExpression.name.text, checker) + if (namedPropertyTarget) + return namedPropertyTarget + } + + if (ts.isCallExpression(innerExpression)) { + const collectionCallbackTarget = findCollectionCallbackTarget(innerExpression, checker) + if (collectionCallbackTarget) + return collectionCallbackTarget + + const callbackArgumentTarget = findCallbackArgumentTarget(innerExpression, checker) + if (callbackArgumentTarget) + return callbackArgumentTarget + + const callExpressionTarget = findCallExpressionDeclarationTarget(innerExpression, checker) + if (callExpressionTarget) + return callExpressionTarget + } + + if (!typeIncludesUndefined(checker.getTypeAtLocation(expression))) + return undefined + + return createExpressionTarget(expression) +} + +function findJsxSpreadAttributeTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const spreadAttribute = findAncestor(token, ts.isJsxSpreadAttribute) + if (spreadAttribute) + return findTargetFromExpression(spreadAttribute.expression, checker) + + const openingLikeElement = findAncestor(token, node => + ts.isJsxOpeningElement(node) || ts.isJsxSelfClosingElement(node)) + + if (!openingLikeElement) + return undefined + + for (const attribute of openingLikeElement.attributes.properties) { + if (!ts.isJsxSpreadAttribute(attribute)) + continue + + const target = findTargetFromExpression(attribute.expression, checker) + if (target) + return target + } + + return undefined +} + +function findShorthandPropertyTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const property = findAncestor(token, ts.isShorthandPropertyAssignment) + if (!property) + return undefined + + return typeIncludesUndefined(checker.getTypeAtLocation(property.name)) + ? createShorthandPropertyTarget(property) + : undefined +} + +function findPropertyAssignmentInitializerTarget( + token: ts.Node, + start: number, + checker: ts.TypeChecker, +): EditTarget | undefined { + const propertyAssignment = findAncestor(token, ts.isPropertyAssignment) + if (!propertyAssignment) + return undefined + + const propertyNameStart = propertyAssignment.name.getStart() + const propertyNameEnd = propertyAssignment.name.getEnd() + if (start < propertyNameStart || start >= propertyNameEnd) + return undefined + + const directTarget = findTargetFromExpression(propertyAssignment.initializer, checker) + if (directTarget) + return directTarget + + const nestedTarget = findNestedContainerTarget(propertyAssignment.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (!typeIncludesUndefined(checker.getTypeAtLocation(propertyAssignment.initializer))) + return undefined + + return createExpressionTarget(propertyAssignment.initializer) +} + +function findPropertyAccessExpressionTarget( + token: ts.Node, + start: number, +): EditTarget | undefined { + const propertyAccess = findAncestor(token, ts.isPropertyAccessExpression) + if (!propertyAccess) + return undefined + + if (start >= propertyAccess.name.getStart() && start < propertyAccess.name.getEnd()) + return createExpressionTarget(propertyAccess.expression) + + return undefined +} + +function findUndefinedAccessTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + let current: ts.Node | undefined = token + let bestTarget: EditTarget | undefined + + while (current) { + if (ts.isPropertyAccessExpression(current)) { + const expression = current.expression + if (typeIncludesUndefined(checker.getTypeAtLocation(expression)) && !isAlreadyNonNull(expression)) + bestTarget = createExpressionTarget(expression) + } + + if (ts.isElementAccessExpression(current)) { + const expression = current.expression + if (typeIncludesUndefined(checker.getTypeAtLocation(expression)) && !isAlreadyNonNull(expression)) + bestTarget = createExpressionTarget(expression) + } + + current = current.parent + } + + return bestTarget +} + +function findElementAccessArgumentTarget(token: ts.Node): EditTarget | undefined { + let current = token + let matchingElementAccess: ts.ElementAccessExpression | undefined + + while (current) { + if (ts.isElementAccessExpression(current) && current.argumentExpression) + matchingElementAccess = current + + current = current.parent + } + + if (!matchingElementAccess?.argumentExpression) + return undefined + + return createExpressionTarget(matchingElementAccess.argumentExpression) +} + +function findIterableTarget( + sourceFile: ts.SourceFile, + token: ts.Node, + start: number, + end: number, + checker: ts.TypeChecker, +): EditTarget | undefined { + const arrayLiteral = findAncestor(token, ts.isArrayLiteralExpression) + if (arrayLiteral) { + const arrayLiteralTarget = createArrayLiteralIterableFallbackTarget(arrayLiteral, checker) + if (arrayLiteralTarget) + return arrayLiteralTarget + } + + const spreadElement = findAncestor(token, ts.isSpreadElement) + if (spreadElement && !isAlreadyNonNull(spreadElement.expression)) + return createIterableFallbackTarget(spreadElement.expression) + + const variableDeclaration = findAncestor(token, ts.isVariableDeclaration) + if ( + variableDeclaration?.initializer + && typeIncludesUndefined(checker.getTypeAtLocation(variableDeclaration.initializer)) + && !isAlreadyNonNull(variableDeclaration.initializer) + ) { + return createExpressionTarget(variableDeclaration.initializer) + } + + const binaryExpression = findAncestor(token, ts.isBinaryExpression) + if ( + binaryExpression + && isAssignmentOperator(binaryExpression.operatorToken.kind) + && typeIncludesUndefined(checker.getTypeAtLocation(binaryExpression.right)) + && !isAlreadyNonNull(binaryExpression.right) + ) { + return createExpressionTarget(binaryExpression.right) + } + + return undefined +} + +function findImplicitAnyParameterTarget(token: ts.Node): EditTarget | undefined { + const parameter = findAncestor(token, ts.isParameter) + if (!parameter || parameter.type || !ts.isIdentifier(parameter.name)) + return undefined + + const sourceFile = parameter.getSourceFile() + const replacement = ts.isArrowFunction(parameter.parent) && parameter.parent.parameters.length === 1 + ? `(${parameter.name.getText(sourceFile)}: any)` + : `${parameter.name.getText(sourceFile)}: any` + + return createDirectEditTarget( + sourceFile, + parameter.getStart(sourceFile), + parameter.getEnd(), + replacement, + ) +} + +function getArrayPatternElementTypeText( + element: ts.ArrayBindingElement | ts.Expression, + checker: ts.TypeChecker, +): string { + if (ts.isOmittedExpression(element)) + return 'unknown' + + const targetNode = ts.isBindingElement(element) + ? element.name + : element + + const targetType = checker.getNonNullableType(checker.getTypeAtLocation(targetNode)) + const typeText = checker.typeToString(targetType) + return typeText === 'never' ? 'unknown' : typeText +} + +function createArrayDestructuringReplacement( + sourceFile: ts.SourceFile, + expression: ts.Expression, + elements: readonly (ts.ArrayBindingElement | ts.Expression)[], + checker: ts.TypeChecker, + options?: { + fallbackToEmptyArray?: boolean + }, +): string | undefined { + if (elements.length === 0) + return undefined + + const tupleTypes = elements.map(element => getArrayPatternElementTypeText(element, checker)) + const expressionText = options?.fallbackToEmptyArray + ? `(${expression.getText(sourceFile)} ?? [])` + : `(${expression.getText(sourceFile)})` + return `${expressionText} as [${tupleTypes.join(', ')}]` +} + +function findArrayDestructuringTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const binaryExpression = findAncestor(token, ts.isBinaryExpression) + if (binaryExpression && isAssignmentOperator(binaryExpression.operatorToken.kind) && ts.isArrayLiteralExpression(binaryExpression.left)) { + const replacement = createArrayDestructuringReplacement( + binaryExpression.getSourceFile(), + binaryExpression.right, + binaryExpression.left.elements, + checker, + { + fallbackToEmptyArray: typeIncludesUndefined(checker.getTypeAtLocation(binaryExpression.right)), + }, + ) + if (replacement) { + return createDirectEditTarget( + binaryExpression.getSourceFile(), + binaryExpression.right.getStart(binaryExpression.getSourceFile()), + binaryExpression.right.getEnd(), + replacement, + ) + } + } + + const variableDeclaration = findAncestor(token, ts.isVariableDeclaration) + if (variableDeclaration?.initializer && ts.isArrayBindingPattern(variableDeclaration.name)) { + const replacement = createArrayDestructuringReplacement( + variableDeclaration.getSourceFile(), + variableDeclaration.initializer, + variableDeclaration.name.elements, + checker, + { + fallbackToEmptyArray: typeIncludesUndefined(checker.getTypeAtLocation(variableDeclaration.initializer)), + }, + ) + if (replacement) { + return createDirectEditTarget( + variableDeclaration.getSourceFile(), + variableDeclaration.initializer.getStart(variableDeclaration.getSourceFile()), + variableDeclaration.initializer.getEnd(), + replacement, + ) + } + } + + return undefined +} + +function findVariableDeclarationInitializerTarget( + sourceFile: ts.SourceFile, + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const variableDeclaration = findAncestor(token, ts.isVariableDeclaration) + if (!variableDeclaration?.initializer) + return undefined + + const nestedTarget = findNestedContainerTarget(variableDeclaration.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (!typeIncludesUndefined(checker.getTypeAtLocation(variableDeclaration.initializer))) + return undefined + + return createExpressionTarget(variableDeclaration.initializer) +} + +function getResolvedValueDeclaration( + symbol: ts.Symbol | undefined, + checker: ts.TypeChecker, +): ts.Declaration | undefined { + if (!symbol) + return undefined + + const resolvedSymbol = symbol.flags & ts.SymbolFlags.Alias + ? checker.getAliasedSymbol(symbol) + : symbol + + return resolvedSymbol.valueDeclaration ?? resolvedSymbol.declarations?.[0] +} + +function getFunctionLikeDeclaration( + declaration: ts.Declaration, +): ts.FunctionLikeDeclarationBase | undefined { + if ( + ts.isFunctionDeclaration(declaration) + || ts.isMethodDeclaration(declaration) + || ts.isFunctionExpression(declaration) + || ts.isArrowFunction(declaration) + ) { + return declaration + } + + if ( + ts.isVariableDeclaration(declaration) + && declaration.initializer + && (ts.isArrowFunction(declaration.initializer) || ts.isFunctionExpression(declaration.initializer)) + ) { + return declaration.initializer + } + + return undefined +} + +function getPropertyNameText(name: ts.PropertyName | ts.BindingName): string | undefined { + if (ts.isIdentifier(name) || ts.isStringLiteral(name) || ts.isNumericLiteral(name)) + return name.text + + return undefined +} + +function getCallExpressionPropertyAccess(callExpression: ts.CallExpression): ts.PropertyAccessExpression | undefined { + const callee = skipOuterExpressions(callExpression.expression) + return ts.isPropertyAccessExpression(callee) ? callee : undefined +} + +function getFunctionExpressionArgument(callExpression: ts.CallExpression, index = 0): ts.ArrowFunction | ts.FunctionExpression | undefined { + const callback = callExpression.arguments[index] + return callback && (ts.isArrowFunction(callback) || ts.isFunctionExpression(callback)) + ? callback + : undefined +} + +function findTargetInFunctionBody( + body: ts.ConciseBody, + resolveExpression: (expression: ts.Expression) => EditTarget | undefined, +): EditTarget | undefined { + if (ts.isBlock(body)) { + for (const expression of findReturnStatementExpressions(body)) { + const target = resolveExpression(expression) + if (target) + return target + } + + return undefined + } + + return resolveExpression(body) +} + +function getParameterCollectionExpression( + declaration: ts.ParameterDeclaration, +): ts.Expression | undefined { + const functionLikeDeclaration = declaration.parent + if ( + !(ts.isArrowFunction(functionLikeDeclaration) || ts.isFunctionExpression(functionLikeDeclaration)) + || !ts.isCallExpression(functionLikeDeclaration.parent) + || functionLikeDeclaration.parent.arguments[0] !== functionLikeDeclaration + ) { + return undefined + } + + const callee = getCallExpressionPropertyAccess(functionLikeDeclaration.parent) + return callee?.expression +} + +function findObjectLiteralNamedPropertyTarget( + objectLiteral: ts.ObjectLiteralExpression, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + for (const property of objectLiteral.properties) { + if (ts.isSpreadAssignment(property)) + continue + + if (ts.isShorthandPropertyAssignment(property) && property.name.text === propertyName) + return createShorthandPropertyTarget(property) + + if (ts.isPropertyAssignment(property)) { + const currentPropertyName = getPropertyNameText(property.name) + if (currentPropertyName !== propertyName) + continue + + return findTargetFromExpression(property.initializer, checker) + ?? createExpressionTarget(property.initializer) + } + } + + return undefined +} + +function findFunctionLikeNamedReturnTarget( + declaration: ts.FunctionLikeDeclarationBase, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + if (!declaration.body) + return undefined + + return findTargetInFunctionBody( + declaration.body, + expression => findNamedPropertyTarget(expression, propertyName, checker), + ) +} + +function findCollectionPropertyTarget( + expression: ts.Expression, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + + if (ts.isIdentifier(innerExpression)) + return findNamedPropertyTarget(innerExpression, propertyName, checker) + + if (!ts.isCallExpression(innerExpression)) + return undefined + + const callee = getCallExpressionPropertyAccess(innerExpression) + if (!callee) + return undefined + + if (callee.name.text === 'map' || callee.name.text === 'flatMap') { + const callback = getFunctionExpressionArgument(innerExpression) + if (!callback) + return undefined + + return findTargetInFunctionBody( + callback.body, + returnedExpression => findNamedPropertyTarget(returnedExpression, propertyName, checker), + ) + } + + if (callee.name.text === 'filter') + return findCollectionPropertyTarget(callee.expression, propertyName, checker) + + return undefined +} + +function findNamedPropertyTarget( + expression: ts.Expression, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + + if (ts.isObjectLiteralExpression(innerExpression)) + return findObjectLiteralNamedPropertyTarget(innerExpression, propertyName, checker) + + if (ts.isIdentifier(innerExpression)) { + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(innerExpression), checker) + if (!declaration) + return undefined + + if (ts.isParameter(declaration)) { + const collectionExpression = getParameterCollectionExpression(declaration) + if (collectionExpression) + return findCollectionPropertyTarget(collectionExpression, propertyName, checker) + } + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (functionLikeDeclaration) + return findFunctionLikeNamedReturnTarget(functionLikeDeclaration, propertyName, checker) + + if (ts.isVariableDeclaration(declaration) && declaration.initializer) + return findNamedPropertyTarget(declaration.initializer, propertyName, checker) + + return undefined + } + + if (ts.isCallExpression(innerExpression)) { + const collectionPropertyTarget = findCollectionPropertyTarget(innerExpression, propertyName, checker) + if (collectionPropertyTarget) + return collectionPropertyTarget + + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(skipOuterExpressions(innerExpression.expression)), checker) + if (!declaration) + return undefined + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (!functionLikeDeclaration) + return undefined + + return findFunctionLikeNamedReturnTarget(functionLikeDeclaration, propertyName, checker) + } + + return undefined +} + +function findReturnStatementExpressions(node: ts.Node): ts.Expression[] { + const expressions: ts.Expression[] = [] + + const visit = (current: ts.Node) => { + if ( + current !== node + && ( + ts.isArrowFunction(current) + || ts.isFunctionExpression(current) + || ts.isFunctionDeclaration(current) + || ts.isMethodDeclaration(current) + ) + ) { + return + } + + if (ts.isReturnStatement(current) && current.expression) + expressions.push(current.expression) + + current.forEachChild(visit) + } + + visit(node) + return expressions +} + +function findFunctionLikeReturnTarget( + declaration: ts.FunctionLikeDeclarationBase, + checker: ts.TypeChecker, +): EditTarget | undefined { + if (!declaration.body) + return undefined + + return findTargetInFunctionBody( + declaration.body, + expression => findTargetFromExpression(expression, checker), + ) +} + +function findCallExpressionDeclarationTarget( + callExpression: ts.CallExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(skipOuterExpressions(callExpression.expression)), checker) + if (!declaration) + return undefined + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (!functionLikeDeclaration) + return undefined + + return findFunctionLikeReturnTarget(functionLikeDeclaration, checker) +} + +function findCallbackArgumentTarget( + callExpression: ts.CallExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const callee = skipOuterExpressions(callExpression.expression) + const calleeName = ts.isIdentifier(callee) ? callee.text : getCallExpressionPropertyAccess(callExpression)?.name.text + + if (calleeName !== 'useCallback' && calleeName !== 'useMemo') + return undefined + + const callback = getFunctionExpressionArgument(callExpression) + if (!callback) + return undefined + + return findFunctionLikeReturnTarget(callback, checker) +} + +function findReferencedDeclarationInitializerTarget( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + if (!ts.isIdentifier(innerExpression)) + return undefined + + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(innerExpression), checker) + if (!declaration) + return undefined + + if (ts.isBindingElement(declaration)) { + const propertyName = declaration.propertyName + ? getPropertyNameText(declaration.propertyName) + : getPropertyNameText(declaration.name) + + const variableDeclaration = declaration.parent.parent + if (propertyName && ts.isVariableDeclaration(variableDeclaration) && variableDeclaration.initializer) { + const namedPropertyTarget = findNamedPropertyTarget(variableDeclaration.initializer, propertyName, checker) + if (namedPropertyTarget) + return namedPropertyTarget + } + } + + if (ts.isParameter(declaration)) { + const collectionExpression = getParameterCollectionExpression(declaration) + if (collectionExpression) { + const collectionTarget = findTargetFromExpression(collectionExpression, checker) + if (collectionTarget) + return collectionTarget + } + } + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (functionLikeDeclaration) { + const functionTarget = findFunctionLikeReturnTarget(functionLikeDeclaration, checker) + if (functionTarget) + return functionTarget + } + + if (!ts.isVariableDeclaration(declaration) || !declaration.initializer) + return undefined + + const collectionCallbackTarget = findCollectionCallbackTarget(declaration.initializer, checker) + if (collectionCallbackTarget) + return collectionCallbackTarget + + const initializerTarget = findTargetFromExpression(declaration.initializer, checker) + if (initializerTarget) + return initializerTarget + + const nestedTarget = findNestedContainerTarget(declaration.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (!typeIncludesUndefined(checker.getTypeAtLocation(declaration.initializer))) + return undefined + + return createExpressionTarget(declaration.initializer) +} + +function findCollectionCallbackTarget( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + if (!ts.isCallExpression(innerExpression)) + return undefined + + const callee = getCallExpressionPropertyAccess(innerExpression) + if (!callee) + return undefined + + if (callee.name.text !== 'map' && callee.name.text !== 'flatMap') + return undefined + + const callback = getFunctionExpressionArgument(innerExpression) + if (!callback) + return undefined + + return findFunctionLikeReturnTarget(callback, checker) +} + +function findJsxComponentDeclarationTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const openingLikeElement = findAncestor(token, node => + ts.isJsxOpeningElement(node) || ts.isJsxSelfClosingElement(node)) + if (!openingLikeElement) + return undefined + + const tagName = openingLikeElement.tagName + if (!ts.isIdentifier(tagName)) + return undefined + + const symbol = checker.getSymbolAtLocation(tagName) + const declaration = symbol?.valueDeclaration + if (!declaration || !ts.isVariableDeclaration(declaration) || !declaration.initializer) + return undefined + + if (!typeIncludesUndefined(checker.getTypeAtLocation(declaration.initializer))) + return undefined + + return createExpressionTarget(declaration.initializer) +} + +function findObjectLiteralPropertyTarget( + objectLiteral: ts.ObjectLiteralExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + for (const property of objectLiteral.properties) { + if (ts.isSpreadAssignment(property)) { + const directTarget = findTargetFromExpression(property.expression, checker) + if (directTarget) + return directTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(property.expression))) + return createExpressionTarget(property.expression) + continue + } + + if (ts.isShorthandPropertyAssignment(property)) { + if (typeIncludesUndefined(checker.getTypeAtLocation(property.name))) + return createShorthandPropertyTarget(property) + continue + } + + if (ts.isPropertyAssignment(property)) { + const directTarget = findTargetFromExpression(property.initializer, checker) + if (directTarget) + return directTarget + + const nestedTarget = findNestedContainerTarget(property.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(property.initializer))) + return createExpressionTarget(property.initializer) + } + } + + return undefined +} + +function findArrayLiteralElementTarget( + arrayLiteral: ts.ArrayLiteralExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const iterableFallbackTarget = createArrayLiteralIterableFallbackTarget(arrayLiteral, checker) + if (iterableFallbackTarget) + return iterableFallbackTarget + + for (const element of arrayLiteral.elements) { + if (ts.isSpreadElement(element)) { + const directTarget = findTargetFromExpression(element.expression, checker) + if (directTarget) + return directTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(element.expression))) + return createExpressionTarget(element.expression) + continue + } + + const directTarget = findTargetFromExpression(element, checker) + if (directTarget) + return directTarget + + const nestedTarget = findNestedContainerTarget(element, checker) + if (nestedTarget) + return nestedTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(element))) + return createExpressionTarget(element) + } + + for (let index = arrayLiteral.elements.length - 1; index >= 0; index -= 1) { + const element = arrayLiteral.elements[index] + if (!element) + continue + + if (ts.isSpreadElement(element)) + continue + + if (!isAlreadyNonNull(element)) + return createExpressionTarget(element) + } + + return undefined +} + +function findNestedContainerTarget( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + if (ts.isObjectLiteralExpression(innerExpression)) + return findObjectLiteralPropertyTarget(innerExpression, checker) + + if (ts.isArrayLiteralExpression(innerExpression)) + return findArrayLiteralElementTarget(innerExpression, checker) + + return undefined +} + +function findAccessDiagnosticTarget( + sourceFile: ts.SourceFile, + token: ts.Node, + start: number, + end: number, + checker: ts.TypeChecker, +): EditTarget | undefined { + const directExpression = findTightestExpression(sourceFile, start, end) + if (directExpression) { + if (typeIncludesUndefined(checker.getTypeAtLocation(directExpression)) && !isAlreadyNonNull(directExpression)) + return createExpressionTarget(directExpression) + + const referencedDeclarationTarget = findReferencedDeclarationInitializerTarget(directExpression, checker) + if (referencedDeclarationTarget && isExpressionTarget(referencedDeclarationTarget) && !isAlreadyNonNull(referencedDeclarationTarget.expression)) + return referencedDeclarationTarget + } + + const bindingPatternTarget = findVariableDeclarationInitializerTarget(sourceFile, token, checker) + if (bindingPatternTarget && isExpressionTarget(bindingPatternTarget) && !isAlreadyNonNull(bindingPatternTarget.expression)) + return bindingPatternTarget + + const accessTarget = findUndefinedAccessTarget(token, checker) + if (accessTarget && isExpressionTarget(accessTarget) && !isAlreadyNonNull(accessTarget.expression)) + return accessTarget + + const propertyAccessTarget = findPropertyAccessExpressionTarget(token, start) + if (propertyAccessTarget && isExpressionTarget(propertyAccessTarget) && !isAlreadyNonNull(propertyAccessTarget.expression)) + return propertyAccessTarget + + return undefined +} + +function findDiagnosticCandidate( + sourceFile: ts.SourceFile, + token: ts.Node, + start: number, + end: number, + diagnosticCode: number, + checker: ts.TypeChecker, +): ts.Expression | undefined { + if (diagnosticCode === 2322) { + const directExpression = findTightestExpression(sourceFile, start, end) + if (directExpression && typeIncludesUndefined(checker.getTypeAtLocation(directExpression))) + return directExpression + + return findAssignmentLikeCandidate(token, sourceFile, start, end) + } + + if (diagnosticCode === 2345) + return findArgumentCandidate(token, sourceFile, start, end) + + if (diagnosticCode === 2722) { + const current = findTightestExpression(sourceFile, start, end) + if (current && ts.isCallExpression(current)) + return current.expression + + return findTightestExpression(sourceFile, start, end) + } + + return findTightestExpression(sourceFile, start, end) +} + +function resolveEditTarget( + sourceFile: ts.SourceFile, + diagnostic: ts.DiagnosticWithLocation, + checker: ts.TypeChecker, +): EditTarget | undefined { + const start = diagnostic.start + const end = diagnostic.start + diagnostic.length + const token = getTokenAtPosition(sourceFile, start) + + const shorthandTarget = findShorthandPropertyTarget(token, checker) + if (shorthandTarget) + return shorthandTarget + + const propertyAssignmentTarget = findPropertyAssignmentInitializerTarget(token, start, checker) + if (propertyAssignmentTarget) + return propertyAssignmentTarget + + const jsxSpreadTarget = findJsxSpreadAttributeTarget(token, checker) + if (jsxSpreadTarget && isExpressionTarget(jsxSpreadTarget) && !isAlreadyNonNull(jsxSpreadTarget.expression)) + return jsxSpreadTarget + + const jsxAttribute = findAncestor(token, ts.isJsxAttribute) + const jsxExpression = jsxAttribute ? getExpressionFromJsxAttribute(jsxAttribute) : undefined + + if ( + ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) + && jsxExpression + && typeIncludesUndefined(checker.getTypeAtLocation(jsxExpression)) + && !isAlreadyNonNull(jsxExpression) + ) { + return findTargetFromExpression(jsxExpression, checker) + ?? createExpressionTarget(jsxExpression) + } + + if (ACCESS_DIAGNOSTIC_CODES.has(diagnostic.code)) + return findAccessDiagnosticTarget(sourceFile, token, start, end, checker) + + if (diagnostic.code === 2322 || diagnostic.code === 2488) { + const arrayDestructuringTarget = findArrayDestructuringTarget(token, checker) + if (arrayDestructuringTarget) + return arrayDestructuringTarget + } + + if (diagnostic.code === 2538) { + const elementAccessTarget = findElementAccessArgumentTarget(token) + if (elementAccessTarget && isExpressionTarget(elementAccessTarget) && !isAlreadyNonNull(elementAccessTarget.expression)) + return elementAccessTarget + } + + if (diagnostic.code === 7006) + return findImplicitAnyParameterTarget(token) + + if (diagnostic.code === 2488) { + const iterableTarget = findIterableTarget(sourceFile, token, start, end, checker) + if (iterableTarget && (!isExpressionTarget(iterableTarget) || !isAlreadyNonNull(iterableTarget.expression))) + return iterableTarget + } + + if (diagnostic.code === 2604 || diagnostic.code === 2786) { + const jsxComponentTarget = findJsxComponentDeclarationTarget(token, checker) + if (jsxComponentTarget && isExpressionTarget(jsxComponentTarget) && !isAlreadyNonNull(jsxComponentTarget.expression)) + return jsxComponentTarget + } + + const candidate = findDiagnosticCandidate(sourceFile, token, start, end, diagnostic.code, checker) + + if (!candidate) { + return jsxExpression && !isAlreadyNonNull(jsxExpression) + ? createExpressionTarget(jsxExpression) + : undefined + } + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code)) { + if ( + diagnostic.code === 2345 + && typeIncludesUndefined(checker.getTypeAtLocation(candidate)) + && !isAlreadyNonNull(candidate) + && ( + ts.isIdentifier(candidate) + || ts.isElementAccessExpression(candidate) + || ts.isPropertyAccessExpression(candidate) + ) + ) { + return createExpressionTarget(candidate) + } + + const referencedDeclarationTarget = findReferencedDeclarationInitializerTarget(candidate, checker) + if (referencedDeclarationTarget && isExpressionTarget(referencedDeclarationTarget) && !isAlreadyNonNull(referencedDeclarationTarget.expression)) + return referencedDeclarationTarget + + const collectionCallbackTarget = findCollectionCallbackTarget(candidate, checker) + if (collectionCallbackTarget && isExpressionTarget(collectionCallbackTarget) && !isAlreadyNonNull(collectionCallbackTarget.expression)) + return collectionCallbackTarget + } + + const targetFromCandidate = findTargetFromExpression(candidate, checker) + if (targetFromCandidate && (!isExpressionTarget(targetFromCandidate) || !isAlreadyNonNull(targetFromCandidate.expression))) + return targetFromCandidate + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) && (ts.isArrowFunction(candidate) || ts.isFunctionExpression(candidate))) { + const functionTarget = findFunctionLikeReturnTarget(candidate, checker) + if (functionTarget && isExpressionTarget(functionTarget) && !isAlreadyNonNull(functionTarget.expression)) + return functionTarget + } + + const nestedContainerTarget = findNestedContainerTarget(candidate, checker) + if (nestedContainerTarget) + return nestedContainerTarget + + if (isAlreadyNonNull(candidate)) + return undefined + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) && ts.isObjectLiteralExpression(candidate)) { + const objectLiteralTarget = findObjectLiteralPropertyTarget(candidate, checker) + if (objectLiteralTarget) + return objectLiteralTarget + } + + if (diagnostic.code === 2322) { + const declarationInitializerTarget = findVariableDeclarationInitializerTarget(sourceFile, token, checker) + if (declarationInitializerTarget && isExpressionTarget(declarationInitializerTarget) && !isAlreadyNonNull(declarationInitializerTarget.expression)) + return declarationInitializerTarget + } + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) && !typeIncludesUndefined(checker.getTypeAtLocation(candidate))) + return undefined + + return createExpressionTarget(candidate) +} + +function createEditForTarget( + target: EditTarget, + printer: ts.Printer, +): TextEdit { + const sourceFile = target.sourceFile + + if (target.kind === 'direct-edit') { + return { + end: target.end, + expectedText: sourceFile.text.slice(target.start, target.end), + replacement: target.replacement, + start: target.start, + } + } + + if (target.kind === 'shorthand-property') { + const name = target.property.name + const nonNullName = printer.printNode( + ts.EmitHint.Expression, + ts.factory.createNonNullExpression(name), + sourceFile, + ) + return { + end: target.property.getEnd(), + expectedText: sourceFile.text.slice(target.property.getStart(sourceFile), target.property.getEnd()), + replacement: `${name.getText(sourceFile)}: ${nonNullName}`, + start: target.property.getStart(sourceFile), + } + } + + const replacement = shouldPrintInlineNonNullAssertion(target.expression) + ? `${target.expression.getText(sourceFile)}!` + : printer.printNode( + ts.EmitHint.Expression, + ts.factory.createNonNullExpression(target.expression), + sourceFile, + ) + + return { + end: target.expression.getEnd(), + expectedText: sourceFile.text.slice(target.expression.getStart(sourceFile), target.expression.getEnd()), + replacement, + start: target.expression.getStart(sourceFile), + } +} + +function hasOverlap(existingEdits: TextEdit[], nextEdit: TextEdit): boolean { + return existingEdits.some(edit => nextEdit.start < edit.end && edit.start < nextEdit.end) +} + +function applyEdits(text: string, edits: TextEdit[]): { appliedEditCount: number, text: string } { + let currentText = text + let appliedEditCount = 0 + + for (const edit of edits.sort((left, right) => right.start - left.start)) { + if (edit.replacement.length > currentText.length * 4) + continue + + try { + currentText = `${currentText.slice(0, edit.start)}${edit.replacement}${currentText.slice(edit.end)}` + appliedEditCount += 1 + } + catch { + continue + } + } + + return { + appliedEditCount, + text: currentText, + } +} + +function isValidEditRange(text: string, edit: TextEdit): boolean { + return Number.isInteger(edit.start) + && Number.isInteger(edit.end) + && edit.start >= 0 + && edit.end >= edit.start + && edit.end <= text.length +} + +function filterApplicableEdits(text: string, edits: TextEdit[]): TextEdit[] { + return edits.filter(edit => isValidEditRange(text, edit) && (!edit.expectedText || text.slice(edit.start, edit.end) === edit.expectedText)) +} + +export async function runMigration(options: CliOptions) { + const projectPath = path.resolve(process.cwd(), options.project) + const parsedConfig = parseTsConfig(projectPath) + const matchesRequestedFile = createFileMatcher(options.files) + const targetFiles = parsedConfig.fileNames + .map(normalizeFileName) + .filter(isTargetFile) + .filter(matchesRequestedFile) + + if (targetFiles.length === 0) { + console.error('No matching TypeScript source files found.') + process.exitCode = 1 + return { converged: false, totalEdits: 0 } + } + + const fileTexts = new Map() + const printer = ts.createPrinter() + const migrationRootNames = options.useFullProjectRoots + ? parsedConfig.fileNames.map(normalizeFileName) + : getMigrationRootNames(parsedConfig, targetFiles) + + let totalEdits = 0 + let converged = false + let previousProgram: ts.Program | undefined + + for (let iteration = 1; iteration <= options.maxIterations; iteration += 1) { + const program = createMigrationProgram(migrationRootNames, parsedConfig, fileTexts, previousProgram) + const checker = program.getTypeChecker() + const editsByFile = new Map() + + for (const fileName of targetFiles) { + const sourceFile = program.getSourceFile(fileName) + if (!sourceFile) + continue + + const diagnostics = program + .getSemanticDiagnostics(sourceFile) + .filter((diagnostic): diagnostic is ts.DiagnosticWithLocation => { + return diagnostic.file !== undefined + && diagnostic.start !== undefined + && diagnostic.length !== undefined + && SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code) + }) + + if (options.verbose && diagnostics.length > 0) + console.log(`file ${path.relative(process.cwd(), fileName)}: ${diagnostics.length} supported diagnostic(s)`) + + for (const diagnostic of diagnostics) { + const target = resolveEditTarget(sourceFile, diagnostic, checker) + if (!target) { + if (options.verbose) + console.log(`unresolved ${formatDiagnostic(diagnostic)}`) + continue + } + + const editFileName = target.sourceFile.fileName + const edit = createEditForTarget(target, printer) + const existing = editsByFile.get(editFileName) ?? [] + if (hasOverlap(existing, edit)) + continue + + existing.push(edit) + editsByFile.set(editFileName, existing) + + if (options.verbose) { + const position = target.sourceFile.getLineAndCharacterOfPosition(edit.start) + console.log(`iter ${iteration}: ${path.relative(process.cwd(), editFileName)}:${position.line + 1}:${position.character + 1} -> add !`) + } + } + } + + if (editsByFile.size === 0) { + console.log(`No more supported diagnostics after ${iteration - 1} iteration(s).`) + converged = true + break + } + + let iterationEditCount = 0 + + for (const [fileName, edits] of editsByFile) { + const currentText = fileTexts.get(fileName) ?? await fs.readFile(fileName, 'utf8') + const applicableEdits = filterApplicableEdits(currentText, edits) + if (applicableEdits.length === 0) + continue + + const { appliedEditCount, text: editedText } = applyEdits(currentText, applicableEdits) + if (appliedEditCount === 0) + continue + + const nextText = normalizeMalformedAssertions(editedText) + if (nextText === currentText) { + if (options.verbose) { + const firstEdit = applicableEdits[0] + console.log(`iter ${iteration}: no-op after normalization for ${path.relative(process.cwd(), fileName)}:${firstEdit?.start ?? 0} ${JSON.stringify(firstEdit ? currentText.slice(firstEdit.start, firstEdit.end) : '')} -> ${JSON.stringify(firstEdit?.replacement ?? '')}`) + } + continue + } + + fileTexts.set(fileName, nextText) + iterationEditCount += appliedEditCount + } + + totalEdits += iterationEditCount + console.log(`Iteration ${iteration}: ${iterationEditCount} edit(s) across ${editsByFile.size} file(s).`) + previousProgram = program + } + + if (totalEdits === 0) { + console.log('No supported noUncheckedIndexedAccess-style diagnostics were migrated.') + return { converged, totalEdits } + } + + if (!options.write) { + if (!converged) + console.log(`Stopped after reaching --max-iterations=${options.maxIterations}.`) + + console.log(`Dry run complete. ${totalEdits} edit(s) are ready. Re-run with --write to apply them.`) + return { converged, totalEdits } + } + + const changedFiles = Array.from(fileTexts.entries()) + await Promise.all(changedFiles.map(async ([fileName, text]) => { + await fs.writeFile(fileName, text) + })) + + if (!converged) + console.log(`Stopped after reaching --max-iterations=${options.maxIterations}.`) + + console.log(`Wrote ${totalEdits} edit(s) to ${changedFiles.length} file(s).`) + return { converged, totalEdits } +} + +export async function runMigrationCommand(argv: string[]) { + await runMigration(parseArgs(argv)) +} diff --git a/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts new file mode 100644 index 0000000000..d3b88736fc --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts @@ -0,0 +1,51 @@ +import fs from 'node:fs/promises' +import path from 'node:path' +import process from 'node:process' +import { normalizeMalformedAssertions } from './migrate' + +const ROOT = process.cwd() +const EXTENSIONS = new Set(['.ts', '.tsx']) + +async function collectFiles(directory: string): Promise { + const entries = await fs.readdir(directory, { withFileTypes: true }) + const files: string[] = [] + + for (const entry of entries) { + if (entry.name === 'node_modules' || entry.name === '.next') + continue + + const absolutePath = path.join(directory, entry.name) + if (entry.isDirectory()) { + files.push(...await collectFiles(absolutePath)) + continue + } + + if (!EXTENSIONS.has(path.extname(entry.name))) + continue + + files.push(absolutePath) + } + + return files +} + +async function main() { + const files = await collectFiles(ROOT) + let changedFileCount = 0 + + await Promise.all(files.map(async (fileName) => { + const currentText = await fs.readFile(fileName, 'utf8') + const nextText = normalizeMalformedAssertions(currentText) + if (nextText === currentText) + return + + await fs.writeFile(fileName, nextText) + changedFileCount += 1 + })) + + console.log(`Normalized malformed assertion syntax in ${changedFileCount} file(s).`) +} + +export async function runNormalizeCommand(_argv: string[]) { + await main() +} diff --git a/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts new file mode 100644 index 0000000000..ad655e4f11 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts @@ -0,0 +1,325 @@ +import { execFile } from 'node:child_process' +import { createHash } from 'node:crypto' +import fs from 'node:fs/promises' +import os from 'node:os' +import path from 'node:path' +import process from 'node:process' +import { promisify } from 'node:util' +import { runMigration, SUPPORTED_DIAGNOSTIC_CODES } from './migrate' + +const execFileAsync = promisify(execFile) +const DIAGNOSTIC_PATTERN = /^(.+?\.(?:ts|tsx))\((\d+),(\d+)\): error TS(\d+): (.+)$/ +const DEFAULT_BATCH_SIZE = 100 +const DEFAULT_BATCH_ITERATIONS = 5 +const DEFAULT_MAX_ROUNDS = 20 +const TYPECHECK_CACHE_DIR = path.join(os.tmpdir(), 'migrate-no-unchecked-indexed-access') + +type CliOptions = { + batchIterations: number + batchSize: number + maxRounds: number + project: string + verbose: boolean +} + +type DiagnosticEntry = { + code: number + fileName: string + line: number + message: string +} + +function parseArgs(argv: string[]): CliOptions { + const options: CliOptions = { + batchIterations: DEFAULT_BATCH_ITERATIONS, + batchSize: DEFAULT_BATCH_SIZE, + maxRounds: DEFAULT_MAX_ROUNDS, + project: 'tsconfig.json', + verbose: false, + } + + for (let i = 0; i < argv.length; i += 1) { + const arg = argv[i] + + if (arg === '--') + continue + + if (arg === '--verbose') { + options.verbose = true + continue + } + + if (arg === '--project') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --project') + + options.project = value + i += 1 + continue + } + + if (arg === '--batch-size') { + const value = Number(argv[i + 1]) + if (!Number.isInteger(value) || value <= 0) + throw new Error('Invalid value for --batch-size') + + options.batchSize = value + i += 1 + continue + } + + if (arg === '--batch-iterations') { + const value = Number(argv[i + 1]) + if (!Number.isInteger(value) || value <= 0) + throw new Error('Invalid value for --batch-iterations') + + options.batchIterations = value + i += 1 + continue + } + + if (arg === '--max-rounds') { + const value = Number(argv[i + 1]) + if (!Number.isInteger(value) || value <= 0) + throw new Error('Invalid value for --max-rounds') + + options.maxRounds = value + i += 1 + continue + } + + throw new Error(`Unknown option: ${arg}`) + } + + return options +} + +function getTypeCheckBuildInfoPath(projectPath: string): string { + const hash = createHash('sha1') + .update(projectPath) + .digest('hex') + .slice(0, 16) + + return path.join(TYPECHECK_CACHE_DIR, `${hash}.tsbuildinfo`) +} + +async function runTypeCheck( + project: string, + options?: { + incremental?: boolean + }, +): Promise<{ diagnostics: DiagnosticEntry[], exitCode: number, rawOutput: string }> { + const projectPath = path.resolve(process.cwd(), project) + const projectDirectory = path.dirname(projectPath) + const buildInfoPath = getTypeCheckBuildInfoPath(projectPath) + const incremental = options?.incremental ?? true + + await fs.mkdir(TYPECHECK_CACHE_DIR, { recursive: true }) + + const tscArgs = ['exec', 'tsc', '--noEmit', '--pretty', 'false'] + if (incremental) { + tscArgs.push('--incremental', '--tsBuildInfoFile', buildInfoPath) + } + else { + tscArgs.push('--incremental', 'false') + } + tscArgs.push('--project', projectPath) + + try { + const { stdout, stderr } = await execFileAsync('pnpm', tscArgs, { + cwd: projectDirectory, + env: { + ...process.env, + NODE_OPTIONS: process.env.NODE_OPTIONS ?? '--max-old-space-size=8192', + }, + maxBuffer: 1024 * 1024 * 32, + }) + + const rawOutput = `${stdout}${stderr}`.trim() + return { + diagnostics: parseDiagnostics(rawOutput, projectDirectory), + exitCode: 0, + rawOutput, + } + } + catch (error) { + const exitCode = typeof error === 'object' && error && 'code' in error && typeof error.code === 'number' + ? error.code + : 1 + const stdout = typeof error === 'object' && error && 'stdout' in error && typeof error.stdout === 'string' + ? error.stdout + : '' + const stderr = typeof error === 'object' && error && 'stderr' in error && typeof error.stderr === 'string' + ? error.stderr + : '' + const rawOutput = `${stdout}${stderr}`.trim() + + return { + diagnostics: parseDiagnostics(rawOutput, projectDirectory), + exitCode, + rawOutput, + } + } +} + +function parseDiagnostics(rawOutput: string, projectDirectory: string): DiagnosticEntry[] { + return rawOutput + .split('\n') + .map(line => line.trim()) + .flatMap((line) => { + const match = line.match(DIAGNOSTIC_PATTERN) + if (!match) + return [] + + return [{ + code: Number(match[4]), + fileName: path.resolve(projectDirectory, match[1]!), + line: Number(match[2]), + message: match[5] ?? '', + }] + }) +} + +function summarizeCodes(diagnostics: DiagnosticEntry[]): string { + const counts = new Map() + for (const diagnostic of diagnostics) + counts.set(diagnostic.code, (counts.get(diagnostic.code) ?? 0) + 1) + + return Array.from(counts.entries()) + .sort((left, right) => right[1] - left[1]) + .slice(0, 8) + .map(([code, count]) => `TS${code}:${count}`) + .join(', ') +} + +function chunk(items: T[], size: number): T[][] { + const batches: T[][] = [] + for (let i = 0; i < items.length; i += size) + batches.push(items.slice(i, i + size)) + + return batches +} + +async function runBatchMigration(options: CliOptions) { + for (let round = 1; round <= options.maxRounds; round += 1) { + const { diagnostics, exitCode, rawOutput } = await runTypeCheck(options.project) + if (exitCode === 0) { + const finalCheck = await runTypeCheck(options.project, { incremental: false }) + if (finalCheck.exitCode !== 0) { + const finalDiagnostics = finalCheck.diagnostics + console.log(`Final cold type check found ${finalDiagnostics.length} diagnostic(s). ${summarizeCodes(finalDiagnostics)}`) + + if (options.verbose) { + for (const diagnostic of finalDiagnostics.slice(0, 40)) + console.log(`${path.relative(process.cwd(), diagnostic.fileName)}:${diagnostic.line} TS${diagnostic.code} ${diagnostic.message}`) + } + + const finalSupportedFiles = Array.from(new Set( + finalDiagnostics + .filter(diagnostic => SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code)) + .map(diagnostic => diagnostic.fileName), + )) + + if (finalSupportedFiles.length > 0) { + console.log(` Final pass batch: ${finalSupportedFiles.length} file(s)`) + let finalResult = await runMigration({ + files: finalSupportedFiles, + maxIterations: options.batchIterations, + project: options.project, + verbose: options.verbose, + write: true, + }) + + if (finalResult.totalEdits === 0) { + console.log(' No edits produced; retrying final pass with full project roots.') + finalResult = await runMigration({ + files: finalSupportedFiles, + maxIterations: options.batchIterations, + project: options.project, + useFullProjectRoots: true, + verbose: options.verbose, + write: true, + }) + } + + if (finalResult.totalEdits > 0) + continue + } + + if (finalCheck.rawOutput) + process.stderr.write(`${finalCheck.rawOutput}\n`) + process.exitCode = 1 + return + } + + console.log(`Type check passed after ${round - 1} migration round(s).`) + return + } + + const supportedDiagnostics = diagnostics.filter(diagnostic => SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code)) + const unsupportedDiagnostics = diagnostics.filter(diagnostic => !SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code)) + const supportedFiles = Array.from(new Set(supportedDiagnostics.map(diagnostic => diagnostic.fileName))) + + console.log(`Round ${round}: ${diagnostics.length} diagnostic(s). ${summarizeCodes(diagnostics)}`) + + if (options.verbose) { + for (const diagnostic of diagnostics.slice(0, 40)) + console.log(`${path.relative(process.cwd(), diagnostic.fileName)}:${diagnostic.line} TS${diagnostic.code} ${diagnostic.message}`) + } + + if (supportedFiles.length === 0) { + console.error('No supported diagnostics remain to migrate.') + if (unsupportedDiagnostics.length > 0) { + console.error('Remaining unsupported diagnostics:') + for (const diagnostic of unsupportedDiagnostics.slice(0, 40)) + console.error(`${path.relative(process.cwd(), diagnostic.fileName)}:${diagnostic.line} TS${diagnostic.code} ${diagnostic.message}`) + } + if (rawOutput) + process.stderr.write(`${rawOutput}\n`) + process.exitCode = 1 + return + } + + let roundEdits = 0 + const batches = chunk(supportedFiles, options.batchSize) + + for (const [index, batch] of batches.entries()) { + console.log(` Batch ${index + 1}/${batches.length}: ${batch.length} file(s)`) + let result = await runMigration({ + files: batch, + maxIterations: options.batchIterations, + project: options.project, + verbose: options.verbose, + write: true, + }) + + if (result.totalEdits === 0) { + console.log(' No edits produced; retrying batch with full project roots.') + result = await runMigration({ + files: batch, + maxIterations: options.batchIterations, + project: options.project, + useFullProjectRoots: true, + verbose: options.verbose, + write: true, + }) + } + + roundEdits += result.totalEdits + } + + if (roundEdits === 0) { + console.error('Migration script made no edits in this round; stopping to avoid an infinite loop.') + process.exitCode = 1 + return + } + } + + console.error(`Reached --max-rounds=${options.maxRounds} before type check passed.`) + process.exitCode = 1 +} + +export async function runBatchMigrationCommand(argv: string[]) { + await runBatchMigration(parseArgs(argv)) +} diff --git a/packages/migrate-no-unchecked-indexed-access/tsconfig.json b/packages/migrate-no-unchecked-indexed-access/tsconfig.json new file mode 100644 index 0000000000..aeb24e1df5 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/tsconfig.json @@ -0,0 +1,3 @@ +{ + "extends": "@dify/tsconfig/node.json" +} diff --git a/packages/migrate-no-unchecked-indexed-access/vite.config.ts b/packages/migrate-no-unchecked-indexed-access/vite.config.ts new file mode 100644 index 0000000000..ac4aed1a06 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/vite.config.ts @@ -0,0 +1,17 @@ +import { defineConfig } from 'vite-plus' + +export default defineConfig({ + pack: { + clean: true, + deps: { + neverBundle: ['typescript'], + }, + entry: ['src/cli.ts'], + format: ['esm'], + outDir: 'dist', + platform: 'node', + sourcemap: true, + target: 'node22', + treeshake: true, + }, +}) diff --git a/packages/tsconfig/base.json b/packages/tsconfig/base.json new file mode 100644 index 0000000000..707f1aff56 --- /dev/null +++ b/packages/tsconfig/base.json @@ -0,0 +1,19 @@ +{ + "compilerOptions": { + "esModuleInterop": true, + "skipLibCheck": true, + "target": "es2022", + "allowJs": true, + "resolveJsonModule": true, + "moduleDetection": "force", + "isolatedModules": true, + "verbatimModuleSyntax": true, + + "strict": true, + "noUncheckedIndexedAccess": true, + "noImplicitOverride": true, + + "module": "preserve", + "noEmit": true + } +} diff --git a/packages/tsconfig/nextjs.json b/packages/tsconfig/nextjs.json new file mode 100644 index 0000000000..81c6436a97 --- /dev/null +++ b/packages/tsconfig/nextjs.json @@ -0,0 +1,10 @@ +{ + "extends": "./web.json", + "compilerOptions": { + "plugins": [ + { + "name": "next" + } + ] + } +} diff --git a/packages/tsconfig/node.json b/packages/tsconfig/node.json new file mode 100644 index 0000000000..832dab2b09 --- /dev/null +++ b/packages/tsconfig/node.json @@ -0,0 +1,7 @@ +{ + "extends": "./base.json", + "compilerOptions": { + "lib": ["es2022"], + "types": ["node"] + } +} diff --git a/packages/tsconfig/package.json b/packages/tsconfig/package.json new file mode 100644 index 0000000000..52cafc5bb3 --- /dev/null +++ b/packages/tsconfig/package.json @@ -0,0 +1,11 @@ +{ + "name": "@dify/tsconfig", + "version": "0.0.0-private", + "private": true, + "exports": { + "./base.json": "./base.json", + "./nextjs.json": "./nextjs.json", + "./node.json": "./node.json", + "./web.json": "./web.json" + } +} diff --git a/packages/tsconfig/web.json b/packages/tsconfig/web.json new file mode 100644 index 0000000000..9f3ba7c121 --- /dev/null +++ b/packages/tsconfig/web.json @@ -0,0 +1,7 @@ +{ + "extends": "./base.json", + "compilerOptions": { + "jsx": "react-jsx", + "lib": ["es2022", "dom", "dom.iterable"] + } +} diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index 98e6e21bc2..914bc342e2 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -7,23 +7,23 @@ settings: catalogs: default: '@amplitude/analytics-browser': - specifier: 2.38.1 - version: 2.38.1 + specifier: 2.39.0 + version: 2.39.0 '@amplitude/plugin-session-replay-browser': - specifier: 1.27.6 - version: 1.27.6 + specifier: 1.27.7 + version: 1.27.7 '@antfu/eslint-config': - specifier: 8.0.0 - version: 8.0.0 + specifier: 8.2.0 + version: 8.2.0 '@base-ui/react': - specifier: 1.3.0 - version: 1.3.0 + specifier: 1.4.0 + version: 1.4.0 '@chromatic-com/storybook': - specifier: 5.1.1 - version: 5.1.1 + specifier: 5.1.2 + version: 5.1.2 '@cucumber/cucumber': - specifier: 12.7.0 - version: 12.7.0 + specifier: 12.8.0 + version: 12.8.0 '@egoist/tailwindcss-icons': specifier: 1.9.2 version: 1.9.2 @@ -40,8 +40,8 @@ catalogs: specifier: 0.27.19 version: 0.27.19 '@formatjs/intl-localematcher': - specifier: 0.8.2 - version: 0.8.2 + specifier: 0.8.3 + version: 0.8.3 '@headlessui/react': specifier: 2.2.10 version: 2.2.10 @@ -49,8 +49,8 @@ catalogs: specifier: 2.2.0 version: 2.2.0 '@hono/node-server': - specifier: 1.19.13 - version: 1.19.13 + specifier: 1.19.14 + version: 1.19.14 '@iconify-json/heroicons': specifier: 1.2.3 version: 1.2.3 @@ -58,23 +58,23 @@ catalogs: specifier: 1.2.10 version: 1.2.10 '@lexical/link': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/list': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/react': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/selection': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/text': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/utils': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@mdx-js/loader': specifier: 3.1.1 version: 3.1.1 @@ -88,23 +88,23 @@ catalogs: specifier: 4.7.0 version: 4.7.0 '@next/eslint-plugin-next': - specifier: 16.2.2 - version: 16.2.2 + specifier: 16.2.3 + version: 16.2.3 '@next/mdx': - specifier: 16.2.2 - version: 16.2.2 + specifier: 16.2.3 + version: 16.2.3 '@orpc/client': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@orpc/contract': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@orpc/openapi-client': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@orpc/tanstack-query': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@playwright/test': specifier: 1.59.1 version: 1.59.1 @@ -115,8 +115,8 @@ catalogs: specifier: 4.2.0 version: 4.2.0 '@sentry/react': - specifier: 10.47.0 - version: 10.47.0 + specifier: 10.48.0 + version: 10.48.0 '@storybook/addon-docs': specifier: 10.3.5 version: 10.3.5 @@ -154,23 +154,23 @@ catalogs: specifier: 4.2.2 version: 4.2.2 '@tanstack/eslint-plugin-query': - specifier: 5.96.2 - version: 5.96.2 + specifier: 5.99.0 + version: 5.99.0 '@tanstack/react-devtools': specifier: 0.10.2 version: 0.10.2 '@tanstack/react-form': - specifier: 1.28.6 - version: 1.28.6 + specifier: 1.29.0 + version: 1.29.0 '@tanstack/react-form-devtools': - specifier: 0.2.20 - version: 0.2.20 + specifier: 0.2.21 + version: 0.2.21 '@tanstack/react-query': - specifier: 5.96.2 - version: 5.96.2 + specifier: 5.99.0 + version: 5.99.0 '@tanstack/react-query-devtools': - specifier: 5.96.2 - version: 5.96.2 + specifier: 5.99.0 + version: 5.99.0 '@tanstack/react-virtual': specifier: 3.13.23 version: 3.13.23 @@ -187,14 +187,14 @@ catalogs: specifier: 14.6.1 version: 14.6.1 '@tsslint/cli': - specifier: 3.0.2 - version: 3.0.2 + specifier: 3.0.3 + version: 3.0.3 '@tsslint/compat-eslint': - specifier: 3.0.2 - version: 3.0.2 + specifier: 3.0.3 + version: 3.0.3 '@tsslint/config': - specifier: 3.0.2 - version: 3.0.2 + specifier: 3.0.3 + version: 3.0.3 '@types/js-cookie': specifier: 3.0.6 version: 3.0.6 @@ -205,8 +205,8 @@ catalogs: specifier: 0.6.4 version: 0.6.4 '@types/node': - specifier: 25.5.2 - version: 25.5.2 + specifier: 25.6.0 + version: 25.6.0 '@types/qs': specifier: 6.15.0 version: 6.15.0 @@ -220,23 +220,23 @@ catalogs: specifier: 1.15.9 version: 1.15.9 '@typescript-eslint/eslint-plugin': - specifier: 8.58.1 - version: 8.58.1 + specifier: 8.58.2 + version: 8.58.2 '@typescript-eslint/parser': - specifier: 8.58.1 - version: 8.58.1 + specifier: 8.58.2 + version: 8.58.2 '@typescript/native-preview': - specifier: 7.0.0-dev.20260407.1 - version: 7.0.0-dev.20260407.1 + specifier: 7.0.0-dev.20260413.1 + version: 7.0.0-dev.20260413.1 '@vitejs/plugin-react': specifier: 6.0.1 version: 6.0.1 '@vitejs/plugin-rsc': - specifier: 0.5.22 - version: 0.5.22 + specifier: 0.5.24 + version: 0.5.24 '@vitest/coverage-v8': - specifier: 4.1.3 - version: 4.1.3 + specifier: 4.1.4 + version: 4.1.4 abcjs: specifier: 6.6.2 version: 6.6.2 @@ -249,6 +249,9 @@ catalogs: class-variance-authority: specifier: 0.7.1 version: 0.7.1 + client-only: + specifier: 0.0.1 + version: 0.0.1 clsx: specifier: 2.1.1 version: 2.1.1 @@ -271,8 +274,8 @@ catalogs: specifier: 10.6.0 version: 10.6.0 dompurify: - specifier: 3.3.3 - version: 3.3.3 + specifier: 3.4.0 + version: 3.4.0 echarts: specifier: 6.0.0 version: 6.0.0 @@ -298,20 +301,20 @@ catalogs: specifier: 10.2.0 version: 10.2.0 eslint-markdown: - specifier: 0.6.0 - version: 0.6.0 + specifier: 0.6.1 + version: 0.6.1 eslint-plugin-better-tailwindcss: - specifier: 4.3.2 - version: 4.3.2 + specifier: 4.4.1 + version: 4.4.1 eslint-plugin-hyoban: specifier: 0.14.1 version: 0.14.1 eslint-plugin-markdown-preferences: - specifier: 0.41.0 - version: 0.41.0 + specifier: 0.41.1 + version: 0.41.1 eslint-plugin-no-barrel-files: - specifier: 1.2.2 - version: 1.2.2 + specifier: 1.3.1 + version: 1.3.1 eslint-plugin-react-refresh: specifier: 0.5.2 version: 0.5.2 @@ -324,18 +327,15 @@ catalogs: fast-deep-equal: specifier: 3.1.3 version: 3.1.3 - foxact: - specifier: 0.3.0 - version: 0.3.0 happy-dom: - specifier: 20.8.9 - version: 20.8.9 + specifier: 20.9.0 + version: 20.9.0 hast-util-to-jsx-runtime: specifier: 2.3.6 version: 2.3.6 hono: - specifier: 4.12.12 - version: 4.12.12 + specifier: 4.12.14 + version: 4.12.14 html-entities: specifier: 2.6.0 version: 2.6.0 @@ -343,8 +343,8 @@ catalogs: specifier: 1.11.13 version: 1.11.13 i18next: - specifier: 26.0.3 - version: 26.0.3 + specifier: 26.0.4 + version: 26.0.4 i18next-resources-to-backend: specifier: 1.2.1 version: 1.2.1 @@ -373,8 +373,8 @@ catalogs: specifier: 0.16.45 version: 0.16.45 knip: - specifier: 6.3.0 - version: 6.3.0 + specifier: 6.4.1 + version: 6.4.1 ky: specifier: 2.0.0 version: 2.0.0 @@ -382,8 +382,11 @@ catalogs: specifier: 1.2.1 version: 1.2.1 lexical: - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 + loro-crdt: + specifier: 1.10.8 + version: 1.10.8 mermaid: specifier: 11.14.0 version: 11.14.0 @@ -397,8 +400,8 @@ catalogs: specifier: 1.0.0 version: 1.0.0 next: - specifier: 16.2.2 - version: 16.2.2 + specifier: 16.2.3 + version: 16.2.3 next-themes: specifier: 0.4.6 version: 0.4.6 @@ -406,8 +409,8 @@ catalogs: specifier: 2.8.9 version: 2.8.9 pinyin-pro: - specifier: 3.28.0 - version: 3.28.0 + specifier: 3.28.1 + version: 3.28.1 postcss: specifier: 8.5.9 version: 8.5.9 @@ -415,17 +418,17 @@ catalogs: specifier: 4.2.0 version: 4.2.0 qs: - specifier: 6.15.0 - version: 6.15.0 + specifier: 6.15.1 + version: 6.15.1 react: - specifier: 19.2.4 - version: 19.2.4 + specifier: 19.2.5 + version: 19.2.5 react-18-input-autosize: specifier: 3.0.0 version: 3.0.0 react-dom: - specifier: 19.2.4 - version: 19.2.4 + specifier: 19.2.5 + version: 19.2.5 react-easy-crop: specifier: 5.5.7 version: 5.5.7 @@ -433,8 +436,8 @@ catalogs: specifier: 5.2.4 version: 5.2.4 react-i18next: - specifier: 17.0.2 - version: 17.0.2 + specifier: 16.5.8 + version: 16.5.8 react-multi-email: specifier: 1.0.25 version: 1.0.25 @@ -445,8 +448,8 @@ catalogs: specifier: 8.0.0-rc.0 version: 8.0.0-rc.0 react-server-dom-webpack: - specifier: 19.2.4 - version: 19.2.4 + specifier: 19.2.5 + version: 19.2.5 react-sortablejs: specifier: 6.1.4 version: 6.1.4 @@ -471,6 +474,9 @@ catalogs: shiki: specifier: 4.0.2 version: 4.0.2 + socket.io-client: + specifier: 4.8.3 + version: 4.8.3 sortablejs: specifier: 1.15.7 version: 1.15.7 @@ -514,14 +520,14 @@ catalogs: specifier: 13.0.0 version: 13.0.0 vinext: - specifier: 0.0.40 - version: 0.0.40 + specifier: 0.0.41 + version: 0.0.41 vite-plugin-inspect: specifier: 12.0.0-beta.1 version: 12.0.0-beta.1 vite-plus: - specifier: 0.1.16 - version: 0.1.16 + specifier: 0.1.18 + version: 0.1.18 vitest-canvas-mock: specifier: 1.1.4 version: 1.1.4 @@ -545,8 +551,8 @@ overrides: flatted@<=3.4.1: 3.4.2 glob@>=10.2.0 <10.5.0: 11.1.0 is-core-module: npm:@nolyfill/is-core-module@^1.0.39 - lodash@>=4.0.0 <= 4.17.23: 4.18.0 lodash-es@>=4.0.0 <= 4.17.23: 4.18.0 + lodash@>=4.0.0 <= 4.17.23: 4.18.0 picomatch@<2.3.2: 2.3.2 picomatch@>=4.0.0 <4.0.4: 4.0.4 rollup@>=4.0.0 <4.59.0: 4.59.0 @@ -559,8 +565,8 @@ overrides: svgo@>=3.0.0 <3.3.3: 3.3.3 tar@<=7.5.10: 7.5.11 undici@>=7.0.0 <7.24.0: 7.24.0 - vite: npm:@voidzero-dev/vite-plus-core@0.1.16 - vitest: npm:@voidzero-dev/vite-plus-test@0.1.16 + vite: npm:@voidzero-dev/vite-plus-core@0.1.18 + vitest: npm:@voidzero-dev/vite-plus-test@0.1.18 yaml@>=2.0.0 <2.8.3: 2.8.3 yauzl@<3.2.1: 3.2.1 @@ -568,30 +574,73 @@ importers: .: devDependencies: + '@antfu/eslint-config': + specifier: 'catalog:' + version: 8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)(vitest@4.1.4) + eslint: + specifier: 'catalog:' + version: 10.2.0(jiti@2.6.1) + eslint-markdown: + specifier: 'catalog:' + version: 0.6.1(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-markdown-preferences: + specifier: 'catalog:' + version: 0.41.1(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-no-barrel-files: + specifier: 'catalog:' + version: 1.3.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + vite: + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) e2e: devDependencies: '@cucumber/cucumber': specifier: 'catalog:' - version: 12.7.0 + version: 12.8.0 + '@dify/tsconfig': + specifier: workspace:* + version: link:../packages/tsconfig '@playwright/test': specifier: 'catalog:' version: 1.59.1 '@types/node': specifier: 'catalog:' - version: 25.5.2 + version: 25.6.0 tsx: specifier: 'catalog:' version: 4.21.0 typescript: specifier: 'catalog:' version: 6.0.2 + vite: + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + + packages/dify-ui: + dependencies: + clsx: + specifier: 'catalog:' + version: 2.1.1 + tailwind-merge: + specifier: 'catalog:' + version: 3.5.0 + devDependencies: + '@dify/tsconfig': + specifier: workspace:* + version: link:../tsconfig + tailwindcss: + specifier: 'catalog:' + version: 4.2.2 + typescript: + specifier: 'catalog:' + version: 6.0.2 packages/iconify-collections: devDependencies: @@ -599,107 +648,134 @@ importers: specifier: 'catalog:' version: 0.1.2 + packages/migrate-no-unchecked-indexed-access: + dependencies: + typescript: + specifier: 'catalog:' + version: 6.0.2 + devDependencies: + '@dify/tsconfig': + specifier: workspace:* + version: link:../tsconfig + '@types/node': + specifier: 'catalog:' + version: 25.6.0 + vite: + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite-plus: + specifier: 'catalog:' + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + + packages/tsconfig: {} + sdks/nodejs-client: devDependencies: + '@dify/tsconfig': + specifier: workspace:* + version: link:../../packages/tsconfig '@eslint/js': specifier: 'catalog:' version: 10.0.1(eslint@10.2.0(jiti@2.6.1)) '@types/node': specifier: 'catalog:' - version: 25.5.2 + version: 25.6.0 '@typescript-eslint/eslint-plugin': specifier: 'catalog:' - version: 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@typescript-eslint/parser': specifier: 'catalog:' - version: 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@vitest/coverage-v8': specifier: 'catalog:' - version: 4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3)) + version: 4.1.4(@voidzero-dev/vite-plus-test@0.1.18) eslint: specifier: 'catalog:' version: 10.2.0(jiti@2.6.1) typescript: specifier: 'catalog:' version: 6.0.2 + vite: + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) vitest: - specifier: npm:@voidzero-dev/vite-plus-test@0.1.16 - version: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-test@0.1.18 + version: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' web: dependencies: '@amplitude/analytics-browser': specifier: 'catalog:' - version: 2.38.1 + version: 2.39.0 '@amplitude/plugin-session-replay-browser': specifier: 'catalog:' - version: 1.27.6(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) + version: 1.27.7(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) '@base-ui/react': specifier: 'catalog:' - version: 1.3.0(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 1.4.0(@date-fns/tz@1.4.1)(@types/react@19.2.14)(date-fns@4.1.0)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@emoji-mart/data': specifier: 'catalog:' version: 1.2.1 '@floating-ui/react': specifier: 'catalog:' - version: 0.27.19(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 0.27.19(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@formatjs/intl-localematcher': specifier: 'catalog:' - version: 0.8.2 + version: 0.8.3 '@headlessui/react': specifier: 'catalog:' - version: 2.2.10(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 2.2.10(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@heroicons/react': specifier: 'catalog:' - version: 2.2.0(react@19.2.4) + version: 2.2.0(react@19.2.5) '@lexical/code': specifier: npm:lexical-code-no-prism@0.41.0 - version: lexical-code-no-prism@0.41.0(@lexical/utils@0.42.0)(lexical@0.42.0) + version: lexical-code-no-prism@0.41.0(@lexical/utils@0.43.0)(lexical@0.43.0) '@lexical/link': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/list': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/react': specifier: 'catalog:' - version: 0.42.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(yjs@13.6.30) + version: 0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(yjs@13.6.30) '@lexical/selection': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/text': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/utils': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@monaco-editor/react': specifier: 'catalog:' - version: 4.7.0(monaco-editor@0.55.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 4.7.0(monaco-editor@0.55.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@orpc/client': specifier: 'catalog:' - version: 1.13.13 + version: 1.13.14 '@orpc/contract': specifier: 'catalog:' - version: 1.13.13 + version: 1.13.14 '@orpc/openapi-client': specifier: 'catalog:' - version: 1.13.13 + version: 1.13.14 '@orpc/tanstack-query': specifier: 'catalog:' - version: 1.13.13(@orpc/client@1.13.13)(@tanstack/query-core@5.96.2) + version: 1.13.14(@orpc/client@1.13.14)(@tanstack/query-core@5.99.0) '@remixicon/react': specifier: 'catalog:' - version: 4.9.0(react@19.2.4) + version: 4.9.0(react@19.2.5) '@sentry/react': specifier: 'catalog:' - version: 10.47.0(react@19.2.4) + version: 10.48.0(react@19.2.5) '@streamdown/math': specifier: 'catalog:' - version: 1.0.2(react@19.2.4) + version: 1.0.2(react@19.2.5) '@svgdotjs/svg.js': specifier: 'catalog:' version: 3.2.5 @@ -711,28 +787,28 @@ importers: version: 0.5.19(tailwindcss@4.2.2) '@tanstack/react-form': specifier: 'catalog:' - version: 1.28.6(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 1.29.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@tanstack/react-query': specifier: 'catalog:' - version: 5.96.2(react@19.2.4) + version: 5.99.0(react@19.2.5) '@tanstack/react-virtual': specifier: 'catalog:' - version: 3.13.23(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 3.13.23(react-dom@19.2.5(react@19.2.5))(react@19.2.5) abcjs: specifier: 'catalog:' version: 6.6.2 ahooks: specifier: 'catalog:' - version: 3.9.7(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 3.9.7(react-dom@19.2.5(react@19.2.5))(react@19.2.5) class-variance-authority: specifier: 'catalog:' version: 0.7.1 - clsx: + client-only: specifier: 'catalog:' - version: 2.1.1 + version: 0.0.1 cmdk: specifier: 'catalog:' - version: 1.1.1(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 1.1.1(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) copy-to-clipboard: specifier: 'catalog:' version: 3.3.3 @@ -747,13 +823,13 @@ importers: version: 10.6.0 dompurify: specifier: 'catalog:' - version: 3.3.3 + version: 3.4.0 echarts: specifier: 'catalog:' version: 6.0.0 echarts-for-react: specifier: 'catalog:' - version: 3.0.6(echarts@6.0.0)(react@19.2.4) + version: 3.0.6(echarts@6.0.0)(react@19.2.5) elkjs: specifier: 'catalog:' version: 0.11.1 @@ -762,7 +838,7 @@ importers: version: 8.6.0(embla-carousel@8.6.0) embla-carousel-react: specifier: 'catalog:' - version: 8.6.0(react@19.2.4) + version: 8.6.0(react@19.2.5) emoji-mart: specifier: 'catalog:' version: 5.6.0 @@ -772,9 +848,6 @@ importers: fast-deep-equal: specifier: 'catalog:' version: 3.1.3 - foxact: - specifier: 'catalog:' - version: 0.3.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4) hast-util-to-jsx-runtime: specifier: 'catalog:' version: 2.3.6 @@ -786,7 +859,7 @@ importers: version: 1.11.13 i18next: specifier: 'catalog:' - version: 26.0.3(typescript@6.0.2) + version: 26.0.4(typescript@6.0.2) i18next-resources-to-backend: specifier: 'catalog:' version: 1.2.1 @@ -795,7 +868,7 @@ importers: version: 11.1.4 jotai: specifier: 'catalog:' - version: 2.19.1(@babel/core@7.29.0)(@babel/template@7.28.6)(@types/react@19.2.14)(react@19.2.4) + version: 2.19.1(@babel/core@7.29.0)(@babel/template@7.28.6)(@types/react@19.2.14)(react@19.2.5) js-audio-recorder: specifier: 'catalog:' version: 1.0.7 @@ -819,7 +892,10 @@ importers: version: 1.2.1 lexical: specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 + loro-crdt: + specifier: 'catalog:' + version: 1.10.8 mermaid: specifier: 'catalog:' version: 11.14.0 @@ -834,58 +910,58 @@ importers: version: 1.0.0 next: specifier: 'catalog:' - version: 16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0) + version: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) next-themes: specifier: 'catalog:' - version: 0.4.6(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 0.4.6(react-dom@19.2.5(react@19.2.5))(react@19.2.5) nuqs: specifier: 'catalog:' - version: 2.8.9(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react@19.2.4) + version: 2.8.9(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react@19.2.5) pinyin-pro: specifier: 'catalog:' - version: 3.28.0 + version: 3.28.1 qrcode.react: specifier: 'catalog:' - version: 4.2.0(react@19.2.4) + version: 4.2.0(react@19.2.5) qs: specifier: 'catalog:' - version: 6.15.0 + version: 6.15.1 react: specifier: 'catalog:' - version: 19.2.4 + version: 19.2.5 react-18-input-autosize: specifier: 'catalog:' - version: 3.0.0(react@19.2.4) + version: 3.0.0(react@19.2.5) react-dom: specifier: 'catalog:' - version: 19.2.4(react@19.2.4) + version: 19.2.5(react@19.2.5) react-easy-crop: specifier: 'catalog:' - version: 5.5.7(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 5.5.7(react-dom@19.2.5(react@19.2.5))(react@19.2.5) react-hotkeys-hook: specifier: 'catalog:' - version: 5.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 5.2.4(react-dom@19.2.5(react@19.2.5))(react@19.2.5) react-i18next: specifier: 'catalog:' - version: 17.0.2(i18next@26.0.3(typescript@6.0.2))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(typescript@6.0.2) + version: 16.5.8(i18next@26.0.4(typescript@6.0.2))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(typescript@6.0.2) react-multi-email: specifier: 'catalog:' - version: 1.0.25(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 1.0.25(react-dom@19.2.5(react@19.2.5))(react@19.2.5) react-papaparse: specifier: 'catalog:' version: 4.4.0 react-pdf-highlighter: specifier: 'catalog:' - version: 8.0.0-rc.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 8.0.0-rc.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) react-sortablejs: specifier: 'catalog:' - version: 6.1.4(@types/sortablejs@1.15.9)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sortablejs@1.15.7) + version: 6.1.4(@types/sortablejs@1.15.9)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sortablejs@1.15.7) react-textarea-autosize: specifier: 'catalog:' - version: 8.5.9(@types/react@19.2.14)(react@19.2.4) + version: 8.5.9(@types/react@19.2.14)(react@19.2.5) reactflow: specifier: 'catalog:' - version: 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) remark-breaks: specifier: 'catalog:' version: 4.0.0 @@ -901,6 +977,9 @@ importers: shiki: specifier: 'catalog:' version: 4.0.2 + socket.io-client: + specifier: 'catalog:' + version: 4.8.3 sortablejs: specifier: 'catalog:' version: 1.15.7 @@ -909,13 +988,10 @@ importers: version: 1.0.8 streamdown: specifier: 'catalog:' - version: 2.5.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 2.5.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) string-ts: specifier: 'catalog:' version: 2.3.1 - tailwind-merge: - specifier: 'catalog:' - version: 3.5.0 tldts: specifier: 'catalog:' version: 7.0.28 @@ -924,7 +1000,7 @@ importers: version: 5.1.0 use-context-selector: specifier: 'catalog:' - version: 2.0.0(react@19.2.4)(scheduler@0.27.0) + version: 2.0.0(react@19.2.5)(scheduler@0.27.0) uuid: specifier: 'catalog:' version: 13.0.0 @@ -933,20 +1009,23 @@ importers: version: 4.3.6 zundo: specifier: 'catalog:' - version: 2.3.0(zustand@5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4)(use-sync-external-store@1.6.0(react@19.2.4))) + version: 2.3.0(zustand@5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5)(use-sync-external-store@1.6.0(react@19.2.5))) zustand: specifier: 'catalog:' - version: 5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4)(use-sync-external-store@1.6.0(react@19.2.4)) + version: 5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5)(use-sync-external-store@1.6.0(react@19.2.5)) devDependencies: '@antfu/eslint-config': specifier: 'catalog:' - version: 8.0.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.2)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2) + version: 8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2) '@chromatic-com/storybook': specifier: 'catalog:' - version: 5.1.1(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)) + version: 5.1.2(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) '@dify/iconify-collections': specifier: workspace:* version: link:../packages/iconify-collections + '@dify/tsconfig': + specifier: workspace:* + version: link:../packages/tsconfig '@egoist/tailwindcss-icons': specifier: 'catalog:' version: 1.9.2(tailwindcss@4.2.2) @@ -955,67 +1034,70 @@ importers: version: 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@hono/node-server': specifier: 'catalog:' - version: 1.19.13(hono@4.12.12) + version: 1.19.14(hono@4.12.14) '@iconify-json/heroicons': specifier: 'catalog:' version: 1.2.3 '@iconify-json/ri': specifier: 'catalog:' version: 1.2.10 + '@langgenius/dify-ui': + specifier: workspace:* + version: link:../packages/dify-ui '@mdx-js/loader': specifier: 'catalog:' - version: 3.1.1(webpack@5.105.4(uglify-js@3.19.3)) + version: 3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@mdx-js/react': specifier: 'catalog:' - version: 3.1.1(@types/react@19.2.14)(react@19.2.4) + version: 3.1.1(@types/react@19.2.14)(react@19.2.5) '@mdx-js/rollup': specifier: 'catalog:' version: 3.1.1(rollup@4.59.0) '@next/eslint-plugin-next': specifier: 'catalog:' - version: 16.2.2 + version: 16.2.3 '@next/mdx': specifier: 'catalog:' - version: 16.2.2(@mdx-js/loader@3.1.1(webpack@5.105.4(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.4)) + version: 16.2.3(@mdx-js/loader@3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5)) '@rgrove/parse-xml': specifier: 'catalog:' version: 4.2.0 '@storybook/addon-docs': specifier: 'catalog:' - version: 10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3)) + version: 10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/addon-links': specifier: 'catalog:' - version: 10.3.5(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)) + version: 10.3.5(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) '@storybook/addon-onboarding': specifier: 'catalog:' - version: 10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)) + version: 10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) '@storybook/addon-themes': specifier: 'catalog:' - version: 10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)) + version: 10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) '@storybook/nextjs-vite': specifier: 'catalog:' - version: 10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3)) + version: 10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/react': specifier: 'catalog:' - version: 10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2) + version: 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) '@tailwindcss/postcss': specifier: 'catalog:' version: 4.2.2 '@tailwindcss/vite': specifier: 'catalog:' - version: 4.2.2(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 4.2.2(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) '@tanstack/eslint-plugin-query': specifier: 'catalog:' - version: 5.96.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 5.99.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@tanstack/react-devtools': specifier: 'catalog:' - version: 0.10.2(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(csstype@3.2.3)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(solid-js@1.9.11) + version: 0.10.2(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(csstype@3.2.3)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(solid-js@1.9.11) '@tanstack/react-form-devtools': specifier: 'catalog:' - version: 0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.4)(solid-js@1.9.11) + version: 0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11) '@tanstack/react-query-devtools': specifier: 'catalog:' - version: 5.96.2(@tanstack/react-query@5.96.2(react@19.2.4))(react@19.2.4) + version: 5.99.0(@tanstack/react-query@5.99.0(react@19.2.5))(react@19.2.5) '@testing-library/dom': specifier: 'catalog:' version: 10.4.1 @@ -1024,19 +1106,19 @@ importers: version: 6.9.1 '@testing-library/react': specifier: 'catalog:' - version: 16.3.2(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 16.3.2(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@testing-library/user-event': specifier: 'catalog:' version: 14.6.1(@testing-library/dom@10.4.1) '@tsslint/cli': specifier: 'catalog:' - version: 3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) + version: 3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) '@tsslint/compat-eslint': specifier: 'catalog:' - version: 3.0.2(jiti@2.6.1)(typescript@6.0.2) + version: 3.0.3(jiti@2.6.1)(typescript@6.0.2) '@tsslint/config': specifier: 'catalog:' - version: 3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) + version: 3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) '@types/js-cookie': specifier: 'catalog:' version: 3.0.6 @@ -1048,7 +1130,7 @@ importers: version: 0.6.4 '@types/node': specifier: 'catalog:' - version: 25.5.2 + version: 25.6.0 '@types/qs': specifier: 'catalog:' version: 6.15.0 @@ -1063,22 +1145,22 @@ importers: version: 1.15.9 '@typescript-eslint/parser': specifier: 'catalog:' - version: 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@typescript/native-preview': specifier: 'catalog:' - version: 7.0.0-dev.20260407.1 + version: 7.0.0-dev.20260413.1 '@vitejs/plugin-react': specifier: 'catalog:' - version: 6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 6.0.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) '@vitejs/plugin-rsc': specifier: 'catalog:' - version: 0.5.22(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4) + version: 0.5.24(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.5) '@vitest/coverage-v8': specifier: 'catalog:' - version: 4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 4.1.4(@voidzero-dev/vite-plus-test@0.1.18) agentation: specifier: 'catalog:' - version: 3.0.2(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 3.0.2(react-dom@19.2.5(react@19.2.5))(react@19.2.5) code-inspector-plugin: specifier: 'catalog:' version: 1.5.1 @@ -1087,19 +1169,19 @@ importers: version: 10.2.0(jiti@2.6.1) eslint-markdown: specifier: 'catalog:' - version: 0.6.0(eslint@10.2.0(jiti@2.6.1)) + version: 0.6.1(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-better-tailwindcss: specifier: 'catalog:' - version: 4.3.2(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2) + version: 4.4.1(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2) eslint-plugin-hyoban: specifier: 'catalog:' version: 0.14.1(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-markdown-preferences: specifier: 'catalog:' - version: 0.41.0(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)) + version: 0.41.1(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-no-barrel-files: specifier: 'catalog:' - version: 1.2.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 1.3.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-react-refresh: specifier: 'catalog:' version: 0.5.2(eslint@10.2.0(jiti@2.6.1)) @@ -1108,25 +1190,25 @@ importers: version: 4.0.2(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-storybook: specifier: 'catalog:' - version: 10.3.5(eslint@10.2.0(jiti@2.6.1))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2) + version: 10.3.5(eslint@10.2.0(jiti@2.6.1))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) happy-dom: specifier: 'catalog:' - version: 20.8.9 + version: 20.9.0 hono: specifier: 'catalog:' - version: 4.12.12 + version: 4.12.14 knip: specifier: 'catalog:' - version: 6.3.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + version: 6.4.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) postcss: specifier: 'catalog:' version: 8.5.9 react-server-dom-webpack: specifier: 'catalog:' - version: 19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)) + version: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) storybook: specifier: 'catalog:' - version: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + version: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) tailwindcss: specifier: 'catalog:' version: 4.2.2 @@ -1141,22 +1223,22 @@ importers: version: 3.19.3 vinext: specifier: 'catalog:' - version: 0.0.40(@mdx-js/rollup@3.1.1(rollup@4.59.0))(@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)))(@vitejs/plugin-rsc@0.5.22(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4))(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4)(typescript@6.0.2) + version: 0.0.41(453b4e184a832f83060410b31544dc36) vite: - specifier: npm:@voidzero-dev/vite-plus-core@0.1.16 - version: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plugin-inspect: specifier: 'catalog:' - version: 12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) + version: 12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) vitest: - specifier: npm:@voidzero-dev/vite-plus-test@0.1.16 - version: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-test@0.1.18 + version: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vitest-canvas-mock: specifier: 'catalog:' - version: 1.1.4(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 1.1.4(@voidzero-dev/vite-plus-test@0.1.18) packages: @@ -1167,17 +1249,17 @@ packages: resolution: {integrity: sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==} engines: {node: '>=10'} - '@amplitude/analytics-browser@2.38.1': - resolution: {integrity: sha512-8E3WDuCz5pmVysw7iwT9MjltzaO7Sqy9jWNaXovO30Z8sXs5Ncl32qv6o14kwlpl3wRSaaAKDe0Z3Grjx3dYYQ==} + '@amplitude/analytics-browser@2.39.0': + resolution: {integrity: sha512-sTNGGjiubsDs1NqKsTXp0ykCaSIzjaGclMRHlnO7JBatqK0f/Knl0cfn1a7XBFuTVix/M5nrWATsKv6+0dSpMg==} - '@amplitude/analytics-client-common@2.4.42': - resolution: {integrity: sha512-pEpE6s8GsXTlD9Jj4b/wplCQD8fT2ml/VZSnQ1E5sU0goaeZaYQKMTXGpbA2aE40ABZMwQSopxJn+puBrJc8eg==} + '@amplitude/analytics-client-common@2.4.43': + resolution: {integrity: sha512-R5n3cfnVNLk32BE2DbCp4xpn39mfmjMUjvOO9kt5dLFdF0cozb9MCawVyZJQVfnJJT6k5NMoswdUBu7Ul0nbRw==} '@amplitude/analytics-connector@1.6.4': resolution: {integrity: sha512-SpIv0IQMNIq6SH3UqFGiaZyGSc7PBZwRdq7lvP0pBxW8i4Ny+8zwI0pV+VMfMHQwWY3wdIbWw5WQphNjpdq1/Q==} - '@amplitude/analytics-core@2.44.1': - resolution: {integrity: sha512-bx8RAYneoEyT/gsCpcktEgBMUs5vIb2piA/Kof88BaNKAWEpIa9B4Ogg4vNPqmEgNIx/wztSduFMHHw2pLcncg==} + '@amplitude/analytics-core@2.45.0': + resolution: {integrity: sha512-vWRYbXu2Grs1GM+WHo03RPtbaPs5sJm21YQcAow9JASvtoY4xNqItIeRydCJQWtFHhbbxY41n+CVW6mzDP6aBA==} '@amplitude/analytics-types@2.11.1': resolution: {integrity: sha512-wFEgb0t99ly2uJKm5oZ28Lti0Kh5RecR5XBkwfUpDzn84IoCIZ8GJTsMw/nThu8FZFc7xFDA4UAt76zhZKrs9A==} @@ -1185,26 +1267,26 @@ packages: '@amplitude/experiment-core@0.7.2': resolution: {integrity: sha512-Wc2NWvgQ+bLJLeF0A9wBSPIaw0XuqqgkPKsoNFQrmS7r5Djd56um75In05tqmVntPJZRvGKU46pAp8o5tdf4mA==} - '@amplitude/plugin-autocapture-browser@1.25.1': - resolution: {integrity: sha512-eIaPO7eUH2W0OWe0JoqUVvMPUGDeOn4JQa7zdClEbvHnPxfGS1RHIFNsBk5ofgEWxhUo2Ka/Z0Wl86k9FMaa7w==} + '@amplitude/plugin-autocapture-browser@1.25.2': + resolution: {integrity: sha512-AWzIX0uit60Q742rH/96/n88e+3BaVZa4+7Xs+BeuuIOyrljOZlQKzH23Lxzkl0DgbNb5+MMqWds0pov3DV5TA==} - '@amplitude/plugin-custom-enrichment-browser@0.1.3': - resolution: {integrity: sha512-iKZkqkI5CpLb62cGNgvqTVEUj8i5UBFWJc0aQMZZBqc+vmzHBaqvjeAU0dwO8KA623YfT5I+/Vp1MnqvEXGJFg==} + '@amplitude/plugin-custom-enrichment-browser@0.1.4': + resolution: {integrity: sha512-vxuQocn8YGE2wMLZUmotRG8c6RijoaQAsHKDQEO56CNk3WhSecgSGMnlHcUcOYIzwfXKFj4MxRJS386kdDHV+Q==} - '@amplitude/plugin-network-capture-browser@1.9.12': - resolution: {integrity: sha512-/8x+GDqE25pTvsU9Po7Ur+V8pUuX4IG5p2xHPM9N/APfyc3D1zLTkC8FKo8wfPpg4Wu97mSzy1JnvPDqbJcJyw==} + '@amplitude/plugin-network-capture-browser@1.9.13': + resolution: {integrity: sha512-8uzTQFbP+dvqJX+S39KqKw+EheJW8JCWT/xlXT55vtTU/ZTFeF074QnHFEKUPewpYXpwKXgJky8PDoMk0b46Qw==} - '@amplitude/plugin-page-url-enrichment-browser@0.7.4': - resolution: {integrity: sha512-gF7V1ypkYB7FTwKlqjbO+7Z+Wvf72RfA64aREj9aplZdRJ0EY3qSEYMA3L2v0U5ztYchiy5MJraSaaxKfzXdJg==} + '@amplitude/plugin-page-url-enrichment-browser@0.7.5': + resolution: {integrity: sha512-0Q7P5vsue/s92i3zevVDVJf9AiHkbxGdwkB8iV2oWgkXtglzWugwr//qN+muHmXdi1ZWxRjm93CW+jQJVripgw==} - '@amplitude/plugin-page-view-tracking-browser@2.9.5': - resolution: {integrity: sha512-fWewMrgo0T7AyKnrZn6ox0ER5Ibw/IFTkX0GrQ8DxcsXrmUuSWUTsxZaA7YPDzuWPbd4AX9/AWZF2i6A9Ybtfg==} + '@amplitude/plugin-page-view-tracking-browser@2.9.6': + resolution: {integrity: sha512-/4lG2lXIB6qbQNf1VYQ5fDOnvInPEtYuOgvmyLfuZ6PvHVFUu4NZtoOVdAcy0R9x76rNyCpRXxdL78p9Ra1ANA==} - '@amplitude/plugin-session-replay-browser@1.27.6': - resolution: {integrity: sha512-wHv9b/Qzu9qg0thE+qo23/KpYGiADnAj42I1C1goQAJG7XNOk62F0sdejVvnQIV9NsLe0ItoS+tg3eqlBE7Exg==} + '@amplitude/plugin-session-replay-browser@1.27.7': + resolution: {integrity: sha512-KcGMFaBGqZAOm1Gdzio9d95IL3Nmp5J1xOu1PD0NAPYLfW1MyoyA5PFIIlMqqVf1DoCjmgqP7AY4swetU2tpWg==} - '@amplitude/plugin-web-vitals-browser@1.1.27': - resolution: {integrity: sha512-jh/dWMsthx5E+ensNTwj7nkqi8iG8wyJc1HryOdY49w9zTgcbZmJwE2uumLBXBasn7l62a5EdqRkwctGL53fHw==} + '@amplitude/plugin-web-vitals-browser@1.1.28': + resolution: {integrity: sha512-gs4Y1eOuVUEDwYEJF82f/GmgQ7iM4Y/eZTkftJKjFsBNbrPro2CuLymfdAcC+QuVfyrp3qAiWcSGnjDXA6ZbQg==} '@amplitude/rrdom@2.0.0-alpha.37': resolution: {integrity: sha512-u4dSnBtlbJ8oU5P/Ywl2RLqvjqWbkl4ScMUbvQA7in4pWcx+0NRN+VVjLZXQcd8Fn7E/rcxjeUh7e7HfwvdasQ==} @@ -1238,14 +1320,14 @@ packages: '@amplitude/rrweb@2.0.0-alpha.37': resolution: {integrity: sha512-jJkSpPYiVgOZB422pb2jOJJn3pvb5E5f9vKK8CEmUlk2mVAl6kPQzW98mb05M65OJFj5nn9tSe9h5r5+Cl93ag==} - '@amplitude/session-replay-browser@1.35.1': - resolution: {integrity: sha512-7X6T+niZaG+zpvcFOwdkbTNUWzD6T9/rQ7POYkTK+C/6FtvJ0fpHXNHdHT8fozKox2UXL/wwZvoQWFriHSe1dA==} + '@amplitude/session-replay-browser@1.36.0': + resolution: {integrity: sha512-HZpNRMRAiLbzGF84DzF+ZH5WztJH4tVe2e/FzYJ2r27Sgf2gftCmzCB9pN8BXXcHKYtQK8/Qol+PTmSIzvyvEw==} '@amplitude/targeting@0.2.0': resolution: {integrity: sha512-/50ywTrC4hfcfJVBbh5DFbqMPPfaIOivZeb5Gb+OGM03QrA+lsUqdvtnKLNuWtceD4H6QQ2KFzPJ5aAJLyzVDA==} - '@antfu/eslint-config@8.0.0': - resolution: {integrity: sha512-IKiCfsa1vRgj8srB2azqiN3nOAcVyP/TZ5Ibiz0TDW9NoQPizTvkmRTSi1vo4ax0SL9TH/8uJLK6uCfd6bQzLA==} + '@antfu/eslint-config@8.2.0': + resolution: {integrity: sha512-spfwYXMNrlkl69riTSBnbC0C2K8EVfVMOK3ceP2EpAAioyfprIW1gTwyLRtd9jZSFeNdX4mFNAIG+o0sOneOfA==} hasBin: true peerDependencies: '@angular-eslint/eslint-plugin': ^21.1.0 @@ -1382,19 +1464,21 @@ packages: resolution: {integrity: sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==} engines: {node: '>=6.9.0'} - '@base-ui/react@1.3.0': - resolution: {integrity: sha512-FwpKqZbPz14AITp1CVgf4AjhKPe1OeeVKSBMdgD10zbFlj3QSWelmtCMLi2+/PFZZcIm3l87G7rwtCZJwHyXWA==} + '@base-ui/react@1.4.0': + resolution: {integrity: sha512-QcqdVbr/+ba2/RAKJIV1PV6S02Q5+r6a4Eym8ndBw+ZbBILkkmQAyRxXCg/pArrHnkrGeU8goe26aw0h6eE8pg==} engines: {node: '>=14.0.0'} peerDependencies: + '@date-fns/tz': ^1.2.0 '@types/react': ^17 || ^18 || ^19 + date-fns: ^4.0.0 react: ^17 || ^18 || ^19 react-dom: ^17 || ^18 || ^19 peerDependenciesMeta: '@types/react': optional: true - '@base-ui/utils@0.2.6': - resolution: {integrity: sha512-yQ+qeuqohwhsNpoYDqqXaLllYAkPCP4vYdDrVo8FQXaAPfHWm1pG/Vm+jmGTA5JFS0BAIjookyapuJFY8F9PIw==} + '@base-ui/utils@0.2.7': + resolution: {integrity: sha512-nXYKhiL/0JafyJE8PfcflipGftOftlIwKd72rU15iZ1M5yqgg5J9P8NHU71GReDuXco5MJA/eVQqUT5WRqX9sA==} peerDependencies: '@types/react': ^17 || ^18 || ^19 react: ^17 || ^18 || ^19 @@ -1425,8 +1509,8 @@ packages: '@chevrotain/utils@11.1.2': resolution: {integrity: sha512-4mudFAQ6H+MqBTfqLmU7G1ZwRzCLfJEooL/fsF6rCX5eePMbGhoy5n4g+G4vlh2muDcsCTJtL+uKbOzWxs5LHA==} - '@chromatic-com/storybook@5.1.1': - resolution: {integrity: sha512-BPoAXHM71XgeCK2u0jKr9i8apeQMm/Z9IWGyndA2FMijfQG9m8ox45DdWh/pxFkK5ClhGgirv5QwMhFIeHmThg==} + '@chromatic-com/storybook@5.1.2': + resolution: {integrity: sha512-H/hgvwC3E+OtseP2OT2QYUJH2VfnzT6wM3pWOkaNV6g7QI+VUdWJbeJ3o2jFqvEPQNqzhQKWDOlvM4lu+7is6g==} engines: {node: '>=20.0.0', yarn: '>=1.22.18'} peerDependencies: storybook: ^0.0.0-0 || ^10.1.0 || ^10.1.0-0 || ^10.2.0-0 || ^10.3.0-0 || ^10.4.0-0 @@ -1471,8 +1555,8 @@ packages: '@cucumber/cucumber-expressions@19.0.0': resolution: {integrity: sha512-4FKoOQh2Uf6F6/Ln+1OxuK8LkTg6PyAqekhf2Ix8zqV2M54sH+m7XNJNLhOFOAW/t9nxzRbw2CcvXbCLjcvHZg==} - '@cucumber/cucumber@12.7.0': - resolution: {integrity: sha512-7A/9CJpJDxv1SQ7hAZU0zPn2yRxx6XMR+LO4T94Enm3cYNWsEEj+RGX38NLX4INT+H6w5raX3Csb/qs4vUBsOA==} + '@cucumber/cucumber@12.8.0': + resolution: {integrity: sha512-sRG2QMAgCic4Uq1q+5LRzApEHiNGX5rhQY/GuOJZ9BIySrGPA9pevB0imJsZvdzt9scaWyIM3c7dIf4Dp1YQRA==} engines: {node: 20 || 22 || >=24} hasBin: true @@ -1496,18 +1580,18 @@ packages: peerDependencies: '@cucumber/messages': '>=18' - '@cucumber/junit-xml-formatter@0.9.0': - resolution: {integrity: sha512-WF+A7pBaXpKMD1i7K59Nk5519zj4extxY4+4nSgv5XLsGXHDf1gJnb84BkLUzevNtp2o2QzMG0vWLwSm8V5blw==} + '@cucumber/junit-xml-formatter@0.13.2': + resolution: {integrity: sha512-worYkxjeOWJV+b7WkgJekWgFHlIhbuocnFK3hP+pMYXqZMmkXsxAorYPjeF8KyLnZXajw5fKHS2bM9rQIUI7Zw==} peerDependencies: '@cucumber/messages': '*' - '@cucumber/message-streams@4.0.1': - resolution: {integrity: sha512-Kxap9uP5jD8tHUZVjTWgzxemi/0uOsbGjd4LBOSxcJoOCRbESFwemUzilJuzNTB8pcTQUh8D5oudUyxfkJOKmA==} + '@cucumber/message-streams@4.1.1': + resolution: {integrity: sha512-QCAntLajesWMyX+mZKrj63YghVAts7yKFlZe46XprLbdJZN0ddB+f/Mr9OnyWKC2DHhJ18jzCfKIFCaqpAmUxg==} peerDependencies: '@cucumber/messages': '>=17.1.1' - '@cucumber/messages@32.0.1': - resolution: {integrity: sha512-1OSoW+GQvFUNAl6tdP2CTBexTXMNJF0094goVUcvugtQeXtJ0K8sCP0xbq7GGoiezs/eJAAOD03+zAPT64orHQ==} + '@cucumber/messages@32.2.0': + resolution: {integrity: sha512-oYp1dgL2TByYWL51Z+rNm+/mFtJhiPU9WS03goes9EALb8d9GFcXRbG1JluFLFaChF1YDqIzLac0kkC3tv1DjQ==} '@cucumber/pretty-formatter@1.0.1': resolution: {integrity: sha512-A1lU4VVP0aUWdOTmpdzvXOyEYuPtBDI0xYwYJnmoMDplzxMdhcHk86lyyvYDoMoPzzq6OkOE3isuosvUU4X7IQ==} @@ -1523,6 +1607,9 @@ packages: '@cucumber/tag-expressions@9.1.0': resolution: {integrity: sha512-bvHjcRFZ+J1TqIa9eFNO1wGHqwx4V9ZKV3hYgkuK/VahHx73uiP4rKV3JVrvWSMrwrFvJG6C8aEwnCWSvbyFdQ==} + '@date-fns/tz@1.4.1': + resolution: {integrity: sha512-P5LUNhtbj6YfI3iJjw5EL9eUAG6OitD0W3fWQcpQjDRc/QIsL0tRNuO1PcDvPccWL1fSTXXdE1ds+l95DV/OFA==} + '@e18e/eslint-plugin@0.3.0': resolution: {integrity: sha512-hHgfpxsrZ2UYHcicA+tGZnmk19uJTaye9VH79O+XS8R4ona2Hx3xjhXghclNW58uXMk3xXlbYEOMr8thsoBmWg==} peerDependencies: @@ -1539,14 +1626,17 @@ packages: peerDependencies: tailwindcss: '*' - '@emnapi/core@1.9.1': - resolution: {integrity: sha512-mukuNALVsoix/w1BJwFzwXBN/dHeejQtuVzcDsfOEsdpCumXb/E9j8w11h5S54tT1xhifGfbbSm/ICrObRb3KA==} + '@emnapi/core@1.9.2': + resolution: {integrity: sha512-UC+ZhH3XtczQYfOlu3lNEkdW/p4dsJ1r/bP7H8+rhao3TTTMO1ATq/4DdIi23XuGoFY+Cz0JmCbdVl0hz9jZcA==} '@emnapi/runtime@1.9.1': resolution: {integrity: sha512-VYi5+ZVLhpgK4hQ0TAjiQiZ6ol0oe4mBx7mVv7IflsiEp0OWoVsp/+f9Vc1hOhE0TtkORVrI1GvzyreqpgWtkA==} - '@emnapi/wasi-threads@1.2.0': - resolution: {integrity: sha512-N10dEJNSsUx41Z6pZsXU8FjPjpBEplgH24sfkmITrBED1/U2Esum9F3lfLrMjKHHjmi557zQn7kR9R+XWXu5Rg==} + '@emnapi/runtime@1.9.2': + resolution: {integrity: sha512-3U4+MIWHImeyu1wnmVygh5WlgfYDtyf0k8AbLhMFxOipihf6nrWC4syIm/SwEeec0mNSafiiNnMJwbza/Is6Lw==} + + '@emnapi/wasi-threads@1.2.1': + resolution: {integrity: sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w==} '@emoji-mart/data@1.2.1': resolution: {integrity: sha512-no2pQMWiBy6gpBEiqGeU77/bFejDqUTRY7KX+0+iur13op3bqUsXdnwoZs6Xb1zbv0gAj5VvS1PWoUUckSr5Dw==} @@ -1555,6 +1645,10 @@ packages: resolution: {integrity: sha512-0xew1CxOam0gV5OMjh2KjFQZsKL2bByX1+q4j3E73MpYIdyUxcZb/xQct9ccUb+ve5KGUYbCUxyPnYB7RbuP+w==} engines: {node: ^20.19.0 || ^22.13.0 || >=24} + '@es-joy/jsdoccomment@0.86.0': + resolution: {integrity: sha512-ukZmRQ81WiTpDWO6D/cTBM7XbrNtutHKvAVnZN/8pldAwLoJArGOvkNyxPTBGsPjsoaQBJxlH+tE2TNA/92Qgw==} + engines: {node: ^20.19.0 || ^22.13.0 || >=24} + '@es-joy/resolve.exports@1.2.0': resolution: {integrity: sha512-Q9hjxWI5xBM+qW2enxfe8wDKdFWMfd0Z29k5ZJnuBqD/CasY5Zryj09aCA6owbGATWz+39p5uIdaHXpopOcG8g==} engines: {node: '>=10'} @@ -1807,9 +1901,9 @@ packages: resolution: {integrity: sha512-8FTGbNzTvmSlc4cZBaShkC6YvFMG0riksYWRFKXztqVdXaQbcZLXlFbSpC05s70sGEsXAw0qwhx69JiW7hQS7A==} engines: {node: ^20.19.0 || ^22.13.0 || >=24} - '@eslint/css-tree@3.6.9': - resolution: {integrity: sha512-3D5/OHibNEGk+wKwNwMbz63NMf367EoR4mVNNpxddCHKEb2Nez7z62J2U6YjtErSsZDoY0CsccmoUpdEbkogNA==} - engines: {node: ^10 || ^12.20.0 || ^14.13.0 || >=15.0.0} + '@eslint/css-tree@4.0.1': + resolution: {integrity: sha512-2fCSKRwoUHntYq9J1Lm28s2zeoCSNh1Cbk6Tg7k7ViwOnveIfZwPRFGwBglz+dzw2MHe5w5Fo9+VJfqL9nco2w==} + engines: {node: ^20.19.0 || ^22.13.0 || >=24} '@eslint/eslintrc@3.3.5': resolution: {integrity: sha512-4IlJx0X0qftVsN5E+/vGujTRIFtwuLbNsVUe7TO6zYPDR1O6nFwvwhIKEKSrl6dZchmYBITazxKoUYOjdtjlRg==} @@ -1887,11 +1981,11 @@ packages: '@floating-ui/utils@0.2.11': resolution: {integrity: sha512-RiB/yIh78pcIxl6lLMG0CgBXAZ2Y0eVHqMPYugu+9U0AeT6YBeiJpf7lbdJNIugFP5SIjwNRgo4DhR1Qxi26Gg==} - '@formatjs/fast-memoize@3.1.1': - resolution: {integrity: sha512-CbNbf+tlJn1baRnPkNePnBqTLxGliG6DDgNa/UtV66abwIjwsliPMOt0172tzxABYzSuxZBZfcp//qI8AvBWPg==} + '@formatjs/fast-memoize@3.1.2': + resolution: {integrity: sha512-vPnriihkfK0lzoQGaXq+qXH23VsYyansRTkTgo2aTG0k1NjLFyZimFVdfj4C9JkSE5dm7CEngcQ5TTc1yAyBfQ==} - '@formatjs/intl-localematcher@0.8.2': - resolution: {integrity: sha512-q05KMYGJLyqFNFtIb8NhWLF5X3aK/k0wYt7dnRFuy6aLQL+vUwQ1cg5cO4qawEiINybeCPXAWlprY2mSBjSXAQ==} + '@formatjs/intl-localematcher@0.8.3': + resolution: {integrity: sha512-pHUjWb9NuhnMs8+PxQdzBtZRFJHlGhrURGAbm6Ltwl82BFajeuiIR3jblSa7ia3r62rXe/0YtVpUG3xWr5bFCA==} '@headlessui/react@2.2.10': resolution: {integrity: sha512-5pVLNK9wlpxTUTy9GpgbX/SdcRh+HBnPktjM2wbiLTH4p+2EPHBO1aoSryUCuKUIItdDWO9ITlhUL8UnUN/oIA==} @@ -1905,8 +1999,8 @@ packages: peerDependencies: react: '>= 16 || ^19.0.0-rc' - '@hono/node-server@1.19.13': - resolution: {integrity: sha512-TsQLe4i2gvoTtrHje625ngThGBySOgSK3Xo2XRYOdqGN1teR8+I7vchQC46uLJi8OF62YTYA3AhSpumtkhsaKQ==} + '@hono/node-server@1.19.14': + resolution: {integrity: sha512-GwtvgtXxnWsucXvbQXkRgqksiH2Qed37H9xHZocE5sA3N8O8O8/8FA3uclQXxXVzc9XBZuEOMK7+r02FmSpHtw==} engines: {node: '>=18.14.1'} peerDependencies: hono: ^4 @@ -2130,77 +2224,77 @@ packages: '@jridgewell/trace-mapping@0.3.31': resolution: {integrity: sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==} - '@lexical/clipboard@0.42.0': - resolution: {integrity: sha512-D3K2ID0zew/+CKpwxnUTTh/N46yU4IK8bFWV9Htz+g1vFhgUF9UnDOQCmqpJbdP7z+9U1F8rk3fzf9OmP2Fm2w==} + '@lexical/clipboard@0.43.0': + resolution: {integrity: sha512-3dWDusVyM9EosBt4/n/ERyPIGOyuWuECj9zbvJdzGUdvu/VsqCdlyDsU5M7NxTUNQn2Fhkdj2o00UeB6bagX5Q==} - '@lexical/code-core@0.42.0': - resolution: {integrity: sha512-vrZTUPWDJkHjAAvuV2+Qte4vYE80s7hIO7wxipiJmWojGx6lcmQjO+UqJ8AIrqI4Wjy8kXrK74kisApWmwxuCw==} + '@lexical/code-core@0.43.0': + resolution: {integrity: sha512-8NtEOI4+hM688Pmd0Qh/aTCS5uovps902V53LGB15DUUwwL+Z5U+Hz7ZYozhyM6W755FQ3x15qtEGIIbDHE5bQ==} - '@lexical/devtools-core@0.42.0': - resolution: {integrity: sha512-8nP8eE9i8JImgSrvInkWFfMCmXVKp3w3VaOvbJysdlK/Zal6xd8EWJEi6elj0mUW5T/oycfipPs2Sfl7Z+n14A==} + '@lexical/devtools-core@0.43.0': + resolution: {integrity: sha512-Hyz8vxvmo0aThXjq3+t0mabozmQeb6U+pxKceAgBSxE9oLWbQmP7RW8jYPZW20bYqEcX1Kgmu+CdW8e3eSF7Kw==} peerDependencies: react: '>=17.x' react-dom: '>=17.x' - '@lexical/dragon@0.42.0': - resolution: {integrity: sha512-/TQzP+7PLJMqq9+MlgQWiJsxS9GOOa8Gp0svCD8vNIOciYmXfd28TR1Go+ZnBWwr7k/2W++3XUYVQU2KUcQsDQ==} + '@lexical/dragon@0.43.0': + resolution: {integrity: sha512-wB2s8uO9DFwS5err1wM+7Yoz3cixtEXy1ZiU8RoJJ7tmjSEmQsLIflAQq8Lic291tCNPs+lSHKjdw+52vi0Z7Q==} - '@lexical/extension@0.42.0': - resolution: {integrity: sha512-rkZq/h8d1BenKRqU4t/zQUVfY/RinMX1Tz7t+Ee3ss0sk+kzP4W+URXNAxpn7r39Vn6wrFBqmCziah3dLAIqPw==} + '@lexical/extension@0.43.0': + resolution: {integrity: sha512-hCFj//3RhsPrCmx8VRTTLIsWtC2n5GG03ZDdyrgmeLzXNuknwDqhzaGAfQi9LSYn+NU+j3yCUROu8pZqaedtvw==} - '@lexical/hashtag@0.42.0': - resolution: {integrity: sha512-WOg5nFOfhabNBXzEIutdWDj+TUHtJEezj6w8jyYDGqZ31gu0cgrXSeV8UIynz/1oj+rpzEeEB7P6ODnwgjt7qA==} + '@lexical/hashtag@0.43.0': + resolution: {integrity: sha512-oCKjY8/jkxJuu8iBnNX0WSLA6ZIYTn+v3NLpJxDqnAFZJCnJ2i/nM8GKzPMzHCDzJVNxbQB08fOptdXf8eN0Fg==} - '@lexical/history@0.42.0': - resolution: {integrity: sha512-YfCZ1ICUt6BCg2ncJWFMuS4yftnB7FEHFRf3qqTSTf6oGZ4IZfzabMNEy47xybUuf7FXBbdaCKJrc/zOM+wGxw==} + '@lexical/history@0.43.0': + resolution: {integrity: sha512-SdrH3xgtUcolVRLihbQwiANQIiwSLdkKBon9oSsZNNnzVgEb7DUQUtJQGf33oW8HHWObIuWkh72W0fN1dZixOw==} - '@lexical/html@0.42.0': - resolution: {integrity: sha512-KgBUDLXehufCsXW3w0XsuoI2xecIhouOishnaNOH4zIA7dAtnNAfdPN/kWrWs0s83gz44OrnqccP+Bprw3UDEQ==} + '@lexical/html@0.43.0': + resolution: {integrity: sha512-C6LpUQlRl9J8Hqpm/C8LCX1ZxFHyD/gvOdV+NuNGnXN06uo0jDDm9SNh/HI3VWvFu9ec4OuzUkQRCafW8WC8fQ==} - '@lexical/link@0.42.0': - resolution: {integrity: sha512-cdeM/+f+kn7aGwW/3FIi6USjl1gBNdEEwg0/ZS+KlYcsy8gxx2e4cyVjsomBu/WU17Qxa0NC0paSr7qEJ/1Fig==} + '@lexical/link@0.43.0': + resolution: {integrity: sha512-jjU9PVWWBA2yEssbVkLQpu1ZIpXi3JwYb+JO20R47hzUm7T8SAPDd/VwU+2tcjqz065YntSGIaQ79dCft7WOJw==} - '@lexical/list@0.42.0': - resolution: {integrity: sha512-TIezILnmIVuvfqEEbcMnsT4xQRlswI6ysHISqsvKL6l5EBhs1gqmNYjHa/Yrfzaq5y52TM1PAtxbFts+G7N6kg==} + '@lexical/list@0.43.0': + resolution: {integrity: sha512-WyYVeQa2x1LrI8Emr9AiWTjSMiZw77Zy7MRnohPTdX/4fu3Njfw61lpoonCNHlv/r5Mb/RHkIAwWjtjcSzwA+g==} - '@lexical/mark@0.42.0': - resolution: {integrity: sha512-H1aGjbMEcL4B8GT7bm/ePHm7j3Wema+wIRNPmxMtXGMz5gpVN3gZlvg2UcUHHJb00SrBA95OUVT5I2nu/KP06w==} + '@lexical/mark@0.43.0': + resolution: {integrity: sha512-pgwR5ia2ECDS0pyQxIrFvMOKjffI6fo2cGwqYg+Jz+ANMqE5zD4PoOUs7FEuZYAKPOAQR9GrETB7YAVSzKjk3Q==} - '@lexical/markdown@0.42.0': - resolution: {integrity: sha512-+mOxgBiumlgVX8Acna+9HjJfSOw1jywufGcAQq3/8S11wZ4gE0u13AaR8LMmU8ydVeOQg09y8PNzGNQ/avZJbg==} + '@lexical/markdown@0.43.0': + resolution: {integrity: sha512-bJYhISQkdRo6XxcajgP9T+c8XAGfkJ/DHnSvM5nyJnHD0vZSH/2RZd2Lgt0eAnMVEt9ECG8cUkR557QSaPeJBA==} - '@lexical/offset@0.42.0': - resolution: {integrity: sha512-V+4af1KmTOnBZrR+kU3e6eD33W/g3QqMPPp3cpFwyXk/dKRc4K8HfyDsSDrjop1mPd9pl3lKSiEmX6uQG8K9XQ==} + '@lexical/offset@0.43.0': + resolution: {integrity: sha512-SYNF16Hk17ePaxFtPcBx3rzSM8yxDYSAzkSOdnUUePSzfTW3DUDzvUfe7q/7QCe/UlZd+4ULI0VjNgYRlR8Uiw==} - '@lexical/overflow@0.42.0': - resolution: {integrity: sha512-wlrHaM27rODJP5m+CTgfZGLg3qWlQ0ptGodcqoGdq6HSbV8nGFY6TvcLMaMtYQ1lm4v9G7Xe9LwjooR6xS3Gug==} + '@lexical/overflow@0.43.0': + resolution: {integrity: sha512-Usm7UfIwydhsg+qMbkBav79AOKqYa32zXY+TXveTqbaA+IAoIl3vFYP9x9ie4cHz/kgrmt/QuQs66cwPefRakg==} - '@lexical/plain-text@0.42.0': - resolution: {integrity: sha512-YWvBwIxLltrIaZDcv0rK4s44P6Yt17yhOb0E+g3+tjF8GGPrrocox+Pglu0m2RHR+G7zULN3isolmWIm/HhWiw==} + '@lexical/plain-text@0.43.0': + resolution: {integrity: sha512-wza2z2+OSsq3UPsFseqsVvnAWvW9s3W/rjQuf6Bk2/Xde2F3R7fvu3kArsaaVPzUKTVeOPCD8hUKIUpxP5OT2g==} - '@lexical/react@0.42.0': - resolution: {integrity: sha512-ujWJXhvlFVVTpwDcnSgEYWRuqUbreZaMB+4bjIDT5r7hkAplUHQndlkeuFHKFiJBasSAreleV7zhXrLL5xa9eA==} + '@lexical/react@0.43.0': + resolution: {integrity: sha512-Ov9PCS7Ghm83fmjSDr6CafDLsuMhf7A7FFfEr4DmDM/6Lw2w0a0QQJP+KqxPqaVaRgeQMJAVg38Zgrvuk3v7tw==} peerDependencies: react: '>=17.x' react-dom: '>=17.x' - '@lexical/rich-text@0.42.0': - resolution: {integrity: sha512-v4YgiM3oK3FZcRrfB+LetvLbQ5aee9MRO9tHf0EFweXg19XnSjHV0cfPAW7TyPxRELzB69+K0Q3AybRlTMjG4Q==} + '@lexical/rich-text@0.43.0': + resolution: {integrity: sha512-y6uhY5X+PBLg8LSCDazSMAkUfA1RwBW6DFOuUKW5SI1DaB/oc/vpQhkR1DYGqXnytMx7hfiK+7lL51ZC0ydeWg==} - '@lexical/selection@0.42.0': - resolution: {integrity: sha512-iWTjLA5BSEuUnvWe9Xwu9FSdZFl3Yi0NqalabXKI+7KgCIlIVXE74y4NvWPUSLkSCB/Z1RPKiHmZqZ1vyu/yGQ==} + '@lexical/selection@0.43.0': + resolution: {integrity: sha512-sdKdXIFggtHxTctvXjTyx2RgWuKOOP3PhrzRJF+COGfckrr/YzDtQCOfyvktElyKEeYXa3t9sx/R6Ep3n074fA==} - '@lexical/table@0.42.0': - resolution: {integrity: sha512-GKiZyjQsHDXRckq5VBrOowyvds51WoVRECfDgcl8pqLMnKyEdCa58E7fkSJrr5LS80Scod+Cjn6SBRzOcdsrKg==} + '@lexical/table@0.43.0': + resolution: {integrity: sha512-oLrOBzRwpmdHDpGVRgwBVgO1ro0w50rMdtOVQ6KsL53ijZ6OiI1YE2ZNOy4qfJvjub+2dgp83gKpB7YcmXAP3w==} - '@lexical/text@0.42.0': - resolution: {integrity: sha512-hT3EYVtBmONXyXe4TFVgtFcG1tf6JhLEuAf95+cOjgFGFSgvkZ/64BPbKLNTj2/9n6cU7EGPUNNwVigCSECJ2g==} + '@lexical/text@0.43.0': + resolution: {integrity: sha512-dtUZ79WaAv3nEYBIWPBZIrjwCUPONN8HcgtReY3qku7WQkzqy3FaMwT/lBa92cUhqsn4ChLIBO3lPFhWRALyvg==} - '@lexical/utils@0.42.0': - resolution: {integrity: sha512-wGNdCW3QWEyVdFiSTLZfFPtiASPyYLcekIiYYZmoRVxVimT/jY+QPfnkO4JYgkO7Z70g/dsg9OhqyQSChQfvkQ==} + '@lexical/utils@0.43.0': + resolution: {integrity: sha512-Y9wzFwoeI9KLDJsztTz45Aobp6sACHSRqUtyjxpCsU0jwL60Tt9rD71QVz7SvpmzxjtnBb040s6LHa6vP0gY+A==} - '@lexical/yjs@0.42.0': - resolution: {integrity: sha512-DplzWnYhfFceGPR+UyDFpZdB287wF/vNOHFuDsBF/nGDdTezvr0Gf60opzyBEF3oXym6p3xTmGygxvO97LZ+vw==} + '@lexical/yjs@0.43.0': + resolution: {integrity: sha512-3ghY9BYZVo3Hg2TmY2+H3Q6+AhhGwNIhnr6mvCbdLBEsnSTXr4VZSPMXN2ae5phCPrI19eHrx4MvFNYodQcqrA==} peerDependencies: yjs: '>=13.5.22' @@ -2251,14 +2345,14 @@ packages: '@next/env@16.0.0': resolution: {integrity: sha512-s5j2iFGp38QsG1LWRQaE2iUY3h1jc014/melHFfLdrsMJPqxqDQwWNwyQTcNoUSGZlCVZuM7t7JDMmSyRilsnA==} - '@next/env@16.2.2': - resolution: {integrity: sha512-LqSGz5+xGk9EL/iBDr2yo/CgNQV6cFsNhRR2xhSXYh7B/hb4nePCxlmDvGEKG30NMHDFf0raqSyOZiQrO7BkHQ==} + '@next/env@16.2.3': + resolution: {integrity: sha512-ZWXyj4uNu4GCWQw9cjRxWlbD+33mcDszIo9iQxFnBX3Wmgq9ulaSJcl6VhuWx5pCWqqD+9W6Wfz7N0lM5lYPMA==} - '@next/eslint-plugin-next@16.2.2': - resolution: {integrity: sha512-IOPbWzDQ+76AtjZioaCjpIY72xNSDMnarZ2GMQ4wjNLvnJEJHqxQwGFhgnIWLV9klb4g/+amg88Tk5OXVpyLTw==} + '@next/eslint-plugin-next@16.2.3': + resolution: {integrity: sha512-nE/b9mht28XJxjTwKs/yk7w4XTaU3t40UHVAky6cjiijdP/SEy3hGsnQMPxmXPTpC7W4/97okm6fngKnvCqVaA==} - '@next/mdx@16.2.2': - resolution: {integrity: sha512-2CbRTXE6sJ7zDAaKXknb5FrrPs46iJeMPzuoBXsAOV/XVnxABGD4mSDusn0VuCoII/KjUZ+zsuo2VFbchYQXng==} + '@next/mdx@16.2.3': + resolution: {integrity: sha512-mm7XNfPagSIcN8jFtozB9toeh5ESES0KCLRoo0gu6xydijvnIrV7dRIK3akNL3Tecc8AHX1FNzYZOZTeFU6RCw==} peerDependencies: '@mdx-js/loader': '>=0.15.0' '@mdx-js/react': '>=0.15.0' @@ -2268,54 +2362,54 @@ packages: '@mdx-js/react': optional: true - '@next/swc-darwin-arm64@16.2.2': - resolution: {integrity: sha512-B92G3ulrwmkDSEJEp9+XzGLex5wC1knrmCSIylyVeiAtCIfvEJYiN3v5kXPlYt5R4RFlsfO/v++aKV63Acrugg==} + '@next/swc-darwin-arm64@16.2.3': + resolution: {integrity: sha512-u37KDKTKQ+OQLvY+z7SNXixwo4Q2/IAJFDzU1fYe66IbCE51aDSAzkNDkWmLN0yjTUh4BKBd+hb69jYn6qqqSg==} engines: {node: '>= 10'} cpu: [arm64] os: [darwin] - '@next/swc-darwin-x64@16.2.2': - resolution: {integrity: sha512-7ZwSgNKJNQiwW0CKhNm9B1WS2L1Olc4B2XY0hPYCAL3epFnugMhuw5TMWzMilQ3QCZcCHoYm9NGWTHbr5REFxw==} + '@next/swc-darwin-x64@16.2.3': + resolution: {integrity: sha512-gHjL/qy6Q6CG3176FWbAKyKh9IfntKZTB3RY/YOJdDFpHGsUDXVH38U4mMNpHVGXmeYW4wj22dMp1lTfmu/bTQ==} engines: {node: '>= 10'} cpu: [x64] os: [darwin] - '@next/swc-linux-arm64-gnu@16.2.2': - resolution: {integrity: sha512-c3m8kBHMziMgo2fICOP/cd/5YlrxDU5YYjAJeQLyFsCqVF8xjOTH/QYG4a2u48CvvZZSj1eHQfBCbyh7kBr30Q==} + '@next/swc-linux-arm64-gnu@16.2.3': + resolution: {integrity: sha512-U6vtblPtU/P14Y/b/n9ZY0GOxbbIhTFuaFR7F4/uMBidCi2nSdaOFhA0Go81L61Zd6527+yvuX44T4ksnf8T+Q==} engines: {node: '>= 10'} cpu: [arm64] os: [linux] libc: [glibc] - '@next/swc-linux-arm64-musl@16.2.2': - resolution: {integrity: sha512-VKLuscm0P/mIfzt+SDdn2+8TNNJ7f0qfEkA+az7OqQbjzKdBxAHs0UvuiVoCtbwX+dqMEL9U54b5wQ/aN3dHeg==} + '@next/swc-linux-arm64-musl@16.2.3': + resolution: {integrity: sha512-/YV0LgjHUmfhQpn9bVoGc4x4nan64pkhWR5wyEV8yCOfwwrH630KpvRg86olQHTwHIn1z59uh6JwKvHq1h4QEw==} engines: {node: '>= 10'} cpu: [arm64] os: [linux] libc: [musl] - '@next/swc-linux-x64-gnu@16.2.2': - resolution: {integrity: sha512-kU3OPHJq6sBUjOk7wc5zJ7/lipn8yGldMoAv4z67j6ov6Xo/JvzA7L7LCsyzzsXmgLEhk3Qkpwqaq/1+XpNR3g==} + '@next/swc-linux-x64-gnu@16.2.3': + resolution: {integrity: sha512-/HiWEcp+WMZ7VajuiMEFGZ6cg0+aYZPqCJD3YJEfpVWQsKYSjXQG06vJP6F1rdA03COD9Fef4aODs3YxKx+RDQ==} engines: {node: '>= 10'} cpu: [x64] os: [linux] libc: [glibc] - '@next/swc-linux-x64-musl@16.2.2': - resolution: {integrity: sha512-CKXRILyErMtUftp+coGcZ38ZwE/Aqq45VMCcRLr2I4OXKrgxIBDXHnBgeX/UMil0S09i2JXaDL3Q+TN8D/cKmg==} + '@next/swc-linux-x64-musl@16.2.3': + resolution: {integrity: sha512-Kt44hGJfZSefebhk/7nIdivoDr3Ugp5+oNz9VvF3GUtfxutucUIHfIO0ZYO8QlOPDQloUVQn4NVC/9JvHRk9hw==} engines: {node: '>= 10'} cpu: [x64] os: [linux] libc: [musl] - '@next/swc-win32-arm64-msvc@16.2.2': - resolution: {integrity: sha512-sS/jSk5VUoShUqINJFvNjVT7JfR5ORYj/+/ZpOYbbIohv/lQfduWnGAycq2wlknbOql2xOR0DoV0s6Xfcy49+g==} + '@next/swc-win32-arm64-msvc@16.2.3': + resolution: {integrity: sha512-O2NZ9ie3Tq6xj5Z5CSwBT3+aWAMW2PIZ4egUi9MaWLkwaehgtB7YZjPm+UpcNpKOme0IQuqDcor7BsW6QBiQBw==} engines: {node: '>= 10'} cpu: [arm64] os: [win32] - '@next/swc-win32-x64-msvc@16.2.2': - resolution: {integrity: sha512-aHaKceJgdySReT7qeck5oShucxWRiiEuwCGK8HHALe6yZga8uyFpLkPgaRw3kkF04U7ROogL/suYCNt/+CuXGA==} + '@next/swc-win32-x64-msvc@16.2.3': + resolution: {integrity: sha512-Ibm29/GgB/ab5n7XKqlStkm54qqZE8v2FnijUPBgrd67FWrac45o/RsNlaOWjme/B5UqeWt/8KM4aWBwA1D2Kw==} engines: {node: '>= 10'} cpu: [x64] os: [win32] @@ -2344,36 +2438,36 @@ packages: resolution: {integrity: sha512-y3SvzjuY1ygnzWA4Krwx/WaJAsTMP11DN+e21A8Fa8PW1oDtVB5NSRW7LWurAiS2oKRkuCgcjTYMkBuBkcPCRg==} engines: {node: '>=12.4.0'} - '@orpc/client@1.13.13': - resolution: {integrity: sha512-jagx/Sa+9K4HEC5lBrUlMSrmR/06hvZctWh93/sKZc8GBk4zM0+71oT1kXQVw1oRYFV2XAq3xy3m6NdM6gfKYA==} + '@orpc/client@1.13.14': + resolution: {integrity: sha512-JQf3lO//UGHmmkd8+9fuWuh1gga1lhWuKnsT19cui7F6WizBy0NdFSVQerOsSy2c1kxOthlD7GnicGgSY2rhQA==} - '@orpc/contract@1.13.13': - resolution: {integrity: sha512-md6iyrYkePBSJNs1VnVEEnAUORMDPHIf3JGRSHxyssIcNakev/iOjP0HvpH0Sx0MlTBhihAJo6uFL8Vpth58Nw==} + '@orpc/contract@1.13.14': + resolution: {integrity: sha512-MfsjaQQDVcs4wHmdl5N/7vkwMnQ41nlojWXyRfRXNJHQczqBzM6sYaTJuUPXlw4YbIu64KHZ5nbbtwNCO5YXsg==} - '@orpc/openapi-client@1.13.13': - resolution: {integrity: sha512-k8od+bD7MqysKPPybAkxgfaNIaNseFPXtbidWkZAdCZ5w34SnDc7QPZJ0PQbyt9n9B+jOXSADNwQSTWSuGpjyA==} + '@orpc/openapi-client@1.13.14': + resolution: {integrity: sha512-mHuj/UL5qLqB1JqrRdlAoUYMidbsry8Cr9QOlOZk1mp7+OZhasFv75UNzxyjNNaSjyd3l2k4UkgpcHK4VSD7tQ==} - '@orpc/shared@1.13.13': - resolution: {integrity: sha512-kNpYOBjHvmgKHla6munWOaEeA0utEfAvoiZpXjiRjjt1RxTibdwQvVHgxRIBNMXfQsb+ON3Q/wDkoaUhvvSnIw==} + '@orpc/shared@1.13.14': + resolution: {integrity: sha512-/ri8ttSX+ppoo01d3LdqQ4Xh6VZS5PYRYmHxTvO8tuyiqBJhN18d8P1VtEW4T9hetoK7JZKeU7EAeqVUnCF9WA==} peerDependencies: '@opentelemetry/api': '>=1.9.0' peerDependenciesMeta: '@opentelemetry/api': optional: true - '@orpc/standard-server-fetch@1.13.13': - resolution: {integrity: sha512-Lffy26+WtCQkwOUacsrdyeJF1GNzrhm75O3LXKVFXqmSdyVVdyI6zuqLn/YKGODU2L9IqGxZ2CwsV2tE298SSA==} + '@orpc/standard-server-fetch@1.13.14': + resolution: {integrity: sha512-k2zkCi98qd3NkvWhUX/Yece/qjB+o07g/gHC509YB5HbOGtBV/da3eseYjFyzBx5LDxMz28BOALI8/q/YDhKZw==} - '@orpc/standard-server-peer@1.13.13': - resolution: {integrity: sha512-FeWAbXfnZDPYQRajM0hD6GJvHeC3DZILngAjdcLHy5zt3riu6nL2lLPSWDv5yNWWscmYU+CfKmXWd0Z01BOeWA==} + '@orpc/standard-server-peer@1.13.14': + resolution: {integrity: sha512-jinseQ8bn7XQOHjsCXhR1HiF3wAwn1xEQPpnE/av0PoOi4h0ATvhZjDLaRHvRavs8YwrIqwSuAuYT/hDxON58A==} - '@orpc/standard-server@1.13.13': - resolution: {integrity: sha512-9pgS8XvauuRQElkyuD8F3om+nN0KBEnTkhblDHCBzkZERjWkmfirJmshQrWHoFaDTk+nnXHIaY6d7TBTxXdPRw==} + '@orpc/standard-server@1.13.14': + resolution: {integrity: sha512-o8PaDERiwREFQpIZO0mQ1PhguchyNzrf1w7m3eK1JB4rPjHu1VJUgqCpy/sV3Id5ji4bX/gKHEC3NZjDX6mEWQ==} - '@orpc/tanstack-query@1.13.13': - resolution: {integrity: sha512-6+Cheaiu+RDPdszdeRKoBINrF8MQp64zSeZB+L3gqgF43zlYDhLOgELZMzYa6U3U6bLk4rmIeubpk+i1kACfRg==} + '@orpc/tanstack-query@1.13.14': + resolution: {integrity: sha512-5rq1Z1anVTVBseYeNBi5RJSgWPxpD0MqK7MYej3xnt56jjc6mFmWpUGNz9xy0BXPh3KmA/xDTNuB23kKgJ5JmQ==} peerDependencies: - '@orpc/client': 1.13.13 + '@orpc/client': 1.13.14 '@tanstack/query-core': '>=5.80.2' '@ota-meshi/ast-token-store@0.3.0': @@ -2507,18 +2601,15 @@ packages: cpu: [x64] os: [win32] - '@oxc-project/runtime@0.123.0': - resolution: {integrity: sha512-wRf0z8saz9tHLcK3YeTeBmwISrpy4bBimvKxUmryiIhbt+ZJb0nwwJNL3D8xpeWbNfZlGSlzRBZbfcbApIGZJw==} + '@oxc-project/runtime@0.124.0': + resolution: {integrity: sha512-sSg6n37J3w3mM4odFvRqzQENf6+qxKnvStr/gU0FgRRg1VE/4MqryLd9PJmE0a7K5xlDfbrctBtSagaFH6ij9Q==} engines: {node: ^20.19.0 || >=22.12.0} '@oxc-project/types@0.121.0': resolution: {integrity: sha512-CGtOARQb9tyv7ECgdAlFxi0Fv7lmzvmlm2rpD/RdijOO9rfk/JvB1CjT8EnoD+tjna/IYgKKw3IV7objRb+aYw==} - '@oxc-project/types@0.122.0': - resolution: {integrity: sha512-oLAl5kBpV4w69UtFZ9xqcmTi+GENWOcPF7FCrczTiBbmC0ibXxCwyvZGbO39rCVEuLGAZM84DH0pUIyyv/YJzA==} - - '@oxc-project/types@0.123.0': - resolution: {integrity: sha512-YtECP/y8Mj1lSHiUWGSRzy/C6teUKlS87dEfuVKT09LgQbUsBW1rNg+MiJ4buGu3yuADV60gbIvo9/HplA56Ew==} + '@oxc-project/types@0.124.0': + resolution: {integrity: sha512-VBFWMTBvHxS11Z5Lvlr3IWgrwhMTXV+Md+EQF0Xf60+wAdsGFTBx7X7K/hP4pi8N7dcm1RvcHwDxZ16Qx8keUg==} '@oxc-resolver/binding-android-arm-eabi@11.19.1': resolution: {integrity: sha512-aUs47y+xyXHUKlbhqHUjBABjvycq6YSD7bpxSW7vplUmdzAlJ93yXY6ZR0c1o1x5A/QKbENCvs3+NlY8IpIVzg==} @@ -2628,124 +2719,124 @@ packages: cpu: [x64] os: [win32] - '@oxfmt/binding-android-arm-eabi@0.43.0': - resolution: {integrity: sha512-CgU2s+/9hHZgo0IxVxrbMPrMj+tJ6VM3mD7Mr/4oiz4FNTISLoCvRmB5nk4wAAle045RtRjd86m673jwPyb1OQ==} + '@oxfmt/binding-android-arm-eabi@0.45.0': + resolution: {integrity: sha512-A/UMxFob1fefCuMeGxQBulGfFE38g2Gm23ynr3u6b+b7fY7/ajGbNsa3ikMIkGMLJW/TRoQaMoP1kME7S+815w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [android] - '@oxfmt/binding-android-arm64@0.43.0': - resolution: {integrity: sha512-T9OfRwjA/EdYxAqbvR7TtqLv5nIrwPXuCtTwOHtS7aR9uXyn74ZYgzgTo6/ZwvTq9DY4W+DsV09hB2EXgn9EbA==} + '@oxfmt/binding-android-arm64@0.45.0': + resolution: {integrity: sha512-L63z4uZmHjgvvqvMJD7mwff8aSBkM0+X4uFr6l6U5t6+Qc9DCLVZWIunJ7Gm4fn4zHPdSq6FFQnhu9yqqobxIg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [android] - '@oxfmt/binding-darwin-arm64@0.43.0': - resolution: {integrity: sha512-o3i49ZUSJWANzXMAAVY1wnqb65hn4JVzwlRQ5qfcwhRzIA8lGVaud31Q3by5ALHPrksp5QEaKCQF9aAS3TXpZA==} + '@oxfmt/binding-darwin-arm64@0.45.0': + resolution: {integrity: sha512-UV34dd623FzqT+outIGndsCA/RBB+qgB3XVQhgmmJ9PJwa37NzPC9qzgKeOhPKxVk2HW+JKldQrVL54zs4Noww==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [darwin] - '@oxfmt/binding-darwin-x64@0.43.0': - resolution: {integrity: sha512-vWECzzCFkb0kK6jaHjbtC5sC3adiNWtqawFCxhpvsWlzVeKmv5bNvkB4nux+o4JKWTpHCM57NDK/MeXt44txmA==} + '@oxfmt/binding-darwin-x64@0.45.0': + resolution: {integrity: sha512-pMNJv0CMa1pDefVPeNbuQxibh8ITpWDFEhMC/IBB9Zlu76EbgzYwrzI4Cb11mqX2+rIYN70UTrh3z06TM59ptQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [darwin] - '@oxfmt/binding-freebsd-x64@0.43.0': - resolution: {integrity: sha512-rgz8JpkKiI/umOf7fl9gwKyQasC8bs5SYHy6g7e4SunfLBY3+8ATcD5caIg8KLGEtKFm5ujKaH8EfjcmnhzTLg==} + '@oxfmt/binding-freebsd-x64@0.45.0': + resolution: {integrity: sha512-xTcRoxbbo61sW2+ZRPeH+vp/o9G8gkdhiVumFU+TpneiPm14c79l6GFlxPXlCE9bNWikigbsrvJw46zCVAQFfg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [freebsd] - '@oxfmt/binding-linux-arm-gnueabihf@0.43.0': - resolution: {integrity: sha512-nWYnF3vIFzT4OM1qL/HSf1Yuj96aBuKWSaObXHSWliwAk2rcj7AWd6Lf7jowEBQMo4wCZVnueIGw/7C4u0KTBQ==} + '@oxfmt/binding-linux-arm-gnueabihf@0.45.0': + resolution: {integrity: sha512-hWL8Hdni+3U1mPFx1UtWeGp3tNb6EhBAUHRMbKUxVkOp3WwoJbpVO2bfUVbS4PfpledviXXNHSTl1veTa6FhkQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxfmt/binding-linux-arm-musleabihf@0.43.0': - resolution: {integrity: sha512-sFg+NWJbLfupYTF4WELHAPSnLPOn1jiDZ33Z1jfDnTaA+cC3iB35x0FMMZTFdFOz3icRIArncwCcemJFGXu6TQ==} + '@oxfmt/binding-linux-arm-musleabihf@0.45.0': + resolution: {integrity: sha512-6Blt/0OBT7vvfQpqYuYbpbFLPqSiaYpEJzUUWhinPEuADypDbtV1+LdjM0vYBNGPvnj85ex7lTerEX6JGcPt9w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxfmt/binding-linux-arm64-gnu@0.43.0': - resolution: {integrity: sha512-MelWqv68tX6wZEILDrTc9yewiGXe7im62+5x0bNXlCYFOZdA+VnYiJfAihbROsZ5fm90p9C3haFrqjj43XnlAA==} + '@oxfmt/binding-linux-arm64-gnu@0.45.0': + resolution: {integrity: sha512-jLjoLfe+hGfjhA8hNBSdw85yCA8ePKq7ME4T+g6P9caQXvmt6IhE2X7iVjnVdkmYUWEzZrxlh4p6RkDmAMJY/A==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-arm64-musl@0.43.0': - resolution: {integrity: sha512-ROaWfYh+6BSJ1Arwy5ujijTlwnZetxDxzBpDc1oBR4d7rfrPBqzeyjd5WOudowzQUgyavl2wEpzn1hw3jWcqLA==} + '@oxfmt/binding-linux-arm64-musl@0.45.0': + resolution: {integrity: sha512-XQKXZIKYJC3GQJ8FnD3iMntpw69Wd9kDDK/Xt79p6xnFYlGGxSNv2vIBvRTDg5CKByWFWWZLCRDOXoP/m6YN4g==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [musl] - '@oxfmt/binding-linux-ppc64-gnu@0.43.0': - resolution: {integrity: sha512-PJRs/uNxmFipJJ8+SyKHh7Y7VZIKQicqrrBzvfyM5CtKi8D7yZKTwUOZV3ffxmiC2e7l1SDJpkBEOyue5NAFsg==} + '@oxfmt/binding-linux-ppc64-gnu@0.45.0': + resolution: {integrity: sha512-+g5RiG+xOkdrCWkKodv407nTvMq4vYM18Uox2MhZBm/YoqFxxJpWKsloskFFG5NU13HGPw1wzYjjOVcyd9moCA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ppc64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-riscv64-gnu@0.43.0': - resolution: {integrity: sha512-j6biGAgzIhj+EtHXlbNumvwG7XqOIdiU4KgIWRXAEj/iUbHKukKW8eXa4MIwpQwW1YkxovduKtzEAPnjlnAhVQ==} + '@oxfmt/binding-linux-riscv64-gnu@0.45.0': + resolution: {integrity: sha512-V7dXKoSyEbWAkkSF4JJNtF+NJZDmJoSarSoP30WCsB3X636Rehd3CvxBj49FIJxEBFWhvcUjGSHVeU8Erck1bQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-riscv64-musl@0.43.0': - resolution: {integrity: sha512-RYWxAcslKxvy7yri24Xm9cmD0RiANaiEPs007EFG6l9h1ChM69Q5SOzACaCoz4Z9dEplnhhneeBaTWMEdpgIbA==} + '@oxfmt/binding-linux-riscv64-musl@0.45.0': + resolution: {integrity: sha512-Vdelft1sAEYojVGgcODEFXSWYQYlIvoyIGWebKCuUibd1tvS1TjTx413xG2ZLuHpYj45CkN/ztMLMX6jrgqpgg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [musl] - '@oxfmt/binding-linux-s390x-gnu@0.43.0': - resolution: {integrity: sha512-DT6Q8zfQQy3jxpezAsBACEHNUUixKSYTwdXeXojNHe4DQOoxjPdjr3Szu6BRNjxLykZM/xMNmp9ElOIyDppwtw==} + '@oxfmt/binding-linux-s390x-gnu@0.45.0': + resolution: {integrity: sha512-RR7xKgNpqwENnK0aYCGYg0JycY2n93J0reNjHyes+I9Gq52dH95x+CBlnlAQHCPfz6FGnKA9HirgUl14WO6o7w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [s390x] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-x64-gnu@0.43.0': - resolution: {integrity: sha512-R8Yk7iYcuZORXmCfFZClqbDxRZgZ9/HEidUuBNdoX8Ptx07cMePnMVJ/woB84lFIDjh2ROHVaOP40Ds3rBXFqg==} + '@oxfmt/binding-linux-x64-gnu@0.45.0': + resolution: {integrity: sha512-U/QQ0+BQNSHxjuXR/utvXnQ50Vu5kUuqEomZvQ1/3mhgbBiMc2WU9q5kZ5WwLp3gnFIx9ibkveoRSe2EZubkqg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-x64-musl@0.43.0': - resolution: {integrity: sha512-F2YYqyvnQNvi320RWZNAvsaWEHwmW3k4OwNJ1hZxRKXupY63expbBaNp6jAgvYs7y/g546vuQnGHQuCBhslhLQ==} + '@oxfmt/binding-linux-x64-musl@0.45.0': + resolution: {integrity: sha512-o5TLOUCF0RWQjsIS06yVC+kFgp092/yLe6qBGSUvtnmTVw9gxjpdQSXc3VN5Cnive4K11HNstEZF8ROKHfDFSw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [musl] - '@oxfmt/binding-openharmony-arm64@0.43.0': - resolution: {integrity: sha512-OE6TdietLXV3F6c7pNIhx/9YC1/2YFwjU9DPc/fbjxIX19hNIaP1rS0cFjCGJlGX+cVJwIKWe8Mos+LdQ1yAJw==} + '@oxfmt/binding-openharmony-arm64@0.45.0': + resolution: {integrity: sha512-RnGcV3HgPuOjsGx/k9oyRNKmOp+NBLGzZTdPDYbc19r7NGeYPplnUU/BfU35bX2Y/O4ejvHxcfkvW2WoYL/gsg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [openharmony] - '@oxfmt/binding-win32-arm64-msvc@0.43.0': - resolution: {integrity: sha512-0nWK6a7pGkbdoypfVicmV9k/N1FwjPZENoqhlTU+5HhZnAhpIO3za30nEE33u6l6tuy9OVfpdXUqxUgZ+4lbZw==} + '@oxfmt/binding-win32-arm64-msvc@0.45.0': + resolution: {integrity: sha512-v3Vj7iKKsUFwt9w5hsqIIoErKVoENC6LoqfDlteOQ5QMDCXihlqLoxpmviUhXnNncg4zV6U9BPwlBbwa+qm4wg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [win32] - '@oxfmt/binding-win32-ia32-msvc@0.43.0': - resolution: {integrity: sha512-9aokTR4Ft+tRdvgN/pKzSkVy2ksc4/dCpDm9L/xFrbIw0yhLtASLbvoG/5WOTUh/BRPPnfGTsWznEqv0dlOmhA==} + '@oxfmt/binding-win32-ia32-msvc@0.45.0': + resolution: {integrity: sha512-N8yotPBX6ph0H3toF4AEpdCeVPrdcSetj+8eGiZGsrLsng3bs/Q5HPu4bbSxip5GBPx5hGbGHrZwH4+rcrjhHA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ia32] os: [win32] - '@oxfmt/binding-win32-x64-msvc@0.43.0': - resolution: {integrity: sha512-4bPgdQux2ZLWn3bf2TTXXMHcJB4lenmuxrLqygPmvCJ104Yqzj1UctxSRzR31TiJ4MLaG22RK8dUsVpJtrCz5g==} + '@oxfmt/binding-win32-x64-msvc@0.45.0': + resolution: {integrity: sha512-w5MMTRCK1dpQeRA+HHqXQXyN33DlG/N2LOYxJmaT4fJjcmZrbNnqw7SmIk7I2/a2493PPLZ+2E/Ar6t2iKVMug==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [win32] @@ -2780,124 +2871,124 @@ packages: cpu: [x64] os: [win32] - '@oxlint/binding-android-arm-eabi@1.58.0': - resolution: {integrity: sha512-1T7UN3SsWWxpWyWGn1cT3ASNJOo+pI3eUkmEl7HgtowapcV8kslYpFQcYn431VuxghXakPNlbjRwhqmR37PFOg==} + '@oxlint/binding-android-arm-eabi@1.60.0': + resolution: {integrity: sha512-YdeJKaZckDQL1qa62a1aKq/goyq48aX3yOxaaWqWb4sau4Ee4IiLbamftNLU3zbePky6QsDj6thnSSzHRBjDfA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [android] - '@oxlint/binding-android-arm64@1.58.0': - resolution: {integrity: sha512-GryzujxuiRv2YFF7bRy8mKcxlbuAN+euVUtGJt9KKbLT8JBUIosamVhcthLh+VEr6KE6cjeVMAQxKAzJcoN7dg==} + '@oxlint/binding-android-arm64@1.60.0': + resolution: {integrity: sha512-7ANS7PpXCfq84xZQ8E5WPs14gwcuPcl+/8TFNXfpSu0CQBXz3cUo2fDpHT8v8HJN+Ut02eacvMAzTnc9s6X4tw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [android] - '@oxlint/binding-darwin-arm64@1.58.0': - resolution: {integrity: sha512-7/bRSJIwl4GxeZL9rPZ11anNTyUO9epZrfEJH/ZMla3+/gbQ6xZixh9nOhsZ0QwsTW7/5J2A/fHbD1udC5DQQA==} + '@oxlint/binding-darwin-arm64@1.60.0': + resolution: {integrity: sha512-pJsgd9AfplLGBm1fIr25V6V14vMrayhx4uIQvlfH7jWs2SZwSrvi3TfgfJySB8T+hvyEH8K2zXljQiUnkgUnfQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [darwin] - '@oxlint/binding-darwin-x64@1.58.0': - resolution: {integrity: sha512-EqdtJSiHweS2vfILNrpyJ6HUwpEq2g7+4Zx1FPi4hu3Hu7tC3znF6ufbXO8Ub2LD4mGgznjI7kSdku9NDD1Mkg==} + '@oxlint/binding-darwin-x64@1.60.0': + resolution: {integrity: sha512-Ue1aXHX49ivwflKqGJc7zcd/LeLgbhaTcDCQStgx5x06AXgjEAZmvrlMuIkWd4AL4FHQe6QJ9f33z04Cg448VQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [darwin] - '@oxlint/binding-freebsd-x64@1.58.0': - resolution: {integrity: sha512-VQt5TH4M42mY20F545G637RKxV/yjwVtKk2vfXuazfReSIiuvWBnv+FVSvIV5fKVTJNjt3GSJibh6JecbhGdBw==} + '@oxlint/binding-freebsd-x64@1.60.0': + resolution: {integrity: sha512-YCyQzsQtusQw+gNRW9rRTifSO+Dt/+dtCl2NHoDMZqJlRTEZ/Oht9YnuporI9yiTx7+cB+eqzX3MtHHVHGIWhg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [freebsd] - '@oxlint/binding-linux-arm-gnueabihf@1.58.0': - resolution: {integrity: sha512-fBYcj4ucwpAtjJT3oeBdFBYKvNyjRSK+cyuvBOTQjh0jvKp4yeA4S/D0IsCHus/VPaNG5L48qQkh+Vjy3HL2/Q==} + '@oxlint/binding-linux-arm-gnueabihf@1.60.0': + resolution: {integrity: sha512-c7dxM2Zksa45Qw16i2iGY3Fti2NirJ38FrsBsKw+qcJ0OtqTsBgKJLF0xV+yLG56UH01Z8WRPgsw31e0MoRoGQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxlint/binding-linux-arm-musleabihf@1.58.0': - resolution: {integrity: sha512-0BeuFfwlUHlJ1xpEdSD1YO3vByEFGPg36uLjK1JgFaxFb4W6w17F8ET8sz5cheZ4+x5f2xzdnRrrWv83E3Yd8g==} + '@oxlint/binding-linux-arm-musleabihf@1.60.0': + resolution: {integrity: sha512-ZWALoA42UYqBEP1Tbw9OWURgFGS1nWj2AAvLdY6ZcGx/Gj93qVCBKjcvwXMupZibYwFbi9s/rzqkZseb/6gVtQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxlint/binding-linux-arm64-gnu@1.58.0': - resolution: {integrity: sha512-TXlZgnPTlxrQzxG9ZXU7BNwx1Ilrr17P3GwZY0If2EzrinqRH3zXPc3HrRcBJgcsoZNMuNL5YivtkJYgp467UQ==} + '@oxlint/binding-linux-arm64-gnu@1.60.0': + resolution: {integrity: sha512-tpy+1w4p9hN5CicMCxqNy6ymfRtV5ayE573vFNjp1k1TN/qhLFgflveZoE/0++RlkHikBz2vY545NWm/hp7big==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-arm64-musl@1.58.0': - resolution: {integrity: sha512-zSoYRo5dxHLcUx93Stl2hW3hSNjPt99O70eRVWt5A1zwJ+FPjeCCANCD2a9R4JbHsdcl11TIQOjyigcRVOH2mw==} + '@oxlint/binding-linux-arm64-musl@1.60.0': + resolution: {integrity: sha512-eDYDXZGhQAXyn6GwtwiX/qcLS0HlOLPJ/+iiIY8RYr+3P8oKBmgKxADLlniL6FtWfE7pPk7IGN9/xvDEvDvFeg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [musl] - '@oxlint/binding-linux-ppc64-gnu@1.58.0': - resolution: {integrity: sha512-NQ0U/lqxH2/VxBYeAIvMNUK1y0a1bJ3ZicqkF2c6wfakbEciP9jvIE4yNzCFpZaqeIeRYaV7AVGqEO1yrfVPjA==} + '@oxlint/binding-linux-ppc64-gnu@1.60.0': + resolution: {integrity: sha512-nxehly5XYBHUWI9VJX1bqCf9j/B43DaK/aS/T1fcxCpX3PA4Rm9BB54nPD1CKayT8xg6REN1ao+01hSRNgy8OA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ppc64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-riscv64-gnu@1.58.0': - resolution: {integrity: sha512-X9J+kr3gIC9FT8GuZt0ekzpNUtkBVzMVU4KiKDSlocyQuEgi3gBbXYN8UkQiV77FTusLDPsovjo95YedHr+3yg==} + '@oxlint/binding-linux-riscv64-gnu@1.60.0': + resolution: {integrity: sha512-j1qf/NaUfOWQutjeoooNG1Q0zsK0XGmSu1uDLq3cctquRF3j7t9Hxqf/76ehCc5GEUAanth2W4Fa+XT1RFg/nw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-riscv64-musl@1.58.0': - resolution: {integrity: sha512-CDze3pi1OO3Wvb/QsXjmLEY4XPKGM6kIo82ssNOgmcl1IdndF9VSGAE38YLhADWmOac7fjqhBw82LozuUVxD0Q==} + '@oxlint/binding-linux-riscv64-musl@1.60.0': + resolution: {integrity: sha512-YELKPRefQ/q/h3RUmeRfPCUhh2wBvgV1RyZ/F9M9u8cDyXsQW2ojv1DeWQTt466yczDITjZnIOg/s05pk7Ve2A==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [musl] - '@oxlint/binding-linux-s390x-gnu@1.58.0': - resolution: {integrity: sha512-b/89glbxFaEAcA6Uf1FvCNecBJEgcUTsV1quzrqXM/o4R1M4u+2KCVuyGCayN2UpsRWtGGLb+Ver0tBBpxaPog==} + '@oxlint/binding-linux-s390x-gnu@1.60.0': + resolution: {integrity: sha512-JkO3C6Gki7Y6h/MiIkFKvHFOz98/YWvQ4WYbK9DLXACMP2rjULzkeGyAzorJE5S1dzLQGFgeqvN779kSFwoV1g==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [s390x] os: [linux] libc: [glibc] - '@oxlint/binding-linux-x64-gnu@1.58.0': - resolution: {integrity: sha512-0/yYpkq9VJFCEcuRlrViGj8pJUFFvNS4EkEREaN7CB1EcLXJIaVSSa5eCihwBGXtOZxhnblWgxks9juRdNQI7w==} + '@oxlint/binding-linux-x64-gnu@1.60.0': + resolution: {integrity: sha512-XjKHdFVCpZZZSWBCKyyqCq65s2AKXykMXkjLoKYODrD+f5toLhlwsMESscu8FbgnJQ4Y/dpR/zdazsahmgBJIA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-x64-musl@1.58.0': - resolution: {integrity: sha512-hr6FNvmcAXiH+JxSvaJ4SJ1HofkdqEElXICW9sm3/Rd5eC3t7kzvmLyRAB3NngKO2wzXRCAm4Z/mGWfrsS4X8w==} + '@oxlint/binding-linux-x64-musl@1.60.0': + resolution: {integrity: sha512-js29ZWIuPhNWzY8NC7KoffEMEeWG105vbmm+8EOJsC+T/jHBiKIJEUF78+F/IrgEWMMP9N0kRND4Pp75+xAhKg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [musl] - '@oxlint/binding-openharmony-arm64@1.58.0': - resolution: {integrity: sha512-R+O368VXgRql1K6Xar+FEo7NEwfo13EibPMoTv3sesYQedRXd6m30Dh/7lZMxnrQVFfeo4EOfYIP4FpcgWQNHg==} + '@oxlint/binding-openharmony-arm64@1.60.0': + resolution: {integrity: sha512-H+PUITKHk04stFpWj3x3Kg08Afp/bcXSBi0EhasR5a0Vw7StXHTzdl655PUI0fB4qdh2Wsu6Dsi+3ACxPoyQnA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [openharmony] - '@oxlint/binding-win32-arm64-msvc@1.58.0': - resolution: {integrity: sha512-Q0FZiAY/3c4YRj4z3h9K1PgaByrifrfbBoODSeX7gy97UtB7pySPUQfC2B/GbxWU6k7CzQrRy5gME10PltLAFQ==} + '@oxlint/binding-win32-arm64-msvc@1.60.0': + resolution: {integrity: sha512-WA/yc7f7ZfCefBXVzNHn1Ztulb1EFwNBb4jMZ6pjML0zz6pHujlF3Q3jySluz3XHl/GNeMTntG1seUBWVMlMag==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [win32] - '@oxlint/binding-win32-ia32-msvc@1.58.0': - resolution: {integrity: sha512-Y8FKBABrSPp9H0QkRLHDHOSUgM/309a3IvOVgPcVxYcX70wxJrk608CuTg7w+C6vEd724X5wJoNkBcGYfH7nNQ==} + '@oxlint/binding-win32-ia32-msvc@1.60.0': + resolution: {integrity: sha512-33YxL1sqwYNZXtn3MD/4dno6s0xeedXOJlT1WohkVD565WvohClZUr7vwKdAk954n4xiEWJkewiCr+zLeq7AeA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ia32] os: [win32] - '@oxlint/binding-win32-x64-msvc@1.58.0': - resolution: {integrity: sha512-bCn5rbiz5My+Bj7M09sDcnqW0QJyINRVxdZ65x1/Y2tGrMwherwK/lpk+HRQCKvXa8pcaQdF5KY5j54VGZLwNg==} + '@oxlint/binding-win32-x64-msvc@1.60.0': + resolution: {integrity: sha512-JOro4ZcfBLamJCyfURQmOQByoorgOdx3ZjAkSqnb/CyG/i+lN3KoV5LAgk5ZAW6DPq7/Cx7n23f8DuTWXTWgyQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [win32] @@ -3002,8 +3093,8 @@ packages: '@polka/url@1.0.0-next.29': resolution: {integrity: sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==} - '@preact/signals-core@1.14.0': - resolution: {integrity: sha512-AowtCcCU/33lFlh1zRFf/u+12rfrhtNakj7UpaGEsmMwUKpKWMVvcktOGcwBBNiB4lWrZWc01LhiyyzVklJyaQ==} + '@preact/signals-core@1.14.1': + resolution: {integrity: sha512-vxPpfXqrwUe9lpjqfYNjAF/0RF/eFGeLgdJzdmIIZjpOnTmGmAB4BjWone562mJGMRP4frU6iZ6ei3PDsu52Ng==} '@radix-ui/primitive@1.1.3': resolution: {integrity: sha512-JTF99U/6XIjCBo0wqkU5sK10glYe27MRRsfwoiq5zzOEZLHU3A3KCMa5X/azekYRCJ0HlwI0crAXS/5dEHTzDg==} @@ -3284,106 +3375,8 @@ packages: resolution: {integrity: sha512-UuBOt7BOsKVOkFXRe4Ypd/lADuNIfqJXv8GvHqtXaTYXPPKkj2nS2zPllVsrtRjcomDhIJVBnZwfmlI222WH8g==} engines: {node: '>=14.0.0'} - '@rolldown/binding-android-arm64@1.0.0-rc.12': - resolution: {integrity: sha512-pv1y2Fv0JybcykuiiD3qBOBdz6RteYojRFY1d+b95WVuzx211CRh+ytI/+9iVyWQ6koTh5dawe4S/yRfOFjgaA==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm64] - os: [android] - - '@rolldown/binding-darwin-arm64@1.0.0-rc.12': - resolution: {integrity: sha512-cFYr6zTG/3PXXF3pUO+umXxt1wkRK/0AYT8lDwuqvRC+LuKYWSAQAQZjCWDQpAH172ZV6ieYrNnFzVVcnSflAg==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm64] - os: [darwin] - - '@rolldown/binding-darwin-x64@1.0.0-rc.12': - resolution: {integrity: sha512-ZCsYknnHzeXYps0lGBz8JrF37GpE9bFVefrlmDrAQhOEi4IOIlcoU1+FwHEtyXGx2VkYAvhu7dyBf75EJQffBw==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [x64] - os: [darwin] - - '@rolldown/binding-freebsd-x64@1.0.0-rc.12': - resolution: {integrity: sha512-dMLeprcVsyJsKolRXyoTH3NL6qtsT0Y2xeuEA8WQJquWFXkEC4bcu1rLZZSnZRMtAqwtrF/Ib9Ddtpa/Gkge9Q==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [x64] - os: [freebsd] - - '@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.12': - resolution: {integrity: sha512-YqWjAgGC/9M1lz3GR1r1rP79nMgo3mQiiA+Hfo+pvKFK1fAJ1bCi0ZQVh8noOqNacuY1qIcfyVfP6HoyBRZ85Q==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm] - os: [linux] - - '@rolldown/binding-linux-arm64-gnu@1.0.0-rc.12': - resolution: {integrity: sha512-/I5AS4cIroLpslsmzXfwbe5OmWvSsrFuEw3mwvbQ1kDxJ822hFHIx+vsN/TAzNVyepI/j/GSzrtCIwQPeKCLIg==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm64] - os: [linux] - libc: [glibc] - - '@rolldown/binding-linux-arm64-musl@1.0.0-rc.12': - resolution: {integrity: sha512-V6/wZztnBqlx5hJQqNWwFdxIKN0m38p8Jas+VoSfgH54HSj9tKTt1dZvG6JRHcjh6D7TvrJPWFGaY9UBVOaWPw==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm64] - os: [linux] - libc: [musl] - - '@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.12': - resolution: {integrity: sha512-AP3E9BpcUYliZCxa3w5Kwj9OtEVDYK6sVoUzy4vTOJsjPOgdaJZKFmN4oOlX0Wp0RPV2ETfmIra9x1xuayFB7g==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [ppc64] - os: [linux] - libc: [glibc] - - '@rolldown/binding-linux-s390x-gnu@1.0.0-rc.12': - resolution: {integrity: sha512-nWwpvUSPkoFmZo0kQazZYOrT7J5DGOJ/+QHHzjvNlooDZED8oH82Yg67HvehPPLAg5fUff7TfWFHQS8IV1n3og==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [s390x] - os: [linux] - libc: [glibc] - - '@rolldown/binding-linux-x64-gnu@1.0.0-rc.12': - resolution: {integrity: sha512-RNrafz5bcwRy+O9e6P8Z/OCAJW/A+qtBczIqVYwTs14pf4iV1/+eKEjdOUta93q2TsT/FI0XYDP3TCky38LMAg==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [x64] - os: [linux] - libc: [glibc] - - '@rolldown/binding-linux-x64-musl@1.0.0-rc.12': - resolution: {integrity: sha512-Jpw/0iwoKWx3LJ2rc1yjFrj+T7iHZn2JDg1Yny1ma0luviFS4mhAIcd1LFNxK3EYu3DHWCps0ydXQ5i/rrJ2ig==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [x64] - os: [linux] - libc: [musl] - - '@rolldown/binding-openharmony-arm64@1.0.0-rc.12': - resolution: {integrity: sha512-vRugONE4yMfVn0+7lUKdKvN4D5YusEiPilaoO2sgUWpCvrncvWgPMzK00ZFFJuiPgLwgFNP5eSiUlv2tfc+lpA==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm64] - os: [openharmony] - - '@rolldown/binding-wasm32-wasi@1.0.0-rc.12': - resolution: {integrity: sha512-ykGiLr/6kkiHc0XnBfmFJuCjr5ZYKKofkx+chJWDjitX+KsJuAmrzWhwyOMSHzPhzOHOy7u9HlFoa5MoAOJ/Zg==} - engines: {node: '>=14.0.0'} - cpu: [wasm32] - - '@rolldown/binding-win32-arm64-msvc@1.0.0-rc.12': - resolution: {integrity: sha512-5eOND4duWkwx1AzCxadcOrNeighiLwMInEADT0YM7xeEOOFcovWZCq8dadXgcRHSf3Ulh1kFo/qvzoFiCLOL1Q==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [arm64] - os: [win32] - - '@rolldown/binding-win32-x64-msvc@1.0.0-rc.12': - resolution: {integrity: sha512-PyqoipaswDLAZtot351MLhrlrh6lcZPo2LSYE+VDxbVk24LVKAGOuE4hb8xZQmrPAuEtTZW8E6D2zc5EUZX4Lw==} - engines: {node: ^20.19.0 || >=22.12.0} - cpu: [x64] - os: [win32] - - '@rolldown/pluginutils@1.0.0-rc.12': - resolution: {integrity: sha512-HHMwmarRKvoFsJorqYlFeFRzXZqCt2ETQlEDOb9aqssrnVBB1/+xgTGtuTrIk5vzLNX1MjMtTf7W9z3tsSbrxw==} - - '@rolldown/pluginutils@1.0.0-rc.13': - resolution: {integrity: sha512-3ngTAv6F/Py35BsYbeeLeecvhMKdsKm4AoOETVhAA+Qc8nrA2I0kF7oa93mE9qnIurngOSpMnQ0x2nQY2FPviA==} + '@rolldown/pluginutils@1.0.0-rc.15': + resolution: {integrity: sha512-UromN0peaE53IaBRe9W7CjrZgXl90fqGpK+mIZbA3qSTeYqg3pqpROBdIPvOG3F5ereDHNwoHBI2e50n1BDr1g==} '@rolldown/pluginutils@1.0.0-rc.7': resolution: {integrity: sha512-qujRfC8sFVInYSPPMLQByRh7zhwkGFS4+tyMQ83srV1qrxL4g8E2tyxVVyxd0+8QeBM1mIk9KbWxkegRr76XzA==} @@ -3544,32 +3537,32 @@ packages: cpu: [x64] os: [win32] - '@sentry-internal/browser-utils@10.47.0': - resolution: {integrity: sha512-bVFRAeJWMBcBCvJKIFCMJ1/yQToL4vPGqfmlnDZeypcxkqUDKQ/Y3ziLHXoDL2sx0lagcgU2vH1QhCQ67Aujjw==} + '@sentry-internal/browser-utils@10.48.0': + resolution: {integrity: sha512-SCiTLBXzugFKxev6NoKYBIhQoDk0gUh0AVVVepCBqfCJiWBG01Zvv0R5tCVohr4cWRllkQ8mlBdNQd/I7s9tdA==} engines: {node: '>=18'} - '@sentry-internal/feedback@10.47.0': - resolution: {integrity: sha512-pdvMmi4dQpX5S/vAAzrhHPIw3T3HjUgDNgUiCBrlp7N9/6zGO2gNPhUnNekP+CjgI/z0rvf49RLqlDenpNrMOg==} + '@sentry-internal/feedback@10.48.0': + resolution: {integrity: sha512-tGkEyOM1HDS9qebDphUMEnyk3qq/50AnuTBiFmMJyjNzowylVGmRRk0sr3xkmbVHCDXQCiYnDmSVlJ2x4SDMrQ==} engines: {node: '>=18'} - '@sentry-internal/replay-canvas@10.47.0': - resolution: {integrity: sha512-A5OY8friSe6g8WAK4L8IeOPiEd9D3Ps40DzRH5j2f6SUja0t90mKMvHRcRf8zq0d4BkdB+JM7tjOkwxpuv8heA==} + '@sentry-internal/replay-canvas@10.48.0': + resolution: {integrity: sha512-9nWuN2z4O+iwbTfuYV5ZmngBgJU/ZxfOo47A5RJP3Nu/kl59aJ1lUhILYOKyeNOIC/JyeERmpIcTxnlPXQzZ3Q==} engines: {node: '>=18'} - '@sentry-internal/replay@10.47.0': - resolution: {integrity: sha512-ScdovxP7hJxgMt70+7hFvwT02GIaIUAxdEM/YPsayZBeCoAukPW8WiwztJfoKtsfPyKJ5A6f0H3PIxTPcA9Row==} + '@sentry-internal/replay@10.48.0': + resolution: {integrity: sha512-sevRTePfuk4PNuz9KAKpmTZEomAU0aLXyIhOwA0OnUDdxPhkY8kq5lwDbuxTHv6DQUjUX3YgFbY45VH1JEqHKA==} engines: {node: '>=18'} - '@sentry/browser@10.47.0': - resolution: {integrity: sha512-rC0agZdxKA5XWfL4VwPOr/rJMogXDqZgnVzr93YWpFn9DMZT/7LzxSJVPIJwRUjx3bFEby3PcTa3YaX7pxm1AA==} + '@sentry/browser@10.48.0': + resolution: {integrity: sha512-4jt2zX2ExgFcNe2x+W+/k81fmDUsOrquGtt028CiGuDuma6kEsWBI4JbooT1jhj2T+eeUxe3YGbM23Zhh7Ghhw==} engines: {node: '>=18'} - '@sentry/core@10.47.0': - resolution: {integrity: sha512-nsYRAx3EWezDut+Zl+UwwP07thh9uY7CfSAi2whTdcJl5hu1nSp2z8bba7Vq/MGbNLnazkd3A+GITBEML924JA==} + '@sentry/core@10.48.0': + resolution: {integrity: sha512-h8F+fXVwYC9ro5ZaO8V+v3vqc0awlXHGblEAuVxSGgh4IV/oFX+QVzXeDTTrFOFS6v/Vn5vAyu240eJrJAS6/g==} engines: {node: '>=18'} - '@sentry/react@10.47.0': - resolution: {integrity: sha512-ZtJV6xxF8jUVE9e3YQUG3Do0XapG1GjniyLyqMPgN6cNvs/HaRJODf7m60By+VGqcl5XArEjEPTvx8CdPUXDfA==} + '@sentry/react@10.48.0': + resolution: {integrity: sha512-uc93vKjmu6gNns+JAX4qquuxWpAMit0uGPA1TYlMjct9NG1uX3TkDPJAr9Pgd1lOXx8mKqCmj5fK33QeExMpPw==} engines: {node: '>=18'} peerDependencies: react: ^16.14.0 || 17.x || 18.x || 19.x @@ -3614,6 +3607,9 @@ packages: resolution: {integrity: sha512-TeheYy0ILzBEI/CO55CP6zJCSdSWeRtGnHy8U8dWSUH4I68iqTsy7HkMktR4xakThc9jotkPQUXT4ITdbV7cHA==} engines: {node: '>=18'} + '@socket.io/component-emitter@3.1.2': + resolution: {integrity: sha512-9BCxFwvbGg/RsZK9tjXd8s4UcwR0MWeFQ1XEKIQVVvAGJyINdrqKMcTRyLoK8Rse1GjzLV9cwjWV1olXRWEXVA==} + '@solid-primitives/event-listener@2.4.5': resolution: {integrity: sha512-nwRV558mIabl4yVAhZKY8cb6G+O1F0M6Z75ttTu5hk+SxdOnKSGj+eetDIu7Oax1P138ZdUU01qnBPR8rnxaEA==} peerDependencies: @@ -3950,20 +3946,20 @@ packages: peerDependencies: solid-js: 1.9.11 - '@tanstack/eslint-plugin-query@5.96.2': - resolution: {integrity: sha512-OsXCATZ+YmG8TyHrunfYy2IDB+dqY87en2im2A60JPgDAg66cCoHTzJWbe9uH8Cw9/K3NiKYlyyo1erVFu3qFw==} + '@tanstack/eslint-plugin-query@5.99.0': + resolution: {integrity: sha512-jVp1AEL7S7BeuQvH5SN1F5UdrNW/AbryKDeWUUMeAKNzh9C+Ik/bRSa/HeuJLlmaN+WOUkdDFbtCK0go7BxnUQ==} peerDependencies: - eslint: ^8.57.0 || ^9.0.0 - typescript: ^5.4.0 + eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 + typescript: ^5.4.0 || ^6.0.0 peerDependenciesMeta: typescript: optional: true - '@tanstack/form-core@1.28.6': - resolution: {integrity: sha512-4zroxL6VDj5O+w7l3dYZnUeL/h30KtNSV7UWzKAL7cl+8clMFdISPDlDlluS37As7oqvPVKo8B83VlIBvgmRog==} + '@tanstack/form-core@1.29.0': + resolution: {integrity: sha512-uyeKEdJBfbj0bkBSwvSYVRtWLOaXvfNX3CeVw1HqGOXVLxpBBGAqWdYLc+UoX/9xcoFwFXrjR9QqMPzvwm2yyQ==} - '@tanstack/form-devtools@0.2.20': - resolution: {integrity: sha512-4cW/eU5DBTrWP53mxwHKp4NQWTIQ3XCA91pMWK7dFNNClIwFnxoSJoKwyUa6b8kRIO6uq1Sjk2mhkAtj5kB22A==} + '@tanstack/form-devtools@0.2.21': + resolution: {integrity: sha512-8mxR1/QDw37mNVSFsr4ZN8+bdamH9LU1/iQ3I7/sfTzFmMsNzUOysX3OZf053eaS4Gaw44PT0pH7U0FWD98QKw==} peerDependencies: solid-js: 1.9.11 @@ -3971,11 +3967,11 @@ packages: resolution: {integrity: sha512-y/xtNPNt/YeyoVxE/JCx+T7yjEzpezmbb+toK8DDD1P4m7Kzs5YR956+7OKexG3f8aXgC3rLZl7b1V+yNUSy5w==} engines: {node: '>=18'} - '@tanstack/query-core@5.96.2': - resolution: {integrity: sha512-hzI6cTVh4KNRk8UtoIBS7Lv9g6BnJPXvBKsvYH1aGWvv0347jT3BnSvztOE+kD76XGvZnRC/t6qdW1CaIfwCeA==} + '@tanstack/query-core@5.99.0': + resolution: {integrity: sha512-3Jv3WQG0BCcH7G+7lf/bP8QyBfJOXeY+T08Rin3GZ1bshvwlbPt7NrDHMEzGdKIOmOzvIQmxjk28YEQX60k7pQ==} - '@tanstack/query-devtools@5.96.2': - resolution: {integrity: sha512-vBTB1Qhbm3nHSbEUtQwks/EdcAtFfEapr1WyBW4w2ExYKuXVi3jIxUIHf5MlSltiHuL7zNyUuanqT/7sI2sb6g==} + '@tanstack/query-devtools@5.99.0': + resolution: {integrity: sha512-m4ufXaJ8FjWXw7xDtyzE/6fkZAyQFg9WrbMrUpt8ZecRJx58jiFOZ2lxZMphZdIpAnIeto/S8stbwLKLusyckQ==} '@tanstack/react-devtools@0.10.2': resolution: {integrity: sha512-1BmZyxOrI5SqmRJ5MgkYZNNdnlLsJxQRI2YgorrAvcF2MxK6x5RcuStvD8+YlXoMw3JtNukPxoITirKAnKYDQA==} @@ -3986,13 +3982,13 @@ packages: react: '>=16.8' react-dom: '>=16.8' - '@tanstack/react-form-devtools@0.2.20': - resolution: {integrity: sha512-aXtorJ7p3TbzOapjaxbjGX/c0uQh/wbYSwgzFt3qatNMb1xL4HM/j00Bx7hDENZNBCf8MF8YEEtvpBmnGb4rnQ==} + '@tanstack/react-form-devtools@0.2.21': + resolution: {integrity: sha512-WBQ7NOcb3FM9UA4juZVyWUyJkyl62vHFbEBybZuvBFw3wq/v9pDGS01Ye8kepGXDg1+LQsOOxyDR65AKsdqSYQ==} peerDependencies: react: ^17.0.0 || ^18.0.0 || ^19.0.0 - '@tanstack/react-form@1.28.6': - resolution: {integrity: sha512-dRxwKeNW3uuJvf0sXsIQ2compFMnIJNk9B436Lx0fqkqK+CBvA1tNmEdX+faoCpuQ5Wua3c8ahVibJ65cpkijA==} + '@tanstack/react-form@1.29.0': + resolution: {integrity: sha512-jj425NNX0QKqbUzqSNiYI3HCPHSk2df47acXCJyXczWOTmG81ECZGkgofgqamFsSU9kMiH6Di5RLUnftrlhWSw==} peerDependencies: '@tanstack/react-start': '*' react: ^17.0.0 || ^18.0.0 || ^19.0.0 @@ -4000,14 +3996,14 @@ packages: '@tanstack/react-start': optional: true - '@tanstack/react-query-devtools@5.96.2': - resolution: {integrity: sha512-nTFKLGuTOFvmFRvcyZ3ArWC/DnMNPoBh6h/2yD6rsf7TCTJCQt+oUWOp2uKPTIuEPtF/vN9Kw5tl5mD1Kbposw==} + '@tanstack/react-query-devtools@5.99.0': + resolution: {integrity: sha512-CqqX7LCU9yOfCY/vBURSx2YSD83ryfX+QkfkaKionTfg1s2Hdm572Ro99gW3QPoJjzvsj1HM4pnN4nbDy3MXKA==} peerDependencies: - '@tanstack/react-query': ^5.96.2 + '@tanstack/react-query': ^5.99.0 react: ^18 || ^19 - '@tanstack/react-query@5.96.2': - resolution: {integrity: sha512-sYyzzJT4G0g02azzJ8o55VFFV31XvFpdUpG+unxS0vSaYsJnSPKGoI6WdPwUucJL1wpgGfwfmntNX/Ub1uOViA==} + '@tanstack/react-query@5.99.0': + resolution: {integrity: sha512-OY2bCqPemT1LlqJ8Y2CUau4KELnIhhG9Ol3ZndPbdnB095pRbPo1cHuXTndg8iIwtoHTgwZjyaDnQ0xD0mYwAw==} peerDependencies: react: ^18 || ^19 @@ -4062,18 +4058,18 @@ packages: peerDependencies: '@testing-library/dom': '>=7.21.4' - '@tsslint/cli@3.0.2': - resolution: {integrity: sha512-8lyZcDEs86zitz0wZ5QRdswY6xGz8j+WL11baN4rlpwahtPgYatujpYV5gpoKeyMAyerlNTdQh6u2LUJLoLNyQ==} + '@tsslint/cli@3.0.3': + resolution: {integrity: sha512-Pt1AuEZoh+dK4QYt95oCjBdBp2h2iYY9pSerf9BTLgfsjeyEsNk7Juhn51sFlAuEnWDNvI8mLULzsIkayd0nUQ==} engines: {node: '>=22.6.0'} hasBin: true peerDependencies: typescript: '*' - '@tsslint/compat-eslint@3.0.2': - resolution: {integrity: sha512-2TzSJPybCEfU/kHNi9UybwI//A7Fe14CwqmNuJ4fR4WYGpfIclXqfDJwsn5U1NzrWbHjWzRSntJITQPNw1SCNA==} + '@tsslint/compat-eslint@3.0.3': + resolution: {integrity: sha512-UGWrE4fu8fUCLkc+zMQNsEfuEkGHjndpa5oSQmzhmo9BQJYAqqH1s2kGIiDsAYwaQTUts4SjclXaITq3pZhkrA==} - '@tsslint/config@3.0.2': - resolution: {integrity: sha512-oHzteAwL6NHVrLzJnrpqMwewEFOydhDH228weO4wkHW8SwvE4oVV5qrKmjwL69ClYt5Le3y2aGDzGou+GuTbKg==} + '@tsslint/config@3.0.3': + resolution: {integrity: sha512-3yFyM4Sj+0LxwmcokwNPuS9pWUBMIhO8vwHiG4vGuquTvF4cgZqDPyQ3GN4hDb5qAZ56iqYtMoBEiSZXlJDYPQ==} engines: {node: '>=22.6.0'} hasBin: true peerDependencies: @@ -4085,12 +4081,12 @@ packages: tsl: optional: true - '@tsslint/core@3.0.2': - resolution: {integrity: sha512-Cu50e9vBojEMQjbqMoshkgLSoBj1BKbbmhSvzgbo07TiQ1wrOblZjvhU8ygB1fAIIHgU4laExX3pLU5OOeeR9g==} + '@tsslint/core@3.0.3': + resolution: {integrity: sha512-EpCKw34f2XyypH5xlxKCwnTgPGpZxbPXfvpwddT3DCxsIzUDJY4SpVJULAZFPAjJd49vopG0kNhXn0C/b+kHcg==} engines: {node: '>=22.6.0'} - '@tsslint/types@3.0.2': - resolution: {integrity: sha512-RbF3TIxu/YQwRpYrH5j2EL3ff4+Lr2SSmwCJmPJfi832F0hpgJj6xB9xKEorrUj0ZaTHE1QOr5SOMe5B6Qv+2Q==} + '@tsslint/types@3.0.3': + resolution: {integrity: sha512-3Jlb5UTPrzqu1D1qOrzjwy0QW2n41A1+ILKvzgViFrtiTwurM5Tav6V7Y4AFxO0xatCA0VHAzzifK0r5znaKbw==} '@tybys/wasm-util@0.10.1': resolution: {integrity: sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg==} @@ -4263,8 +4259,8 @@ packages: '@types/negotiator@0.6.4': resolution: {integrity: sha512-elf6BsTq+AkyNsb2h5cGNst2Mc7dPliVoAPm1fXglC/BM3f2pFA40BaSSv3E5lyHteEawVKLP+8TwiY1DMNb3A==} - '@types/node@25.5.2': - resolution: {integrity: sha512-tO4ZIRKNC+MDWV4qKVZe3Ql/woTnmHDr5JD8UI5hn2pwBrHEwOEMZK7WlNb5RKB6EoJ02gwmQS9OrjuFnZYdpg==} + '@types/node@25.6.0': + resolution: {integrity: sha512-+qIYRKdNYJwY3vRCZMdJbPLJAtGjQBudzZzdzwQYkEPQd+PJGixUL5QfvCLDaULoLv+RhT3LDkwEfKaAkgSmNQ==} '@types/normalize-package-data@2.4.4': resolution: {integrity: sha512-37i+OaWTh9qeK4LSHPsyRC7NahnGotNuZvjLSgcPzblpHB3rrCJxAOgI5gCdKm7coonsaX1Of0ILiTcnZjbfxA==} @@ -4310,11 +4306,11 @@ packages: '@types/zen-observable@0.8.3': resolution: {integrity: sha512-fbF6oTd4sGGy0xjHPKAt+eS2CrxJ3+6gQ3FGcBoIJR2TLAyCkCyI8JqZNy+FeON0AhVgNJoUumVoZQjBFUqHkw==} - '@typescript-eslint/eslint-plugin@8.58.1': - resolution: {integrity: sha512-eSkwoemjo76bdXl2MYqtxg51HNwUSkWfODUOQ3PaTLZGh9uIWWFZIjyjaJnex7wXDu+TRx+ATsnSxdN9YWfRTQ==} + '@typescript-eslint/eslint-plugin@8.58.2': + resolution: {integrity: sha512-aC2qc5thQahutKjP+cl8cgN9DWe3ZUqVko30CMSZHnFEHyhOYoZSzkGtAI2mcwZ38xeImDucI4dnqsHiOYuuCw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: - '@typescript-eslint/parser': ^8.58.1 + '@typescript-eslint/parser': ^8.58.2 eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 typescript: '>=4.8.4 <6.1.0' @@ -4325,8 +4321,8 @@ packages: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/parser@8.58.1': - resolution: {integrity: sha512-gGkiNMPqerb2cJSVcruigx9eHBlLG14fSdPdqMoOcBfh+vvn4iCq2C8MzUB89PrxOXk0y3GZ1yIWb9aOzL93bw==} + '@typescript-eslint/parser@8.58.2': + resolution: {integrity: sha512-/Zb/xaIDfxeJnvishjGdcR4jmr7S+bda8PKNhRGdljDM+elXhlvN0FyPSsMnLmJUrVG9aPO6dof80wjMawsASg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 @@ -4338,8 +4334,8 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/project-service@8.58.1': - resolution: {integrity: sha512-gfQ8fk6cxhtptek+/8ZIqw8YrRW5048Gug8Ts5IYcMLCw18iUgrZAEY/D7s4hkI0FxEfGakKuPK/XUMPzPxi5g==} + '@typescript-eslint/project-service@8.58.2': + resolution: {integrity: sha512-Cq6UfpZZk15+r87BkIh5rDpi38W4b+Sjnb8wQCPPDDweS/LRCFjCyViEbzHk5Ck3f2QDfgmlxqSa7S7clDtlfg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.1.0' @@ -4354,8 +4350,8 @@ packages: resolution: {integrity: sha512-snZKH+W4WbWkrBqj4gUNRIGb/jipDW3qMqVJ4C9rzdFc+wLwruxk+2a5D+uoFcKPAqyqEnSb4l2ULuZf95eSkw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript-eslint/scope-manager@8.58.1': - resolution: {integrity: sha512-TPYUEqJK6avLcEjumWsIuTpuYODTTDAtoMdt8ZZa93uWMTX13Nb8L5leSje1NluammvU+oI3QRr5lLXPgihX3w==} + '@typescript-eslint/scope-manager@8.58.2': + resolution: {integrity: sha512-SgmyvDPexWETQek+qzZnrG6844IaO02UVyOLhI4wpo82dpZJY9+6YZCKAMFzXb7qhx37mFK1QcPQ18tud+vo6Q==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} '@typescript-eslint/tsconfig-utils@8.57.2': @@ -4364,14 +4360,14 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/tsconfig-utils@8.58.1': - resolution: {integrity: sha512-JAr2hOIct2Q+qk3G+8YFfqkqi7sC86uNryT+2i5HzMa2MPjw4qNFvtjnw1IiA1rP7QhNKVe21mSSLaSjwA1Olw==} + '@typescript-eslint/tsconfig-utils@8.58.2': + resolution: {integrity: sha512-3SR+RukipDvkkKp/d0jP0dyzuls3DbGmwDpVEc5wqk5f38KFThakqAAO0XMirWAE+kT00oTauTbzMFGPoAzB0A==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.1.0' - '@typescript-eslint/type-utils@8.58.1': - resolution: {integrity: sha512-HUFxvTJVroT+0rXVJC7eD5zol6ID+Sn5npVPWoFuHGg9Ncq5Q4EYstqR+UOqaNRFXi5TYkpXXkLhoCHe3G0+7w==} + '@typescript-eslint/type-utils@8.58.2': + resolution: {integrity: sha512-Z7EloNR/B389FvabdGeTo2XMs4W9TjtPiO9DAsmT0yom0bwlPyRjkJ1uCdW1DvrrrYP50AJZ9Xc3sByZA9+dcg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 @@ -4381,8 +4377,8 @@ packages: resolution: {integrity: sha512-/iZM6FnM4tnx9csuTxspMW4BOSegshwX5oBDznJ7S4WggL7Vczz5d2W11ecc4vRrQMQHXRSxzrCsyG5EsPPTbA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript-eslint/types@8.58.1': - resolution: {integrity: sha512-io/dV5Aw5ezwzfPBBWLoT+5QfVtP8O7q4Kftjn5azJ88bYyp/ZMCsyW1lpKK46EXJcaYMZ1JtYj+s/7TdzmQMw==} + '@typescript-eslint/types@8.58.2': + resolution: {integrity: sha512-9TukXyATBQf/Jq9AMQXfvurk+G5R2MwfqQGDR2GzGz28HvY/lXNKGhkY+6IOubwcquikWk5cjlgPvD2uAA7htQ==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} '@typescript-eslint/typescript-estree@8.57.2': @@ -4391,8 +4387,8 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/typescript-estree@8.58.1': - resolution: {integrity: sha512-w4w7WR7GHOjqqPnvAYbazq+Y5oS68b9CzasGtnd6jIeOIeKUzYzupGTB2T4LTPSv4d+WPeccbxuneTFHYgAAWg==} + '@typescript-eslint/typescript-estree@8.58.2': + resolution: {integrity: sha512-ELGuoofuhhoCvNbQjFFiobFcGgcDCEm0ThWdmO4Z0UzLqPXS3KFvnEZ+SHewwOYHjM09tkzOWXNTv9u6Gqtyuw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.1.0' @@ -4404,8 +4400,8 @@ packages: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/utils@8.58.1': - resolution: {integrity: sha512-Ln8R0tmWC7pTtLOzgJzYTXSCjJ9rDNHAqTaVONF4FEi2qwce8mD9iSOxOpLFFvWp/wBFlew0mjM1L1ihYWfBdQ==} + '@typescript-eslint/utils@8.58.2': + resolution: {integrity: sha512-QZfjHNEzPY8+l0+fIXMvuQ2sJlplB4zgDZvA+NmvZsZv3EQwOcc1DuIU1VJUTWZ/RKouBMhDyNaBMx4sWvrzRA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 @@ -4415,47 +4411,47 @@ packages: resolution: {integrity: sha512-zhahknjobV2FiD6Ee9iLbS7OV9zi10rG26odsQdfBO/hjSzUQbkIYgda+iNKK1zNiW2ey+Lf8MU5btN17V3dUw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript-eslint/visitor-keys@8.58.1': - resolution: {integrity: sha512-y+vH7QE8ycjoa0bWciFg7OpFcipUuem1ujhrdLtq1gByKwfbC7bPeKsiny9e0urg93DqwGcHey+bGRKCnF1nZQ==} + '@typescript-eslint/visitor-keys@8.58.2': + resolution: {integrity: sha512-f1WO2Lx8a9t8DARmcWAUPJbu0G20bJlj8L4z72K00TMeJAoyLr/tHhI/pzYBLrR4dXWkcxO1cWYZEOX8DKHTqA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-akoBfxvDbULMWLqHPDBI5sRkhjQ0blX5+iG7GBoSstqJZW4P0nzd516COGs7xWHsu3apBhaBgSTMCFO78kG80w==} + '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-CDgxIPvAWRCfOiQKvSk4wUkAoRW4Cy6vfAUBPNHSeLalIt43ToF0LOAsa5uLyRGsftjfMYY0A4qFOmgDvBhgzQ==} cpu: [arm64] os: [darwin] - '@typescript/native-preview-darwin-x64@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-j/V5BS+tgcRFGQC+y95vZB78fI45UgobAEY1+NlFZ3Yih9ICKWRfJPcalpiP5vjiO2NgqVzcFfO9XbpJyq5TTA==} + '@typescript/native-preview-darwin-x64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-oiMmUtNMaqBh+eUogX53ichcEf7d+7upC0qa7xS9zWl85XEPKlrZCZpZ79yixw1PkdpjqJJigI11bmCi/JVv+g==} cpu: [x64] os: [darwin] - '@typescript/native-preview-linux-arm64@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-QG0E0lmcZQZimvNltxyi5Q3Oz1pd0BdztS7K5T9HTs30E3TSeYHq7Csw3SbDfAVwcqs2HTe/AVqLy6ar+1zm3Q==} + '@typescript/native-preview-linux-arm64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-hPKanfs9c+7953gIYw13CNxN0HqFAOfJjnWk4SHqSBe3Pj9pxoeJvvRWlofp5C833eOZK6gZB7ll0/uNb0djtA==} cpu: [arm64] os: [linux] - '@typescript/native-preview-linux-arm@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-ZDr+zQFSTPmLIGyXDWixYFeFtktWUDGAD6s65rTI5EJgyt4X5/kEMnNd04mf4PbN0ChSiTRzJYLzaM+JGo+jww==} + '@typescript/native-preview-linux-arm@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-0lSXBzBVsxIGrFv/PxoswzMptsnU6BgSk7GMAUt/o1dVw36R2XrSs538vwKnujaJwt4iIdMS0uGdpUC5s9jkzQ==} cpu: [arm] os: [linux] - '@typescript/native-preview-linux-x64@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-a82yGx039yqZBS0dwKG8+kgeF2xVA7Pg6lL2SrswbaxWz3bXpI0ASX3HgUw+JMSIr4fbZ5ulKcaorPqbhc48/A==} + '@typescript/native-preview-linux-x64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-8Cr477HRmHZ5YyLfikNvw7qp3/WmnRjzIzJhUDrAx5173OBe8BdyV9jPemFHKDPqwI1AUMTijvptOFoQE7429w==} cpu: [x64] os: [linux] - '@typescript/native-preview-win32-arm64@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-e38ow5yqBrdiz4GunQCRk1E7cTtowpbXeAvVJf1wXrWbFqEc0D8BE7YPmTy9W2fOI0KFHUrsFg5h4Ad/TKVjug==} + '@typescript/native-preview-win32-arm64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-ulJD9ZbIQyTBIDx8zzAzQLtbvQDGHSWrNRgkgBU5Os2NTYADQRco4pU747R9wZPMLopy3IeNck6m8vwPoYMk1g==} cpu: [arm64] os: [win32] - '@typescript/native-preview-win32-x64@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-1Jiij5NQOvlM72/DdfXzAVia1pdffgHiVgWZVmDwXECpzwQB0WwWfhI/0IddXP92Y9gVQFCGo9lypSAnamfGPA==} + '@typescript/native-preview-win32-x64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-x7DsSXnLQBf5XBBR8luHf1Nc/T1eByUmrOSEThW6825UB7lHoPlqKdhIoUNnTnS4nXQMxLwcusD4P1EP23GPJw==} cpu: [x64] os: [win32] - '@typescript/native-preview@7.0.0-dev.20260407.1': - resolution: {integrity: sha512-gf1W3UbzVTDkZJuwhNtOcfQ6l3hpDcxuWh90ANlp/cKupmAqaXNGpT23YjTYqXsaI7RDQR7JUELCKeWbW9PJIg==} + '@typescript/native-preview@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-twzr3V4QLEbXaESuI2DqdzutOVFGpkY3VZDR9sF8YlLsAXkwyQvZo58cKM77mZcsHoCR4lCYcdTatWTTa/+8tw==} hasBin: true '@ungap/structured-clone@1.3.0': @@ -4512,8 +4508,8 @@ packages: babel-plugin-react-compiler: optional: true - '@vitejs/plugin-rsc@0.5.22': - resolution: {integrity: sha512-OC4wKNVHpF+LOgtasdMOAw1V0yWHj1Nx/XfkNW/9weFXd/9wXPWDyeJGcUJ03DxqJ8mYi4j9/kvo6HKYCoP9Ow==} + '@vitejs/plugin-rsc@0.5.24': + resolution: {integrity: sha512-FQ7o1Zf1GUB8L5qlIuV2mvIv/KahG2qUYW2gMpxyIN3zF7voDsfvA/t8w/TLjYC0T6p3JwMnK3N+YzMGf/m75A==} peerDependencies: react: '*' react-dom: '*' @@ -4523,17 +4519,17 @@ packages: react-server-dom-webpack: optional: true - '@vitest/coverage-v8@4.1.3': - resolution: {integrity: sha512-/MBdrkA8t6hbdCWFKs09dPik774xvs4Z6L4bycdCxYNLHM8oZuRyosumQMG19LUlBsB6GeVpL1q4kFFazvyKGA==} + '@vitest/coverage-v8@4.1.4': + resolution: {integrity: sha512-x7FptB5oDruxNPDNY2+S8tCh0pcq7ymCe1gTHcsp733jYjrJl8V1gMUlVysuCD9Kz46Xz9t1akkv08dPcYDs1w==} peerDependencies: - '@vitest/browser': 4.1.3 - vitest: 4.1.3 + '@vitest/browser': 4.1.4 + vitest: 4.1.4 peerDependenciesMeta: '@vitest/browser': optional: true - '@vitest/eslint-plugin@1.6.14': - resolution: {integrity: sha512-PXZ5ysw4eHU9h8nDtBvVcGC7Z2C/T9CFdheqSw1NNXFYqViojub0V9bgdYI67iBTOcra2mwD0EYldlY9bGPf2Q==} + '@vitest/eslint-plugin@1.6.15': + resolution: {integrity: sha512-dTMjrdngmcB+DxomlKQ+SUubCTvd0m2hQQFpv5sx+GRodmeoxr2PVbphk57SVp250vpxphk9Ccwyv6fQ6+2gkA==} engines: {node: '>=18'} peerDependencies: '@typescript-eslint/eslint-plugin': '*' @@ -4551,31 +4547,54 @@ packages: '@vitest/expect@3.2.4': resolution: {integrity: sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig==} + '@vitest/expect@4.1.4': + resolution: {integrity: sha512-iPBpra+VDuXmBFI3FMKHSFXp3Gx5HfmSCE8X67Dn+bwephCnQCaB7qWK2ldHa+8ncN8hJU8VTMcxjPpyMkUjww==} + + '@vitest/mocker@4.1.4': + resolution: {integrity: sha512-R9HTZBhW6yCSGbGQnDnH3QHfJxokKN4KB+Yvk9Q1le7eQNYwiCyKxmLmurSpFy6BzJanSLuEUDrD+j97Q+ZLPg==} + peerDependencies: + msw: ^2.4.9 + vite: ^6.0.0 || ^7.0.0 || ^8.0.0 + peerDependenciesMeta: + msw: + optional: true + vite: + optional: true + '@vitest/pretty-format@3.2.4': resolution: {integrity: sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA==} - '@vitest/pretty-format@4.1.3': - resolution: {integrity: sha512-hYqqwuMbpkkBodpRh4k4cQSOELxXky1NfMmQvOfKvV8zQHz8x8Dla+2wzElkMkBvSAJX5TRGHJAQvK0TcOafwg==} + '@vitest/pretty-format@4.1.4': + resolution: {integrity: sha512-ddmDHU0gjEUyEVLxtZa7xamrpIefdEETu3nZjWtHeZX4QxqJ7tRxSteHVXJOcr8jhiLoGAhkK4WJ3WqBpjx42A==} + + '@vitest/runner@4.1.4': + resolution: {integrity: sha512-xTp7VZ5aXP5ZJrn15UtJUWlx6qXLnGtF6jNxHepdPHpMfz/aVPx+htHtgcAL2mDXJgKhpoo2e9/hVJsIeFbytQ==} + + '@vitest/snapshot@4.1.4': + resolution: {integrity: sha512-MCjCFgaS8aZz+m5nTcEcgk/xhWv0rEH4Yl53PPlMXOZ1/Ka2VcZU6CJ+MgYCZbcJvzGhQRjVrGQNZqkGPttIKw==} '@vitest/spy@3.2.4': resolution: {integrity: sha512-vAfasCOe6AIK70iP5UD11Ac4siNUNJ9i/9PZ3NKx07sG6sUxeag1LWdNrMWeKKYBLlzuK+Gn65Yd5nyL6ds+nw==} + '@vitest/spy@4.1.4': + resolution: {integrity: sha512-XxNdAsKW7C+FLydqFJLb5KhJtl3PGCMmYwFRfhvIgxJvLSXhhVI1zM8f1qD3Zg7RCjTSzDVyct6sghs9UEgBEQ==} + '@vitest/utils@3.2.4': resolution: {integrity: sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA==} - '@vitest/utils@4.1.3': - resolution: {integrity: sha512-Pc/Oexse/khOWsGB+w3q4yzA4te7W4gpZZAvk+fr8qXfTURZUMj5i7kuxsNK5mP/dEB6ao3jfr0rs17fHhbHdw==} + '@vitest/utils@4.1.4': + resolution: {integrity: sha512-13QMT+eysM5uVGa1rG4kegGYNp6cnQcsTc67ELFbhNLQO+vgsygtYJx2khvdt4gVQqSSpC/KT5FZZxUpP3Oatw==} - '@voidzero-dev/vite-plus-core@0.1.16': - resolution: {integrity: sha512-fOyf14CXjcXqANFs2fCXEX+0Tn9ZjmqfFV+qTnARwIF1Kzl8WquO4XtvlDgs/fTQ91H4AyoNUgkvWdKS+C4xYA==} + '@voidzero-dev/vite-plus-core@0.1.18': + resolution: {integrity: sha512-3PmXOL26yHzlw8ET9SwXCmglGzUYq2fOTYf2t0mxvVIs7ua3bnf6tOnmR+6YX5k1Ez26B0ooYzx+znc8k+CAMw==} engines: {node: ^20.19.0 || >=22.12.0} peerDependencies: '@arethetypeswrong/core': ^0.18.1 - '@tsdown/css': 0.21.7 - '@tsdown/exe': 0.21.7 + '@tsdown/css': 0.21.8 + '@tsdown/exe': 0.21.8 '@types/node': ^20.19.0 || >=22.12.0 '@vitejs/devtools': ^0.1.0 - esbuild: ^0.28.0 + esbuild: 0.27.2 jiti: '>=1.21.0' less: ^4.0.0 publint: ^0.3.0 @@ -4626,54 +4645,56 @@ packages: yaml: optional: true - '@voidzero-dev/vite-plus-darwin-arm64@0.1.16': - resolution: {integrity: sha512-InG0ZmuGh7DTrn7zWQ0UvKapElphKI6G1oYfys+jraedG70EhIIee9gtO+mTE1T0bF67SgAcLXwNyaiNda0XwA==} + '@voidzero-dev/vite-plus-darwin-arm64@0.1.18': + resolution: {integrity: sha512-bw2pWWE8RZRELWjXcdxdmRaOaYjmGmsxEm23TxvGxQXFb7k9l51W8tpjxariPGLxrEl+Cw5u601IL5LASaPJ5w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [darwin] - '@voidzero-dev/vite-plus-darwin-x64@0.1.16': - resolution: {integrity: sha512-LGNrECstuhkCRKRj/dE98Xcprw8HU3VMIMJnZsnDR2C5RB2HADNIu21at/a/G3giA9eWm7uhtPp9FvUtTCK9TA==} + '@voidzero-dev/vite-plus-darwin-x64@0.1.18': + resolution: {integrity: sha512-8TFj6yJNsumoH+yFc+6zf3g2UuzvrPHq2FAAVORffaVZ29PWnDSsXjegaIBmoAtGO5Xb4lcilQx7NoF9hONrZg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [darwin] - '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.16': - resolution: {integrity: sha512-AoFKu6dIOtlkp/mwmtU8ES2uzoaxCHhIym1Tk7qMxyvke4IXnye6VDc4kPMRQwD8mwR3T3bO0HuaEEHxrIWDxw==} + '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.18': + resolution: {integrity: sha512-xHRqncKanOZ0zNnZSufL4Yx/gWrIFkCjU6jFzCukBOOCrcemq3SrALPHrNf+Nw1RLwNptGUZn2Vx/IjRLzUQDw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [glibc] - '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.16': - resolution: {integrity: sha512-PloCsGTRIhcXIpUOJ6PqVG8gYNpq+ooJNyqy5sQ82BRnJuo8oV7uBLFvg0X9B3Bzh+vO1F8/+92+o5TiL35JMg==} + '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.18': + resolution: {integrity: sha512-CA6XxZbkT8lYwWzS2yAj6exr7nHl3R8Sz+ZdOhYCU4yR2qvzGatdVgFr7oPnrkHLF426cHJ172rmNNj8NKie/w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [musl] - '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.16': - resolution: {integrity: sha512-nY9/2g+qjhwsW5U3MrFLlx+bOBsdOJiO2HzbxQy7jo/S3jPTnXhFlrRegQuAmqrHAXrSdNwgblgRpICKhx1xZg==} + '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.18': + resolution: {integrity: sha512-xBO3MtLGVASPjH/GDRxexfLCT0othVpiFMdEQ83Y+woVNbrrzcdQTGFUuFG4cAiMhtmjytyFwPBtZ76BWsDO3w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [glibc] - '@voidzero-dev/vite-plus-linux-x64-musl@0.1.16': - resolution: {integrity: sha512-JGKEAMoXqzdr9lHT/13uRNV9uzrSYXAFhjAfIC8WEQMG2VUFksvq5/TOc26hzmzbqu+bxRmfN8h1aVTDL8KwFg==} + '@voidzero-dev/vite-plus-linux-x64-musl@0.1.18': + resolution: {integrity: sha512-ADNis6SMarY7i8+b2ynUJ1PiqCHqnVwY7EQ+fSGug5zZ+W/cZq14+VWPxOvGR9LJk+iol8XuqsHy4BaV2+gjzw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [musl] - '@voidzero-dev/vite-plus-test@0.1.16': - resolution: {integrity: sha512-d/rJPX/heMzoAFdnpZsp04MAa6nw1yH1tA4mVCV4m8goVcE9nAvt69mjLMzE8N/rYIQOSgenf3hDXuQRuD6OKQ==} + '@voidzero-dev/vite-plus-test@0.1.18': + resolution: {integrity: sha512-dovC2kJgiwMI8ay0i+3NvQGCDWPj8HQB2ONP/HbdJ5/XQVPq13+BihnCq8/ztz6uGhiDD8Nu4OZ3RgB14uvTfA==} engines: {node: ^20.0.0 || ^22.0.0 || >=24.0.0} peerDependencies: '@edge-runtime/vm': '*' '@opentelemetry/api': ^1.9.0 '@types/node': ^20.0.0 || ^22.0.0 || >=24.0.0 - '@vitest/ui': 4.1.2 + '@vitest/coverage-istanbul': 4.1.4 + '@vitest/coverage-v8': 4.1.4 + '@vitest/ui': 4.1.4 happy-dom: '*' jsdom: '*' vite: ^6.0.0 || ^7.0.0 || ^8.0.0 @@ -4684,6 +4705,10 @@ packages: optional: true '@types/node': optional: true + '@vitest/coverage-istanbul': + optional: true + '@vitest/coverage-v8': + optional: true '@vitest/ui': optional: true happy-dom: @@ -4691,14 +4716,14 @@ packages: jsdom: optional: true - '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.16': - resolution: {integrity: sha512-IugPUCLY7HmiPcCeuHKUqO1+G2vxHnYzAGhS02AixD0sJLTAIKCUANDOiVUFf/HMw+jh/UkugW7MWek8lf/JrQ==} + '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.18': + resolution: {integrity: sha512-EcDETMHG8xgjIlMizIu/wf0UtRZLGz+lHFvYFZVCkz4vLLz93a06vZ+3Oi9xY2Kc8aOHsCf8Gj5/dox/03cscw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [win32] - '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.16': - resolution: {integrity: sha512-tq93CIeMs92HF7rdylJknRiyzMOWMKCmpw+g8nl5Q5nmUDNLUsrL3CGfbyqjgbruuPnIr761r9MfydPqZU/cYg==} + '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.18': + resolution: {integrity: sha512-jBgL4ZjSJJu3FDcrqj4muzbr0WKlU6Ym1ilHQnq8R+2TRvE0AtvAMMuphICDslZGi6EK3fwJ+r2Lv7GU1AipQA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [win32] @@ -5019,6 +5044,10 @@ packages: resolution: {integrity: sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw==} engines: {node: '>=18'} + chai@6.2.2: + resolution: {integrity: sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg==} + engines: {node: '>=18'} + chalk@4.1.1: resolution: {integrity: sha512-diHzdDKxcU+bAsUboHLPEDQiw0qEe0qd7SYUn3HgcFlWgbDcfLGswOHYeGrHKzG9z6UYf01d9VFMfZxPM1xZSg==} engines: {node: '>=10'} @@ -5413,6 +5442,9 @@ packages: dagre-d3-es@7.0.14: resolution: {integrity: sha512-P4rFMVq9ESWqmOgK+dlXvOtLwYg0i7u0HBGJER0LZDJT2VHIPAMZ/riPxqJceWMStH5+E61QxFra9kIS3AqdMg==} + date-fns@4.1.0: + resolution: {integrity: sha512-Ukq0owbQXxa/U3EGtsdVBkR1w7KOQ5gIBqdH2hkvknzZPYvBxb/aa6E8L7tmjFtkwZBu3UXBbjIgPo/Ez4xaNg==} + dayjs@1.11.20: resolution: {integrity: sha512-YbwwqR/uYpeoP4pu043q+LTDLFBLApUP6VxRihdfNTqu4ubqMlGDLd6ErXhEgsyvY0K6nCs7nggYumAN+9uEuQ==} @@ -5507,8 +5539,8 @@ packages: resolution: {integrity: sha512-6obghkliLdmKa56xdbLOpUZ43pAR6xFy1uOrxBaIDjT+yaRuuybLjGS9eVBoSR/UPU5fq3OXClEHLJNGvbxKpQ==} engines: {node: '>=20'} - dompurify@3.3.3: - resolution: {integrity: sha512-Oj6pzI2+RqBfFG+qOaOLbFXLQ90ARpcGG6UePL82bJLtdsa6CYJD7nmiU8MW9nQNOtCHV3lZ/Bzq1X0QYbBZCA==} + dompurify@3.4.0: + resolution: {integrity: sha512-nolgK9JcaUXMSmW+j1yaSvaEaoXYHwWyGJlkoCTghc97KgGDDSnpoU/PlEnw63Ah+TGKFOyY+X5LnxaWbCSfXg==} domutils@3.2.2: resolution: {integrity: sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw==} @@ -5567,6 +5599,13 @@ packages: end-of-stream@1.4.5: resolution: {integrity: sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==} + engine.io-client@6.6.4: + resolution: {integrity: sha512-+kjUJnZGwzewFDw951CDWcwj35vMNf2fcj7xQWOctq1F2i1jkDdVvdFG9kM/BEChymCH36KgjnW0NsL58JYRxw==} + + engine.io-parser@5.2.3: + resolution: {integrity: sha512-HqD3yTBfnBxIrbnM1DoD6Pcq8NECnh8d4As1Qgh0z5Gg3jRRIqijury0CL3ghu/edArpUYiYqQiDUQBIs4np3Q==} + engines: {node: '>=10.0.0'} + enhanced-resolve@5.20.1: resolution: {integrity: sha512-Qohcme7V1inbAfvjItgw0EaxVX5q2rdVEZHRBrEQdRZTssLDGsL8Lwrznl8oQ/6kuTJONLaDcGjkNP247XEhcA==} engines: {node: '>=10.13.0'} @@ -5653,8 +5692,8 @@ packages: '@eslint/json': optional: true - eslint-markdown@0.6.0: - resolution: {integrity: sha512-NrgfiNto5IJrW1F/Akf2hJYoJTCbXoClOUvtUMDgoqmQNH0VRihNvFh+MFay4E0HV2eozfgxsLSGxnndtRJA8w==} + eslint-markdown@0.6.1: + resolution: {integrity: sha512-eiHSRFnzcPWN/0YDrtELW/+GnGylAoyXVBDh0iVAttyC5rWAaZfgSrzlFUTlS7Jz4XEL36PFLsoEcXlbvl5qPQ==} engines: {node: ^20.19.0 || ^22.13.0 || >=24.0.0} peerDependencies: eslint: ^9.31.0 || ^10.0.0-rc.0 @@ -5672,8 +5711,8 @@ packages: peerDependencies: eslint: '*' - eslint-plugin-better-tailwindcss@4.3.2: - resolution: {integrity: sha512-1DLX2QmHmOj3u667f8vEI0zKoRc0Y1qJt33tfIeIkpTyzWaz9b2GzWBLD4bR+WJ/kxzC0Skcbx7cMerRWQ6OYg==} + eslint-plugin-better-tailwindcss@4.4.1: + resolution: {integrity: sha512-ueFciTgj2M+4YklYdtvpbMA3Nn22z60sQoSA4bnctOP4h0daUhJKAsDaGi888N00qWtIUqeK5Ikt6xnNnHPg2g==} engines: {node: ^20.19.0 || ^22.12.0 || >=23.0.0} peerDependencies: eslint: ^7.0.0 || ^8.0.0 || ^9.0.0 || ^10.0.0 @@ -5715,8 +5754,8 @@ packages: peerDependencies: eslint: ^9.0.0 || ^10.0.0 - eslint-plugin-jsdoc@62.8.1: - resolution: {integrity: sha512-e9358PdHgvcMF98foNd3L7hVCw70Lt+YcSL7JzlJebB8eT5oRJtW6bHMQKoAwJtw6q0q0w/fRIr2kwnHdFDI6A==} + eslint-plugin-jsdoc@62.9.0: + resolution: {integrity: sha512-PY7/X4jrVgoIDncUmITlUqK546Ltmx/Pd4Hdsu4CvSjryQZJI2mEV4vrdMufyTetMiZ5taNSqvK//BTgVUlNkA==} engines: {node: ^20.19.0 || ^22.13.0 || >=24} peerDependencies: eslint: ^7.0.0 || ^8.0.0 || ^9.0.0 || ^10.0.0 @@ -5727,8 +5766,8 @@ packages: peerDependencies: eslint: '>=9.38.0' - eslint-plugin-markdown-preferences@0.41.0: - resolution: {integrity: sha512-Pu150jKH1Cf5sW/Igck0VbuT0A9qFpIPG1dDvyAt2lG8tA3VzPDkwxBusO8JqQ9NRIrm3pat0X6cfanSki3WZQ==} + eslint-plugin-markdown-preferences@0.41.1: + resolution: {integrity: sha512-Xi4rlT7oBZ8PMGDl7J9khgO2vF9X0F/6ag05/25Vyq7r3llaK95x9D6DpzXidxC2Gagl/e8bp2Hw47r4I3wWSA==} engines: {node: ^20.19.0 || ^22.12.0 || >=24.0.0} peerDependencies: '@eslint/markdown': ^7.4.0 || ^8.0.0 @@ -5740,15 +5779,17 @@ packages: peerDependencies: eslint: '>=8.23.0' - eslint-plugin-no-barrel-files@1.2.2: - resolution: {integrity: sha512-DF2bnHuEHClmL1+maBO5TD2HnnRsLj8J69FFtVkjObkELyjCXaWBsk+URJkqBpdOWURlL+raGX9AEpWCAiOV0g==} + eslint-plugin-no-barrel-files@1.3.1: + resolution: {integrity: sha512-y7OX5kyH7PMNRFhLF6SmM4JapxvaxExrgWPndPNTzilpO5uBqybuN480g3E8TTxT3OLOOhQDynmcJ0dnipIyNA==} + peerDependencies: + eslint: ^8.0.0 || ^9.0.0 || ^10.0.0 eslint-plugin-no-only-tests@3.3.0: resolution: {integrity: sha512-brcKcxGnISN2CcVhXJ/kEQlNa0MEfGRtwKtWA16SkqXHKitaKIMrfemJKLKX1YqDU5C/5JY3PvZXd5jEW04e0Q==} engines: {node: '>=5.0.0'} - eslint-plugin-perfectionist@5.7.0: - resolution: {integrity: sha512-WRHj7OZS/INutQ/gKN5C1ZGnMhkQ3oKZQAA2I7rl5yM8keBtSd9oj/qlJaHuwh5873FhMPqYlttcadF0YsTN7g==} + eslint-plugin-perfectionist@5.8.0: + resolution: {integrity: sha512-k8uIptWIxkUclonCFGyDzgYs9NI+Qh0a7cUXS3L7IYZDEsjXuimFBVbxXPQQngWqMiaxJRwbtYB4smMGMqF+cw==} engines: {node: ^20.0.0 || >=22.0.0} peerDependencies: eslint: ^8.45.0 || ^9.0.0 || ^10.0.0 @@ -5963,9 +6004,6 @@ packages: resolution: {integrity: sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==} engines: {node: '>=0.10.0'} - event-target-bus@1.0.0: - resolution: {integrity: sha512-uPcWKbj/BJU3Tbw9XqhHqET4/LBOhvv3/SJWr7NksxA6TC5YqBpaZgawE9R+WpYFCBFSAE4Vun+xQS6w4ABdlA==} - events@3.3.0: resolution: {integrity: sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==} engines: {node: '>=0.8.x'} @@ -5974,6 +6012,10 @@ packages: resolution: {integrity: sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==} engines: {node: '>=6'} + expect-type@1.3.0: + resolution: {integrity: sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==} + engines: {node: '>=12.0.0'} + exsolve@1.0.8: resolution: {integrity: sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA==} @@ -6078,17 +6120,6 @@ packages: engines: {node: '>=18.3.0'} hasBin: true - foxact@0.3.0: - resolution: {integrity: sha512-CSlMlC0KlKQQEO83iLeQCLuT1V0OqnMWj7mjLstIDV8baMe1w4F7z3cz3/T+6Z8W12jqkQj07rwlw4Gi39knGg==} - peerDependencies: - react: '*' - react-dom: '*' - peerDependenciesMeta: - react: - optional: true - react-dom: - optional: true - fs-constants@1.0.0: resolution: {integrity: sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==} @@ -6157,8 +6188,8 @@ packages: resolution: {integrity: sha512-7ACyT3wmyp3I61S4fG682L0VA2RGD9otkqGJIwNUMF1SWUombIIk+af1unuDYgMm082aHYwD+mzJvv9Iu8dsgg==} engines: {node: '>=18'} - globals@17.4.0: - resolution: {integrity: sha512-hjrNztw/VajQwOLsMNT1cbJiH2muO3OROCHnbehc8eY5JyD2gqz4AcMHPqgaOR59DjgUjYAYLeH699g/eWi2jw==} + globals@17.5.0: + resolution: {integrity: sha512-qoV+HK2yFl/366t2/Cb3+xxPUo5BuMynomoDmiaZBIdbs+0pYbjfZU+twLhGKp4uCZ/+NbtpVepH5bGCxRyy2g==} engines: {node: '>=18'} globrex@0.1.2: @@ -6175,8 +6206,8 @@ packages: hachure-fill@0.5.2: resolution: {integrity: sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg==} - happy-dom@20.8.9: - resolution: {integrity: sha512-Tz23LR9T9jOGVZm2x1EPdXqwA37G/owYMxRwU0E4miurAtFsPMQ1d2Jc2okUaSjZqAFz2oEn3FLXC5a0a+siyA==} + happy-dom@20.9.0: + resolution: {integrity: sha512-GZZ9mKe8r646NUAf/zemnGbjYh4Bt8/MqASJY+pSm5ZDtc3YQox+4gsLI7yi1hba6o+eCsGxpHn5+iEVn31/FQ==} engines: {node: '>=20.0.0'} has-ansi@4.0.1: @@ -6236,8 +6267,8 @@ packages: resolution: {integrity: sha512-Ox1pJVrDCyGHMG9CFg1tmrRUMRPRsAWYc/PinY0XzJU4K7y7vjNoLKIQ7BR5UJMCxNN8EM1MNDmHWA/B3aZUuw==} engines: {node: '>=6'} - hono@4.12.12: - resolution: {integrity: sha512-p1JfQMKaceuCbpJKAPKVqyqviZdS0eUxH9v82oWo1kb9xjQ5wA6iP3FNVAPDFlz5/p7d45lO+BpSk1tuSZMF4Q==} + hono@4.12.14: + resolution: {integrity: sha512-am5zfg3yu6sqn5yjKBNqhnTX7Cv+m00ox+7jbaKkrLMRJ4rAdldd1xPd/JzbBWspqaQv6RSTrgFN95EsfhC+7w==} engines: {node: '>=16.9.0'} hosted-git-info@9.0.2: @@ -6268,8 +6299,8 @@ packages: i18next-resources-to-backend@1.2.1: resolution: {integrity: sha512-okHbVA+HZ7n1/76MsfhPqDou0fptl2dAlhRDu2ideXloRRduzHsqDOznJBef+R3DFZnbvWoBW+KxJ7fnFjd6Yw==} - i18next@26.0.3: - resolution: {integrity: sha512-1571kXINxHKY7LksWp8wP+zP0YqHSSpl/OW0Y0owFEf2H3s8gCAffWaZivcz14rMkOvn3R/psiQxVsR9t2Nafg==} + i18next@26.0.4: + resolution: {integrity: sha512-gXF7U9bfioXPLv7mw8Qt2nfO7vij5MyINvPgVv99pX3fL1Y01pw2mKBFrlYpRxRCl2wz3ISenj6VsMJT2isfuA==} peerDependencies: typescript: ^5 || ^6 peerDependenciesMeta: @@ -6488,6 +6519,10 @@ packages: resolution: {integrity: sha512-/2uqY7x6bsrpi3i9LVU6J89352C0rpMk0as8trXxCtvd4kPk1ke/Eyif6wqfSLvoNJqcDG9Vk4UsXgygzCt2xA==} engines: {node: '>=20.0.0'} + jsdoc-type-pratt-parser@7.2.0: + resolution: {integrity: sha512-dh140MMgjyg3JhJZY/+iEzW+NO5xR2gpbDFKHqotCmexElVntw7GjWjt511+C/Ef02RU5TKYrJo/Xlzk+OLaTw==} + engines: {node: '>=20.0.0'} + jsesc@3.1.0: resolution: {integrity: sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA==} engines: {node: '>=6'} @@ -6537,8 +6572,8 @@ packages: khroma@2.1.0: resolution: {integrity: sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw==} - knip@6.3.0: - resolution: {integrity: sha512-g6dVPoTw6iNm3cubC5IWxVkVsd0r5hXhTBTbAGIEQN53GdA2ZM/slMTPJ7n5l8pBebNQPHpxjmKxuR4xVQ2/hQ==} + knip@6.4.1: + resolution: {integrity: sha512-Ry+ywmDFSZvKp/jx7LxMgsZWRTs931alV84e60lh0Stf6kSRYqSIUTkviyyDFRcSO3yY1Kpbi83OirN+4lA2Xw==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true @@ -6578,8 +6613,8 @@ packages: '@lexical/utils': '>=0.28.0' lexical: '>=0.28.0' - lexical@0.42.0: - resolution: {integrity: sha512-GY9Lg3YEIU7nSFaiUlLspZ1fm4NfIcfABaxy9nT+fRVDkX7iV005T5Swil83gXUmxFUNKGal3j+hUxHOUDr+Aw==} + lexical@0.43.0: + resolution: {integrity: sha512-waSeXyt1HxTFpU8KNRA3IQcvjvpw0lZNaSbGopfOi4bLV0FF9zYpqiScTnEUMP/b1W7qWmD4Z2Detw43XICxqQ==} lib0@0.2.117: resolution: {integrity: sha512-DeXj9X5xDCjgKLU/7RR+/HQEVzuuEUiwldwOGsHK/sfAfELGWEyTcf0x+uOvCvK3O2zPmZePXWL85vtia6GyZw==} @@ -6699,6 +6734,9 @@ packages: resolution: {integrity: sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==} hasBin: true + loro-crdt@1.10.8: + resolution: {integrity: sha512-GvH8fSJST1VDHRGzlQml80pBYoFbIP4ULeV1S8fD4ffmA8m+icoPORyVUW2AkJBY3dxKIcMMn0WqaJmpCmnbkQ==} + loupe@3.2.1: resolution: {integrity: sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ==} @@ -6818,8 +6856,8 @@ packages: mdn-data@2.0.30: resolution: {integrity: sha512-GaqWWShW4kv/G9IEucWScBx9G1/vsFZZJUO+tD26M8J8z3Kw5RDQjaoZe03YAClgeS/SWPOcb4nkFBTEi5DUEA==} - mdn-data@2.23.0: - resolution: {integrity: sha512-786vq1+4079JSeu2XdcDjrhi/Ry7BWtjDl9WtGPWLiIHb2T66GvIVflZTBoSNZ5JqTtJGYEVMuFA/lbQlMOyDQ==} + mdn-data@2.27.1: + resolution: {integrity: sha512-9Yubnt3e8A0OKwxYSXyhLymGW4sCufcLG6VdiDdUGVkPhpqLxlvP5vl1983gQjJl3tqbrM731mjaZaP68AgosQ==} merge-stream@2.0.0: resolution: {integrity: sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==} @@ -7057,8 +7095,8 @@ packages: react: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc react-dom: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc - next@16.2.2: - resolution: {integrity: sha512-i6AJdyVa4oQjyvX/6GeER8dpY/xlIV+4NMv/svykcLtURJSy/WzDnnUk/TM4d0uewFHK7xSQz4TbIwPgjky+3A==} + next@16.2.3: + resolution: {integrity: sha512-9V3zV4oZFza3PVev5/poB9g0dEafVcgNyQ8eTRop8GvxZjV2G15FC5ARuG1eFD42QgeYkzJBJzHghNP8Ad9xtA==} engines: {node: '>=20.9.0'} hasBin: true peerDependencies: @@ -7166,8 +7204,8 @@ packages: oxc-resolver@11.19.1: resolution: {integrity: sha512-qE/CIg/spwrTBFt5aKmwe3ifeDdLfA2NESN30E42X/lII5ClF8V7Wt6WIJhcGZjp0/Q+nQ+9vgxGk//xZNX2hg==} - oxfmt@0.43.0: - resolution: {integrity: sha512-KTYNG5ISfHSdmeZ25Xzb3qgz9EmQvkaGAxgBY/p38+ZiAet3uZeu7FnMwcSQJg152Qwl0wnYAxDc+Z/H6cvrwA==} + oxfmt@0.45.0: + resolution: {integrity: sha512-0o/COoN9fY50bjVeM7PQsNgbhndKurBIeTIcspW033OumksjJJmIVDKjAk5HMwU/GHTxSOdGDdhJ6BRzGPmsHg==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true @@ -7175,8 +7213,8 @@ packages: resolution: {integrity: sha512-/Uc9TQyN1l8w9QNvXtVHYtz+SzDJHKpb5X0UnHodl0BVzijUPk0LPlDOHAvogd1UI+iy9ZSF6gQxEqfzUxCULQ==} hasBin: true - oxlint@1.58.0: - resolution: {integrity: sha512-t4s9leczDMqlvOSjnbCQe7gtoLkWgBGZ7sBdCJ9EOj5IXFSG/X7OAzK4yuH4iW+4cAYe8kLFbC8tuYMwWZm+Cg==} + oxlint@1.60.0: + resolution: {integrity: sha512-tnRzTWiWJ9pg3ftRWnD0+Oqh78L6ZSwcEudvCZaER0PIqiAnNyXj5N1dPwjmNpDalkKS9m/WMLN1CTPUBPmsgw==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true peerDependencies: @@ -7299,8 +7337,8 @@ packages: resolution: {integrity: sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==} engines: {node: '>=12'} - pinyin-pro@3.28.0: - resolution: {integrity: sha512-mMRty6RisoyYNphJrTo3pnvp3w8OMZBrXm9YSWkxhAfxKj1KZk2y8T2PDIZlDDRsvZ0No+Hz6FI4sZpA6Ey25g==} + pinyin-pro@3.28.1: + resolution: {integrity: sha512-oqz8ulwRgtUXRi0vbqEfGNly19zpyCxYrjhkk5TibGcgSW6eNwS5woajCXRwqURi8Ehc2yOFTiB4uNoZ+NJOnA==} pixelmatch@7.1.0: resolution: {integrity: sha512-1wrVzJ2STrpmONHKBy228LM1b84msXDUoAzVEl0R8Mz4Ce6EPr+IVtxm8+yvrqLYMHswREkjYFaMxnyGnaY3Ng==} @@ -7405,8 +7443,8 @@ packages: peerDependencies: react: ^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 - qs@6.15.0: - resolution: {integrity: sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==} + qs@6.15.1: + resolution: {integrity: sha512-6YHEFRL9mfgcAvql/XhwTvf5jKcOiiupt2FiJxHkiX1z4j7WL8J/jRHYLluORvc1XxB5rV20KoeK00gVJamspg==} engines: {node: '>=0.6'} quansync@0.2.11: @@ -7443,10 +7481,10 @@ packages: resolution: {integrity: sha512-aEZ9qP+/M+58x2qgfSFEWH1BxLyHe5+qkLNJOZQb5iGS017jpbRnoKhNRrXPeA6RfBrZO5wZrT9DMC1UqE1f1w==} engines: {node: ^20.9.0 || >=22} - react-dom@19.2.4: - resolution: {integrity: sha512-AXJdLo8kgMbimY95O2aKQqsz2iWi9jMgKJhRBAxECE4IFxfcazB2LmzloIoibJI3C12IlY20+KFaLv+71bUJeQ==} + react-dom@19.2.5: + resolution: {integrity: sha512-J5bAZz+DXMMwW/wV3xzKke59Af6CHY7G4uYLN1OvBcKEsWOs4pQExj86BBKamxl/Ik5bx9whOrvBlSDfWzgSag==} peerDependencies: - react: ^19.2.4 + react: ^19.2.5 react-draggable@4.5.0: resolution: {integrity: sha512-VC+HBLEZ0XJxnOxVAZsdRi8rD04Iz3SiiKOoYzamjylUcju/hP9np/aZdLHf/7WOD268WMoNJMvYfB5yAK45cw==} @@ -7474,14 +7512,14 @@ packages: react: '>=16.8.0' react-dom: '>=16.8.0' - react-i18next@17.0.2: - resolution: {integrity: sha512-shBftH2vaTWK2Bsp7FiL+cevx3xFJlvFxmsDFQSrJc+6twHkP0tv/bGa01VVWzpreUVVwU+3Hev5iFqRg65RwA==} + react-i18next@16.5.8: + resolution: {integrity: sha512-2ABeHHlakxVY+LSirD+OiERxFL6+zip0PaHo979bgwzeHg27Sqc82xxXWIrSFmfWX0ZkrvXMHwhsi/NGUf5VQg==} peerDependencies: - i18next: '>= 26.0.1' + i18next: '>= 25.6.2' react: '>= 16.8.0' react-dom: '*' react-native: '*' - typescript: ^5 || ^6 + typescript: ^5 peerDependenciesMeta: react-dom: optional: true @@ -7538,12 +7576,12 @@ packages: react: '>=16.3.0' react-dom: '>=16.3.0' - react-server-dom-webpack@19.2.4: - resolution: {integrity: sha512-zEhkWv6RhXDctC2N7yEUHg3751nvFg81ydHj8LTTZuukF/IF1gcOKqqAL6Ds+kS5HtDVACYPik0IvzkgYXPhlQ==} + react-server-dom-webpack@19.2.5: + resolution: {integrity: sha512-bYhdd2cZJhXHqyJBoloYaJrn8MrL9Egf3ZZVn0OrIODCCORm2goFD7C+xszf6xgfsSJi0rtgB/ichcuHfkJ4yQ==} engines: {node: '>=0.10.0'} peerDependencies: - react: ^19.2.4 - react-dom: ^19.2.4 + react: ^19.2.5 + react-dom: ^19.2.5 webpack: ^5.59.0 react-sortablejs@6.1.4: @@ -7570,8 +7608,8 @@ packages: peerDependencies: react: ^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 - react@19.2.4: - resolution: {integrity: sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==} + react@19.2.5: + resolution: {integrity: sha512-llUJLzz1zTUBrskt2pwZgLq59AemifIftw4aB7JxOqf1HY2FDaGDxgwpAPVzHU1kdWabH7FauP4i1oEeer2WCA==} engines: {node: '>=0.10.0'} reactflow@11.11.4: @@ -7728,11 +7766,6 @@ packages: robust-predicates@3.0.3: resolution: {integrity: sha512-NS3levdsRIUOmiJ8FZWCP7LG3QpJyrs/TE0Zpf1yvZu8cAJJ6QMW92H1c7kWpdIHo8RvmLxN/o2JXTKHp74lUA==} - rolldown@1.0.0-rc.12: - resolution: {integrity: sha512-yP4USLIMYrwpPHEFB5JGH1uxhcslv6/hL0OyvTuY+3qlOSJvZ7ntYnoWpehBxufkgN0cvXxppuTu5hHa/zPh+A==} - engines: {node: ^20.19.0 || >=22.12.0} - hasBin: true - rollup@4.59.0: resolution: {integrity: sha512-2oMpl67a3zCH9H79LeMcbDhXW/UmWG/y2zuqnF2jQq5uq9TbM9TVyXvA4+t+ne2IIkBdrLpAaRQAvo7YI/Yyeg==} engines: {node: '>=18.0.0', npm: '>=8.0.0'} @@ -7807,9 +7840,6 @@ packages: resolution: {integrity: sha512-OwrZRZAfhHww0WEnKHDY8OM0U/Qs8OTfIDWhUD4BLpNJUfXK4cGmjiagGze086m+mhI+V2nD0gfbHEnJjb9STA==} engines: {node: '>=10'} - server-only@0.0.1: - resolution: {integrity: sha512-qepMx2JxAa5jjfzxG79yPPq+8BuFToHd1hm7kI+Z4zAq1ftQiP7HcxMhDDItrbtwVeLg/cY2JnKnrcFkmiswNA==} - sharp@0.34.5: resolution: {integrity: sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} @@ -7826,6 +7856,9 @@ packages: resolution: {integrity: sha512-eAVKTMedR5ckPo4xne/PjYQYrU3qx78gtJZ+sHlXEg5IHhhoQhMfZVzetTYuaJS0L2Ef3AcCRzCHV8T0WI6nIQ==} engines: {node: '>=20'} + siginfo@2.0.0: + resolution: {integrity: sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==} + simple-concat@1.0.1: resolution: {integrity: sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q==} @@ -7846,6 +7879,14 @@ packages: resolution: {integrity: sha512-dWUG8F5sIIARXih1DTaQAX4SsiTXhInKf1buxdY9DIg4ZYPZK5nGM1VRIYmEbDbsHt7USo99xSLFu5Q1IqTmsg==} engines: {node: '>= 18'} + socket.io-client@4.8.3: + resolution: {integrity: sha512-uP0bpjWrjQmUt5DTHq9RuoCBdFJF10cdX9X+a368j/Ft0wmaVgxlrjvK3kjvgCODOMMOz9lcaRzxmso0bTWZ/g==} + engines: {node: '>=10.0.0'} + + socket.io-parser@4.2.6: + resolution: {integrity: sha512-asJqbVBDsBCJx0pTqw3WfesSY0iRX+2xzWEWzrpcH7L6fLzrhyF8WPI8UaeM4YCuDfpwA/cgsdugMsmtz8EJeg==} + engines: {node: '>=10.0.0'} + solid-js@1.9.11: resolution: {integrity: sha512-WEJtcc5mkh/BnHA6Yrg4whlF8g6QwpmXXRg4P2ztPmcKeHHlH4+djYecBLhSpecZY2RRECXYUwIc/C2r3yzQ4Q==} @@ -7890,6 +7931,9 @@ packages: engines: {node: '>=20.16.0'} hasBin: true + stackback@0.0.2: + resolution: {integrity: sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw==} + stackframe@1.3.4: resolution: {integrity: sha512-oeVtt7eWQS+Na6F//S4kJ2K2VbRlS9D43mAlMyVpVWovy9o+jfgH8O9agzANzaiLjclA0oYzUXEM4PurhSUChw==} @@ -8022,8 +8066,8 @@ packages: resolution: {integrity: sha512-yEFYrVhod+hdNyx7g5Bnkkb0G6si8HJurOoOEgC8B/O0uXLHlaey/65KRv6cuWBNhBgHKAROVpc7QyYqE5gFng==} engines: {node: '>=20'} - tailwind-csstree@0.1.5: - resolution: {integrity: sha512-ZHCKXz+TcBj7CJYStiuAtNenPpdHMrhgotOSNJ3UQTSTgwTfAyoyTA2SNW4oD8+2T6xt6awM7CZSU2+PXx9V3w==} + tailwind-csstree@0.3.1: + resolution: {integrity: sha512-v147gLOR+E+9H4dNaP9rBeS/S/CTQJMRItlX9jLOXjdBGfSRauLwiz7LBCViaQmn6URXIlOdN6iMzSzOaeoUUw==} engines: {node: '>=18.18'} peerDependencies: '@eslint/css': '>=1.0.0' @@ -8103,6 +8147,10 @@ packages: resolution: {integrity: sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==} engines: {node: '>=12.0.0'} + tinyglobby@0.2.16: + resolution: {integrity: sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg==} + engines: {node: '>=12.0.0'} + tinypool@2.1.0: resolution: {integrity: sha512-Pugqs6M0m7Lv1I7FtxN4aoyToKg1C4tu+/381vH35y8oENM/Ai7f7C4StcoK4/+BSw9ebcS8jRiVrORFKCALLw==} engines: {node: ^20.0.0 || >=22.0.0} @@ -8246,8 +8294,8 @@ packages: resolution: {integrity: sha512-X2wH19RAPZE3+ldGicOkoj/SIA83OIxcJ6Cuaw23hf8Xc6fQpvZXY0SftE2JgS0QhYLUG4uwodSI3R53keyh7w==} engines: {node: '>=14'} - undici-types@7.18.2: - resolution: {integrity: sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w==} + undici-types@7.19.2: + resolution: {integrity: sha512-qYVnV5OEm2AW8cJMCpdV20CDyaN3g0AjDlOGf1OW4iaDEx8MwdtChUp4zu4H0VP3nDRF/8RKWH+IPp9uW0YGZg==} undici@7.24.0: resolution: {integrity: sha512-jxytwMHhsbdpBXxLAcuu0fzlQeXCNnWdDyRHpvWsUl8vd98UwYdl9YTyn8/HcpcJPC3pwUveefsa3zTxyD/ERg==} @@ -8409,17 +8457,17 @@ packages: vfile@6.0.3: resolution: {integrity: sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q==} - vinext@0.0.40: - resolution: {integrity: sha512-rs0z6G2el6kS/667ERKQjSMF3R8ZD2H9xDrnRntVOa6OBnyYcOMM/AVpOy/W1lxOkq6EYTO1OUD9DbNSWxRRJw==} + vinext@0.0.41: + resolution: {integrity: sha512-fpQjNp6cIqjYGH2/kbhN2SdIYHEu79RdlII23SWsY1Qp7LM+je8GfTJH1sxw6dASxPhZKZB/jCmTm5d2/D25zw==} engines: {node: '>=22'} hasBin: true peerDependencies: '@mdx-js/rollup': ^3.0.0 '@vitejs/plugin-react': ^5.1.4 || ^6.0.0 - '@vitejs/plugin-rsc': ^0.5.21 - react: '>=19.2.0' - react-dom: '>=19.2.0' - react-server-dom-webpack: ^19.2.4 + '@vitejs/plugin-rsc': ^0.5.23 + react: ^19.2.5 + react-dom: ^19.2.5 + react-server-dom-webpack: ^19.2.5 vite: ^7.0.0 || ^8.0.0 peerDependenciesMeta: '@mdx-js/rollup': @@ -8452,8 +8500,8 @@ packages: storybook: ^0.0.0-0 || ^9.0.0 || ^10.0.0 || ^10.0.0-0 || ^10.1.0-0 || ^10.2.0-0 || ^10.3.0-0 || ^10.4.0-0 vite: ^5.0.0 || ^6.0.0 || ^7.0.0 || ^8.0.0 - vite-plus@0.1.16: - resolution: {integrity: sha512-sgYHc5zWLSDInaHb/abvEA7UOwh7sUWuyNt+Slphj55jPvzodT8Dqw115xyKwDARTuRFSpm1eo/t58qZ8/NylQ==} + vite-plus@0.1.18: + resolution: {integrity: sha512-RiWUoOmQiJMtd4Dfm6WD0v0Selqh/nQzmaGVIrkfnr+2s5UxGVZy7n2TCO5ZnR7w9noMIgtUAQN8GtKhwHEiOQ==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true @@ -8470,49 +8518,6 @@ packages: peerDependencies: vite: '*' - vite@8.0.3: - resolution: {integrity: sha512-B9ifbFudT1TFhfltfaIPgjo9Z3mDynBTJSUYxTjOQruf/zHH+ezCQKcoqO+h7a9Pw9Nm/OtlXAiGT1axBgwqrQ==} - engines: {node: ^20.19.0 || >=22.12.0} - hasBin: true - peerDependencies: - '@types/node': ^20.19.0 || >=22.12.0 - '@vitejs/devtools': ^0.1.0 - esbuild: 0.27.2 - jiti: '>=1.21.0' - less: ^4.0.0 - sass: ^1.70.0 - sass-embedded: ^1.70.0 - stylus: '>=0.54.8' - sugarss: ^5.0.0 - terser: ^5.16.0 - tsx: ^4.8.1 - yaml: 2.8.3 - peerDependenciesMeta: - '@types/node': - optional: true - '@vitejs/devtools': - optional: true - esbuild: - optional: true - jiti: - optional: true - less: - optional: true - sass: - optional: true - sass-embedded: - optional: true - stylus: - optional: true - sugarss: - optional: true - terser: - optional: true - tsx: - optional: true - yaml: - optional: true - vitefu@1.1.3: resolution: {integrity: sha512-ub4okH7Z5KLjb6hDyjqrGXqWtWvoYdU3IGm/NorpgHncKoLTCfRIbvlhBm7r0YstIaQRYlp4yEbFqDcKSzXSSg==} peerDependencies: @@ -8526,6 +8531,47 @@ packages: peerDependencies: vitest: ^3.0.0 || ^4.0.0 + vitest@4.1.4: + resolution: {integrity: sha512-tFuJqTxKb8AvfyqMfnavXdzfy3h3sWZRWwfluGbkeR7n0HUev+FmNgZ8SDrRBTVrVCjgH5cA21qGbCffMNtWvg==} + engines: {node: ^20.0.0 || ^22.0.0 || >=24.0.0} + hasBin: true + peerDependencies: + '@edge-runtime/vm': '*' + '@opentelemetry/api': ^1.9.0 + '@types/node': ^20.0.0 || ^22.0.0 || >=24.0.0 + '@vitest/browser-playwright': 4.1.4 + '@vitest/browser-preview': 4.1.4 + '@vitest/browser-webdriverio': 4.1.4 + '@vitest/coverage-istanbul': 4.1.4 + '@vitest/coverage-v8': 4.1.4 + '@vitest/ui': 4.1.4 + happy-dom: '*' + jsdom: '*' + vite: ^6.0.0 || ^7.0.0 || ^8.0.0 + peerDependenciesMeta: + '@edge-runtime/vm': + optional: true + '@opentelemetry/api': + optional: true + '@types/node': + optional: true + '@vitest/browser-playwright': + optional: true + '@vitest/browser-preview': + optional: true + '@vitest/browser-webdriverio': + optional: true + '@vitest/coverage-istanbul': + optional: true + '@vitest/coverage-v8': + optional: true + '@vitest/ui': + optional: true + happy-dom: + optional: true + jsdom: + optional: true + void-elements@3.1.0: resolution: {integrity: sha512-Dhxzh5HZuiHQhbvTW9AMetFfBHDMYpo23Uo9btPXgdYP+3T5S+p+jgNy7spra+veYhBP2dCSgxR/i2Y02h5/6w==} engines: {node: '>=0.10.0'} @@ -8605,6 +8651,11 @@ packages: engines: {node: '>= 8'} hasBin: true + why-is-node-running@2.3.0: + resolution: {integrity: sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w==} + engines: {node: '>=8'} + hasBin: true + word-wrap@1.2.5: resolution: {integrity: sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==} engines: {node: '>=0.10.0'} @@ -8612,6 +8663,18 @@ packages: wrappy@1.0.2: resolution: {integrity: sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==} + ws@8.18.3: + resolution: {integrity: sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==} + engines: {node: '>=10.0.0'} + peerDependencies: + bufferutil: ^4.0.1 + utf-8-validate: '>=5.0.2' + peerDependenciesMeta: + bufferutil: + optional: true + utf-8-validate: + optional: true + ws@8.20.0: resolution: {integrity: sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==} engines: {node: '>=10.0.0'} @@ -8640,6 +8703,10 @@ packages: resolution: {integrity: sha512-yMqGBqtXyeN1e3TGYvgNgDVZ3j84W4cwkOXQswghol6APgZWaff9lnbvN7MHYJOiXsvGPXtjTYJEiC9J2wv9Eg==} engines: {node: '>=8.0'} + xmlhttprequest-ssl@2.1.2: + resolution: {integrity: sha512-TEU+nJVUUnA4CYJFLvK5X9AOeH4KvDvhIfm0vV1GaQRtchnG0hgK5p8hw/xjv8cunWYCsiPCSDzObPyhEwq3KQ==} + engines: {node: '>=0.4.0'} + yallist@3.1.1: resolution: {integrity: sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==} @@ -8734,27 +8801,27 @@ snapshots: '@alloc/quick-lru@5.2.0': {} - '@amplitude/analytics-browser@2.38.1': + '@amplitude/analytics-browser@2.39.0': dependencies: - '@amplitude/analytics-core': 2.44.1 - '@amplitude/plugin-autocapture-browser': 1.25.1 - '@amplitude/plugin-custom-enrichment-browser': 0.1.3 - '@amplitude/plugin-network-capture-browser': 1.9.12 - '@amplitude/plugin-page-url-enrichment-browser': 0.7.4 - '@amplitude/plugin-page-view-tracking-browser': 2.9.5 - '@amplitude/plugin-web-vitals-browser': 1.1.27 + '@amplitude/analytics-core': 2.45.0 + '@amplitude/plugin-autocapture-browser': 1.25.2 + '@amplitude/plugin-custom-enrichment-browser': 0.1.4 + '@amplitude/plugin-network-capture-browser': 1.9.13 + '@amplitude/plugin-page-url-enrichment-browser': 0.7.5 + '@amplitude/plugin-page-view-tracking-browser': 2.9.6 + '@amplitude/plugin-web-vitals-browser': 1.1.28 tslib: 2.8.1 - '@amplitude/analytics-client-common@2.4.42': + '@amplitude/analytics-client-common@2.4.43': dependencies: '@amplitude/analytics-connector': 1.6.4 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 tslib: 2.8.1 '@amplitude/analytics-connector@1.6.4': {} - '@amplitude/analytics-core@2.44.1': + '@amplitude/analytics-core@2.45.0': dependencies: '@amplitude/analytics-connector': 1.6.4 '@types/zen-observable': 0.8.3 @@ -8768,48 +8835,48 @@ snapshots: dependencies: js-base64: 3.7.8 - '@amplitude/plugin-autocapture-browser@1.25.1': + '@amplitude/plugin-autocapture-browser@1.25.2': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-custom-enrichment-browser@0.1.3': + '@amplitude/plugin-custom-enrichment-browser@0.1.4': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-network-capture-browser@1.9.12': + '@amplitude/plugin-network-capture-browser@1.9.13': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-page-url-enrichment-browser@0.7.4': + '@amplitude/plugin-page-url-enrichment-browser@0.7.5': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-page-view-tracking-browser@2.9.5': + '@amplitude/plugin-page-view-tracking-browser@2.9.6': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-session-replay-browser@1.27.6(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': + '@amplitude/plugin-session-replay-browser@1.27.7(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': dependencies: - '@amplitude/analytics-client-common': 2.4.42 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-client-common': 2.4.43 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 '@amplitude/rrweb-plugin-console-record': 2.0.0-alpha.36(@amplitude/rrweb@2.0.0-alpha.37) '@amplitude/rrweb-record': 2.0.0-alpha.36 - '@amplitude/session-replay-browser': 1.35.1(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) + '@amplitude/session-replay-browser': 1.36.0(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) idb-keyval: 6.2.2 tslib: 2.8.1 transitivePeerDependencies: - '@amplitude/rrweb' - rollup - '@amplitude/plugin-web-vitals-browser@1.1.27': + '@amplitude/plugin-web-vitals-browser@1.1.28': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 web-vitals: 5.1.0 @@ -8854,10 +8921,10 @@ snapshots: base64-arraybuffer: 1.0.2 mitt: 3.0.1 - '@amplitude/session-replay-browser@1.35.1(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': + '@amplitude/session-replay-browser@1.36.0(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': dependencies: - '@amplitude/analytics-client-common': 2.4.42 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-client-common': 2.4.43 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 '@amplitude/experiment-core': 0.7.2 '@amplitude/rrweb-packer': 2.0.0-alpha.36 @@ -8875,24 +8942,24 @@ snapshots: '@amplitude/targeting@0.2.0': dependencies: - '@amplitude/analytics-client-common': 2.4.42 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-client-common': 2.4.43 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 '@amplitude/experiment-core': 0.7.2 idb: 8.0.0 tslib: 2.8.1 - '@antfu/eslint-config@8.0.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.2)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)': + '@antfu/eslint-config@8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)': dependencies: '@antfu/install-pkg': 1.1.0 '@clack/prompts': 1.2.0 - '@e18e/eslint-plugin': 0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0)) + '@e18e/eslint-plugin': 0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0)) '@eslint-community/eslint-plugin-eslint-comments': 4.7.1(eslint@10.2.0(jiti@2.6.1)) '@eslint/markdown': 8.0.1 '@stylistic/eslint-plugin': 5.10.0(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/eslint-plugin': 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/parser': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@vitest/eslint-plugin': 1.6.14(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@vitest/eslint-plugin': 1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) ansis: 4.2.0 cac: 7.0.0 eslint: 10.2.0(jiti@2.6.1) @@ -8900,22 +8967,22 @@ snapshots: eslint-flat-config-utils: 3.1.0 eslint-merge-processors: 2.0.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-antfu: 3.2.2(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-command: 3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-command: 3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-import-lite: 0.6.0(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-jsdoc: 62.8.1(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-jsdoc: 62.9.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-jsonc: 3.1.2(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-n: 17.24.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-no-only-tests: 3.3.0 - eslint-plugin-perfectionist: 5.7.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint-plugin-perfectionist: 5.8.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-pnpm: 1.6.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-regexp: 3.1.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-toml: 1.3.1(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-unicorn: 64.0.0(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-unused-imports: 4.4.1(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-vue: 10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))) + eslint-plugin-unused-imports: 4.4.1(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-vue: 10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))) eslint-plugin-yml: 3.3.1(eslint@10.2.0(jiti@2.6.1)) eslint-processor-vue-blocks: 2.0.0(@vue/compiler-sfc@3.5.31)(eslint@10.2.0(jiti@2.6.1)) - globals: 17.4.0 + globals: 17.5.0 local-pkg: 1.1.2 parse-gitignore: 2.0.0 toml-eslint-parser: 1.0.3 @@ -8923,7 +8990,61 @@ snapshots: yaml-eslint-parser: 2.0.0 optionalDependencies: '@eslint-react/eslint-plugin': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@next/eslint-plugin-next': 16.2.2 + '@next/eslint-plugin-next': 16.2.3 + eslint-plugin-react-refresh: 0.5.2(eslint@10.2.0(jiti@2.6.1)) + transitivePeerDependencies: + - '@eslint/json' + - '@typescript-eslint/rule-tester' + - '@typescript-eslint/typescript-estree' + - '@typescript-eslint/utils' + - '@vue/compiler-sfc' + - oxlint + - supports-color + - typescript + - vitest + + '@antfu/eslint-config@8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)(vitest@4.1.4)': + dependencies: + '@antfu/install-pkg': 1.1.0 + '@clack/prompts': 1.2.0 + '@e18e/eslint-plugin': 0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0)) + '@eslint-community/eslint-plugin-eslint-comments': 4.7.1(eslint@10.2.0(jiti@2.6.1)) + '@eslint/markdown': 8.0.1 + '@stylistic/eslint-plugin': 5.10.0(eslint@10.2.0(jiti@2.6.1)) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@vitest/eslint-plugin': 1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)(vitest@4.1.4) + ansis: 4.2.0 + cac: 7.0.0 + eslint: 10.2.0(jiti@2.6.1) + eslint-config-flat-gitignore: 2.3.0(eslint@10.2.0(jiti@2.6.1)) + eslint-flat-config-utils: 3.1.0 + eslint-merge-processors: 2.0.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-antfu: 3.2.2(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-command: 3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-import-lite: 0.6.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-jsdoc: 62.9.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-jsonc: 3.1.2(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-n: 17.24.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint-plugin-no-only-tests: 3.3.0 + eslint-plugin-perfectionist: 5.8.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint-plugin-pnpm: 1.6.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-regexp: 3.1.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-toml: 1.3.1(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-unicorn: 64.0.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-unused-imports: 4.4.1(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-vue: 10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))) + eslint-plugin-yml: 3.3.1(eslint@10.2.0(jiti@2.6.1)) + eslint-processor-vue-blocks: 2.0.0(@vue/compiler-sfc@3.5.31)(eslint@10.2.0(jiti@2.6.1)) + globals: 17.5.0 + local-pkg: 1.1.2 + parse-gitignore: 2.0.0 + toml-eslint-parser: 1.0.3 + vue-eslint-parser: 10.4.0(eslint@10.2.0(jiti@2.6.1)) + yaml-eslint-parser: 2.0.0 + optionalDependencies: + '@eslint-react/eslint-plugin': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@next/eslint-plugin-next': 16.2.3 eslint-plugin-react-refresh: 0.5.2(eslint@10.2.0(jiti@2.6.1)) transitivePeerDependencies: - '@eslint/json' @@ -9045,27 +9166,28 @@ snapshots: '@babel/helper-string-parser': 7.27.1 '@babel/helper-validator-identifier': 7.28.5 - '@base-ui/react@1.3.0(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@base-ui/react@1.4.0(@date-fns/tz@1.4.1)(@types/react@19.2.14)(date-fns@4.1.0)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@babel/runtime': 7.29.2 - '@base-ui/utils': 0.2.6(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@floating-ui/react-dom': 2.1.8(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@base-ui/utils': 0.2.7(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@date-fns/tz': 1.4.1 + '@floating-ui/react-dom': 2.1.8(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@floating-ui/utils': 0.2.11 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - tabbable: 6.4.0 - use-sync-external-store: 1.6.0(react@19.2.4) + date-fns: 4.1.0 + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + use-sync-external-store: 1.6.0(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 - '@base-ui/utils@0.2.6(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@base-ui/utils@0.2.7(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@babel/runtime': 7.29.2 '@floating-ui/utils': 0.2.11 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) reselect: 5.1.1 - use-sync-external-store: 1.6.0(react@19.2.4) + use-sync-external-store: 1.6.0(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 @@ -9090,13 +9212,13 @@ snapshots: '@chevrotain/utils@11.1.2': {} - '@chromatic-com/storybook@5.1.1(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))': + '@chromatic-com/storybook@5.1.2(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': dependencies: '@neoconfetti/react': 1.0.0 chromatic: 13.3.5 filesize: 10.1.6 jsonfile: 6.2.0 - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) strip-ansi: 7.2.0 transitivePeerDependencies: - '@chromatic-com/cypress' @@ -9176,18 +9298,18 @@ snapshots: dependencies: regexp-match-indices: 1.0.2 - '@cucumber/cucumber@12.7.0': + '@cucumber/cucumber@12.8.0': dependencies: '@cucumber/ci-environment': 13.0.0 '@cucumber/cucumber-expressions': 19.0.0 '@cucumber/gherkin': 38.0.0 - '@cucumber/gherkin-streams': 6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.0.1(@cucumber/messages@32.0.1))(@cucumber/messages@32.0.1) + '@cucumber/gherkin-streams': 6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.1.1(@cucumber/messages@32.2.0))(@cucumber/messages@32.2.0) '@cucumber/gherkin-utils': 11.0.0 - '@cucumber/html-formatter': 23.0.0(@cucumber/messages@32.0.1) - '@cucumber/junit-xml-formatter': 0.9.0(@cucumber/messages@32.0.1) - '@cucumber/message-streams': 4.0.1(@cucumber/messages@32.0.1) - '@cucumber/messages': 32.0.1 - '@cucumber/pretty-formatter': 1.0.1(@cucumber/cucumber@12.7.0)(@cucumber/messages@32.0.1) + '@cucumber/html-formatter': 23.0.0(@cucumber/messages@32.2.0) + '@cucumber/junit-xml-formatter': 0.13.2(@cucumber/messages@32.2.0) + '@cucumber/message-streams': 4.1.1(@cucumber/messages@32.2.0) + '@cucumber/messages': 32.2.0 + '@cucumber/pretty-formatter': 1.0.1(@cucumber/cucumber@12.8.0)(@cucumber/messages@32.2.0) '@cucumber/tag-expressions': 9.1.0 assertion-error-formatter: 3.0.0 capital-case: 1.0.4 @@ -9206,7 +9328,6 @@ snapshots: lodash.merge: 4.6.2 lodash.mergewith: 4.6.2 luxon: 3.7.2 - mime: 3.0.0 mkdirp: 3.0.1 mz: 2.7.0 progress: 2.0.3 @@ -9219,79 +9340,82 @@ snapshots: yaml: 2.8.3 yup: 1.7.1 - '@cucumber/gherkin-streams@6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.0.1(@cucumber/messages@32.0.1))(@cucumber/messages@32.0.1)': + '@cucumber/gherkin-streams@6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.1.1(@cucumber/messages@32.2.0))(@cucumber/messages@32.2.0)': dependencies: '@cucumber/gherkin': 38.0.0 - '@cucumber/message-streams': 4.0.1(@cucumber/messages@32.0.1) - '@cucumber/messages': 32.0.1 + '@cucumber/message-streams': 4.1.1(@cucumber/messages@32.2.0) + '@cucumber/messages': 32.2.0 commander: 14.0.0 source-map-support: 0.5.21 '@cucumber/gherkin-utils@11.0.0': dependencies: '@cucumber/gherkin': 38.0.0 - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 '@teppeis/multimaps': 3.0.0 commander: 14.0.2 source-map-support: 0.5.21 '@cucumber/gherkin@38.0.0': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 - '@cucumber/html-formatter@23.0.0(@cucumber/messages@32.0.1)': + '@cucumber/html-formatter@23.0.0(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 - '@cucumber/junit-xml-formatter@0.9.0(@cucumber/messages@32.0.1)': + '@cucumber/junit-xml-formatter@0.13.2(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 - '@cucumber/query': 14.7.0(@cucumber/messages@32.0.1) + '@cucumber/messages': 32.2.0 + '@cucumber/query': 14.7.0(@cucumber/messages@32.2.0) '@teppeis/multimaps': 3.0.0 luxon: 3.7.2 xmlbuilder: 15.1.1 - '@cucumber/message-streams@4.0.1(@cucumber/messages@32.0.1)': + '@cucumber/message-streams@4.1.1(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 + mime: 3.0.0 - '@cucumber/messages@32.0.1': + '@cucumber/messages@32.2.0': dependencies: class-transformer: 0.5.1 reflect-metadata: 0.2.2 - '@cucumber/pretty-formatter@1.0.1(@cucumber/cucumber@12.7.0)(@cucumber/messages@32.0.1)': + '@cucumber/pretty-formatter@1.0.1(@cucumber/cucumber@12.8.0)(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/cucumber': 12.7.0 - '@cucumber/messages': 32.0.1 + '@cucumber/cucumber': 12.8.0 + '@cucumber/messages': 32.2.0 ansi-styles: 5.2.0 cli-table3: 0.6.5 figures: 3.2.0 ts-dedent: 2.2.0 - '@cucumber/query@14.7.0(@cucumber/messages@32.0.1)': + '@cucumber/query@14.7.0(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 '@teppeis/multimaps': 3.0.0 lodash.sortby: 4.7.0 '@cucumber/tag-expressions@9.1.0': {} - '@e18e/eslint-plugin@0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))': + '@date-fns/tz@1.4.1': {} + + '@e18e/eslint-plugin@0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))': dependencies: eslint-plugin-depend: 1.5.0(eslint@10.2.0(jiti@2.6.1)) optionalDependencies: eslint: 10.2.0(jiti@2.6.1) - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) + oxlint: 1.60.0(oxlint-tsgolint@0.20.0) '@egoist/tailwindcss-icons@1.9.2(tailwindcss@4.2.2)': dependencies: '@iconify/utils': 3.1.0 tailwindcss: 4.2.2 - '@emnapi/core@1.9.1': + '@emnapi/core@1.9.2': dependencies: - '@emnapi/wasi-threads': 1.2.0 + '@emnapi/wasi-threads': 1.2.1 tslib: 2.8.1 optional: true @@ -9300,7 +9424,12 @@ snapshots: tslib: 2.8.1 optional: true - '@emnapi/wasi-threads@1.2.0': + '@emnapi/runtime@1.9.2': + dependencies: + tslib: 2.8.1 + optional: true + + '@emnapi/wasi-threads@1.2.1': dependencies: tslib: 2.8.1 optional: true @@ -9310,11 +9439,19 @@ snapshots: '@es-joy/jsdoccomment@0.84.0': dependencies: '@types/estree': 1.0.8 - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 comment-parser: 1.4.5 esquery: 1.7.0 jsdoc-type-pratt-parser: 7.1.1 + '@es-joy/jsdoccomment@0.86.0': + dependencies: + '@types/estree': 1.0.8 + '@typescript-eslint/types': 8.58.2 + comment-parser: 1.4.6 + esquery: 1.7.0 + jsdoc-type-pratt-parser: 7.2.0 + '@es-joy/resolve.exports@1.2.0': {} '@esbuild/aix-ppc64@0.27.2': @@ -9415,9 +9552,9 @@ snapshots: '@eslint-react/ast@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) string-ts: 2.3.1 typescript: 6.0.2 @@ -9429,9 +9566,9 @@ snapshots: '@eslint-react/ast': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -9441,10 +9578,10 @@ snapshots: '@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) eslint-plugin-react-dom: 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-react-naming-convention: 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) @@ -9458,7 +9595,7 @@ snapshots: '@eslint-react/shared@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -9470,9 +9607,9 @@ snapshots: dependencies: '@eslint-react/ast': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -9523,9 +9660,9 @@ snapshots: dependencies: '@types/json-schema': 7.0.15 - '@eslint/css-tree@3.6.9': + '@eslint/css-tree@4.0.1': dependencies: - mdn-data: 2.23.0 + mdn-data: 2.27.1 source-map-js: 1.2.1 '@eslint/eslintrc@3.3.5': @@ -9611,53 +9748,53 @@ snapshots: '@floating-ui/core': 1.7.5 '@floating-ui/utils': 0.2.11 - '@floating-ui/react-dom@2.1.8(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@floating-ui/react-dom@2.1.8(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@floating-ui/dom': 1.7.6 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - '@floating-ui/react@0.26.28(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@floating-ui/react@0.26.28(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@floating-ui/react-dom': 2.1.8(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@floating-ui/react-dom': 2.1.8(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@floating-ui/utils': 0.2.11 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) tabbable: 6.4.0 - '@floating-ui/react@0.27.19(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@floating-ui/react@0.27.19(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@floating-ui/react-dom': 2.1.8(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@floating-ui/react-dom': 2.1.8(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@floating-ui/utils': 0.2.11 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) tabbable: 6.4.0 '@floating-ui/utils@0.2.11': {} - '@formatjs/fast-memoize@3.1.1': {} + '@formatjs/fast-memoize@3.1.2': {} - '@formatjs/intl-localematcher@0.8.2': + '@formatjs/intl-localematcher@0.8.3': dependencies: - '@formatjs/fast-memoize': 3.1.1 + '@formatjs/fast-memoize': 3.1.2 - '@headlessui/react@2.2.10(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@headlessui/react@2.2.10(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@floating-ui/react': 0.26.28(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@react-aria/focus': 3.21.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@react-aria/interactions': 3.27.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@tanstack/react-virtual': 3.13.23(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - use-sync-external-store: 1.6.0(react@19.2.4) + '@floating-ui/react': 0.26.28(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@react-aria/focus': 3.21.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@react-aria/interactions': 3.27.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@tanstack/react-virtual': 3.13.23(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + use-sync-external-store: 1.6.0(react@19.2.5) - '@heroicons/react@2.2.0(react@19.2.4)': + '@heroicons/react@2.2.0(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 - '@hono/node-server@1.19.13(hono@4.12.12)': + '@hono/node-server@1.19.14(hono@4.12.14)': dependencies: - hono: 4.12.12 + hono: 4.12.14 '@humanfs/core@0.19.1': {} @@ -9813,11 +9950,11 @@ snapshots: dependencies: minipass: 7.1.3 - '@joshwooding/vite-plugin-react-docgen-typescript@0.7.0(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)': + '@joshwooding/vite-plugin-react-docgen-typescript@0.7.0(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)': dependencies: glob: 13.0.6 react-docgen-typescript: 2.4.0(typescript@6.0.2) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' optionalDependencies: typescript: 6.0.2 @@ -9845,169 +9982,169 @@ snapshots: '@jridgewell/resolve-uri': 3.1.2 '@jridgewell/sourcemap-codec': 1.5.5 - '@lexical/clipboard@0.42.0': + '@lexical/clipboard@0.43.0': dependencies: - '@lexical/html': 0.42.0 - '@lexical/list': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/html': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/code-core@0.42.0': + '@lexical/code-core@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/devtools-core@0.42.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@lexical/devtools-core@0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@lexical/html': 0.42.0 - '@lexical/link': 0.42.0 - '@lexical/mark': 0.42.0 - '@lexical/table': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@lexical/html': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/mark': 0.43.0 + '@lexical/table': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - '@lexical/dragon@0.42.0': + '@lexical/dragon@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + lexical: 0.43.0 - '@lexical/extension@0.42.0': + '@lexical/extension@0.43.0': dependencies: - '@lexical/utils': 0.42.0 - '@preact/signals-core': 1.14.0 - lexical: 0.42.0 + '@lexical/utils': 0.43.0 + '@preact/signals-core': 1.14.1 + lexical: 0.43.0 - '@lexical/hashtag@0.42.0': + '@lexical/hashtag@0.43.0': dependencies: - '@lexical/text': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/history@0.42.0': + '@lexical/history@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/html@0.42.0': + '@lexical/html@0.43.0': dependencies: - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/link@0.42.0': + '@lexical/link@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/list@0.42.0': + '@lexical/list@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/mark@0.42.0': + '@lexical/mark@0.43.0': dependencies: - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/markdown@0.42.0': + '@lexical/markdown@0.43.0': dependencies: - '@lexical/code-core': 0.42.0 - '@lexical/link': 0.42.0 - '@lexical/list': 0.42.0 - '@lexical/rich-text': 0.42.0 - '@lexical/text': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/code-core': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/rich-text': 0.43.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/offset@0.42.0': + '@lexical/offset@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/overflow@0.42.0': + '@lexical/overflow@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/plain-text@0.42.0': + '@lexical/plain-text@0.43.0': dependencies: - '@lexical/clipboard': 0.42.0 - '@lexical/dragon': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/clipboard': 0.43.0 + '@lexical/dragon': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/react@0.42.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(yjs@13.6.30)': + '@lexical/react@0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(yjs@13.6.30)': dependencies: - '@floating-ui/react': 0.27.19(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@lexical/devtools-core': 0.42.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@lexical/dragon': 0.42.0 - '@lexical/extension': 0.42.0 - '@lexical/hashtag': 0.42.0 - '@lexical/history': 0.42.0 - '@lexical/link': 0.42.0 - '@lexical/list': 0.42.0 - '@lexical/mark': 0.42.0 - '@lexical/markdown': 0.42.0 - '@lexical/overflow': 0.42.0 - '@lexical/plain-text': 0.42.0 - '@lexical/rich-text': 0.42.0 - '@lexical/table': 0.42.0 - '@lexical/text': 0.42.0 - '@lexical/utils': 0.42.0 - '@lexical/yjs': 0.42.0(yjs@13.6.30) - lexical: 0.42.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - react-error-boundary: 6.1.1(react@19.2.4) + '@floating-ui/react': 0.27.19(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@lexical/devtools-core': 0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@lexical/dragon': 0.43.0 + '@lexical/extension': 0.43.0 + '@lexical/hashtag': 0.43.0 + '@lexical/history': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/mark': 0.43.0 + '@lexical/markdown': 0.43.0 + '@lexical/overflow': 0.43.0 + '@lexical/plain-text': 0.43.0 + '@lexical/rich-text': 0.43.0 + '@lexical/table': 0.43.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + '@lexical/yjs': 0.43.0(yjs@13.6.30) + lexical: 0.43.0 + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + react-error-boundary: 6.1.1(react@19.2.5) transitivePeerDependencies: - yjs - '@lexical/rich-text@0.42.0': + '@lexical/rich-text@0.43.0': dependencies: - '@lexical/clipboard': 0.42.0 - '@lexical/dragon': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/clipboard': 0.43.0 + '@lexical/dragon': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/selection@0.42.0': + '@lexical/selection@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/table@0.42.0': + '@lexical/table@0.43.0': dependencies: - '@lexical/clipboard': 0.42.0 - '@lexical/extension': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/clipboard': 0.43.0 + '@lexical/extension': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/text@0.42.0': + '@lexical/text@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/utils@0.42.0': + '@lexical/utils@0.43.0': dependencies: - '@lexical/selection': 0.42.0 - lexical: 0.42.0 + '@lexical/selection': 0.43.0 + lexical: 0.43.0 - '@lexical/yjs@0.42.0(yjs@13.6.30)': + '@lexical/yjs@0.43.0(yjs@13.6.30)': dependencies: - '@lexical/offset': 0.42.0 - '@lexical/selection': 0.42.0 - lexical: 0.42.0 + '@lexical/offset': 0.43.0 + '@lexical/selection': 0.43.0 + lexical: 0.43.0 yjs: 13.6.30 - '@mdx-js/loader@3.1.1(webpack@5.105.4(uglify-js@3.19.3))': + '@mdx-js/loader@3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: '@mdx-js/mdx': 3.1.1 source-map: 0.7.6 optionalDependencies: - webpack: 5.105.4(uglify-js@3.19.3) + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) transitivePeerDependencies: - supports-color @@ -10041,11 +10178,11 @@ snapshots: transitivePeerDependencies: - supports-color - '@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.4)': + '@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5)': dependencies: '@types/mdx': 2.0.13 '@types/react': 19.2.14 - react: 19.2.4 + react: 19.2.5 '@mdx-js/rollup@3.1.1(rollup@4.59.0)': dependencies: @@ -10065,17 +10202,17 @@ snapshots: dependencies: state-local: 1.0.7 - '@monaco-editor/react@4.7.0(monaco-editor@0.55.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@monaco-editor/react@4.7.0(monaco-editor@0.55.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@monaco-editor/loader': 1.7.0 monaco-editor: 0.55.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - '@napi-rs/wasm-runtime@1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': + '@napi-rs/wasm-runtime@1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': dependencies: - '@emnapi/core': 1.9.1 - '@emnapi/runtime': 1.9.1 + '@emnapi/core': 1.9.2 + '@emnapi/runtime': 1.9.2 '@tybys/wasm-util': 0.10.1 optional: true @@ -10083,41 +10220,41 @@ snapshots: '@next/env@16.0.0': {} - '@next/env@16.2.2': {} + '@next/env@16.2.3': {} - '@next/eslint-plugin-next@16.2.2': + '@next/eslint-plugin-next@16.2.3': dependencies: fast-glob: 3.3.1 - '@next/mdx@16.2.2(@mdx-js/loader@3.1.1(webpack@5.105.4(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.4))': + '@next/mdx@16.2.3(@mdx-js/loader@3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5))': dependencies: source-map: 0.7.6 optionalDependencies: - '@mdx-js/loader': 3.1.1(webpack@5.105.4(uglify-js@3.19.3)) - '@mdx-js/react': 3.1.1(@types/react@19.2.14)(react@19.2.4) + '@mdx-js/loader': 3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) + '@mdx-js/react': 3.1.1(@types/react@19.2.14)(react@19.2.5) - '@next/swc-darwin-arm64@16.2.2': + '@next/swc-darwin-arm64@16.2.3': optional: true - '@next/swc-darwin-x64@16.2.2': + '@next/swc-darwin-x64@16.2.3': optional: true - '@next/swc-linux-arm64-gnu@16.2.2': + '@next/swc-linux-arm64-gnu@16.2.3': optional: true - '@next/swc-linux-arm64-musl@16.2.2': + '@next/swc-linux-arm64-musl@16.2.3': optional: true - '@next/swc-linux-x64-gnu@16.2.2': + '@next/swc-linux-x64-gnu@16.2.3': optional: true - '@next/swc-linux-x64-musl@16.2.2': + '@next/swc-linux-x64-musl@16.2.3': optional: true - '@next/swc-win32-arm64-msvc@16.2.2': + '@next/swc-win32-arm64-msvc@16.2.3': optional: true - '@next/swc-win32-x64-msvc@16.2.2': + '@next/swc-win32-x64-msvc@16.2.3': optional: true '@nodelib/fs.scandir@2.1.5': @@ -10138,63 +10275,63 @@ snapshots: '@nolyfill/side-channel@1.0.44': {} - '@orpc/client@1.13.13': + '@orpc/client@1.13.14': dependencies: - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 - '@orpc/standard-server-fetch': 1.13.13 - '@orpc/standard-server-peer': 1.13.13 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 + '@orpc/standard-server-fetch': 1.13.14 + '@orpc/standard-server-peer': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/contract@1.13.13': + '@orpc/contract@1.13.14': dependencies: - '@orpc/client': 1.13.13 - '@orpc/shared': 1.13.13 + '@orpc/client': 1.13.14 + '@orpc/shared': 1.13.14 '@standard-schema/spec': 1.1.0 openapi-types: 12.1.3 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/openapi-client@1.13.13': + '@orpc/openapi-client@1.13.14': dependencies: - '@orpc/client': 1.13.13 - '@orpc/contract': 1.13.13 - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 + '@orpc/client': 1.13.14 + '@orpc/contract': 1.13.14 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/shared@1.13.13': + '@orpc/shared@1.13.14': dependencies: radash: 12.1.1 type-fest: 5.5.0 - '@orpc/standard-server-fetch@1.13.13': + '@orpc/standard-server-fetch@1.13.14': dependencies: - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/standard-server-peer@1.13.13': + '@orpc/standard-server-peer@1.13.14': dependencies: - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/standard-server@1.13.13': + '@orpc/standard-server@1.13.14': dependencies: - '@orpc/shared': 1.13.13 + '@orpc/shared': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/tanstack-query@1.13.13(@orpc/client@1.13.13)(@tanstack/query-core@5.96.2)': + '@orpc/tanstack-query@1.13.14(@orpc/client@1.13.14)(@tanstack/query-core@5.99.0)': dependencies: - '@orpc/client': 1.13.13 - '@orpc/shared': 1.13.13 - '@tanstack/query-core': 5.96.2 + '@orpc/client': 1.13.14 + '@orpc/shared': 1.13.14 + '@tanstack/query-core': 5.99.0 transitivePeerDependencies: - '@opentelemetry/api' @@ -10248,9 +10385,9 @@ snapshots: '@oxc-parser/binding-openharmony-arm64@0.121.0': optional: true - '@oxc-parser/binding-wasm32-wasi@0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': + '@oxc-parser/binding-wasm32-wasi@0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': dependencies: - '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) transitivePeerDependencies: - '@emnapi/core' - '@emnapi/runtime' @@ -10265,13 +10402,11 @@ snapshots: '@oxc-parser/binding-win32-x64-msvc@0.121.0': optional: true - '@oxc-project/runtime@0.123.0': {} + '@oxc-project/runtime@0.124.0': {} '@oxc-project/types@0.121.0': {} - '@oxc-project/types@0.122.0': {} - - '@oxc-project/types@0.123.0': {} + '@oxc-project/types@0.124.0': {} '@oxc-resolver/binding-android-arm-eabi@11.19.1': optional: true @@ -10321,9 +10456,9 @@ snapshots: '@oxc-resolver/binding-openharmony-arm64@11.19.1': optional: true - '@oxc-resolver/binding-wasm32-wasi@11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': + '@oxc-resolver/binding-wasm32-wasi@11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': dependencies: - '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) transitivePeerDependencies: - '@emnapi/core' - '@emnapi/runtime' @@ -10338,61 +10473,61 @@ snapshots: '@oxc-resolver/binding-win32-x64-msvc@11.19.1': optional: true - '@oxfmt/binding-android-arm-eabi@0.43.0': + '@oxfmt/binding-android-arm-eabi@0.45.0': optional: true - '@oxfmt/binding-android-arm64@0.43.0': + '@oxfmt/binding-android-arm64@0.45.0': optional: true - '@oxfmt/binding-darwin-arm64@0.43.0': + '@oxfmt/binding-darwin-arm64@0.45.0': optional: true - '@oxfmt/binding-darwin-x64@0.43.0': + '@oxfmt/binding-darwin-x64@0.45.0': optional: true - '@oxfmt/binding-freebsd-x64@0.43.0': + '@oxfmt/binding-freebsd-x64@0.45.0': optional: true - '@oxfmt/binding-linux-arm-gnueabihf@0.43.0': + '@oxfmt/binding-linux-arm-gnueabihf@0.45.0': optional: true - '@oxfmt/binding-linux-arm-musleabihf@0.43.0': + '@oxfmt/binding-linux-arm-musleabihf@0.45.0': optional: true - '@oxfmt/binding-linux-arm64-gnu@0.43.0': + '@oxfmt/binding-linux-arm64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-arm64-musl@0.43.0': + '@oxfmt/binding-linux-arm64-musl@0.45.0': optional: true - '@oxfmt/binding-linux-ppc64-gnu@0.43.0': + '@oxfmt/binding-linux-ppc64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-riscv64-gnu@0.43.0': + '@oxfmt/binding-linux-riscv64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-riscv64-musl@0.43.0': + '@oxfmt/binding-linux-riscv64-musl@0.45.0': optional: true - '@oxfmt/binding-linux-s390x-gnu@0.43.0': + '@oxfmt/binding-linux-s390x-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-x64-gnu@0.43.0': + '@oxfmt/binding-linux-x64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-x64-musl@0.43.0': + '@oxfmt/binding-linux-x64-musl@0.45.0': optional: true - '@oxfmt/binding-openharmony-arm64@0.43.0': + '@oxfmt/binding-openharmony-arm64@0.45.0': optional: true - '@oxfmt/binding-win32-arm64-msvc@0.43.0': + '@oxfmt/binding-win32-arm64-msvc@0.45.0': optional: true - '@oxfmt/binding-win32-ia32-msvc@0.43.0': + '@oxfmt/binding-win32-ia32-msvc@0.45.0': optional: true - '@oxfmt/binding-win32-x64-msvc@0.43.0': + '@oxfmt/binding-win32-x64-msvc@0.45.0': optional: true '@oxlint-tsgolint/darwin-arm64@0.20.0': @@ -10413,61 +10548,61 @@ snapshots: '@oxlint-tsgolint/win32-x64@0.20.0': optional: true - '@oxlint/binding-android-arm-eabi@1.58.0': + '@oxlint/binding-android-arm-eabi@1.60.0': optional: true - '@oxlint/binding-android-arm64@1.58.0': + '@oxlint/binding-android-arm64@1.60.0': optional: true - '@oxlint/binding-darwin-arm64@1.58.0': + '@oxlint/binding-darwin-arm64@1.60.0': optional: true - '@oxlint/binding-darwin-x64@1.58.0': + '@oxlint/binding-darwin-x64@1.60.0': optional: true - '@oxlint/binding-freebsd-x64@1.58.0': + '@oxlint/binding-freebsd-x64@1.60.0': optional: true - '@oxlint/binding-linux-arm-gnueabihf@1.58.0': + '@oxlint/binding-linux-arm-gnueabihf@1.60.0': optional: true - '@oxlint/binding-linux-arm-musleabihf@1.58.0': + '@oxlint/binding-linux-arm-musleabihf@1.60.0': optional: true - '@oxlint/binding-linux-arm64-gnu@1.58.0': + '@oxlint/binding-linux-arm64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-arm64-musl@1.58.0': + '@oxlint/binding-linux-arm64-musl@1.60.0': optional: true - '@oxlint/binding-linux-ppc64-gnu@1.58.0': + '@oxlint/binding-linux-ppc64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-riscv64-gnu@1.58.0': + '@oxlint/binding-linux-riscv64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-riscv64-musl@1.58.0': + '@oxlint/binding-linux-riscv64-musl@1.60.0': optional: true - '@oxlint/binding-linux-s390x-gnu@1.58.0': + '@oxlint/binding-linux-s390x-gnu@1.60.0': optional: true - '@oxlint/binding-linux-x64-gnu@1.58.0': + '@oxlint/binding-linux-x64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-x64-musl@1.58.0': + '@oxlint/binding-linux-x64-musl@1.60.0': optional: true - '@oxlint/binding-openharmony-arm64@1.58.0': + '@oxlint/binding-openharmony-arm64@1.60.0': optional: true - '@oxlint/binding-win32-arm64-msvc@1.58.0': + '@oxlint/binding-win32-arm64-msvc@1.60.0': optional: true - '@oxlint/binding-win32-ia32-msvc@1.58.0': + '@oxlint/binding-win32-ia32-msvc@1.60.0': optional: true - '@oxlint/binding-win32-x64-msvc@1.58.0': + '@oxlint/binding-win32-x64-msvc@1.60.0': optional: true '@parcel/watcher-android-arm64@2.5.6': @@ -10539,239 +10674,239 @@ snapshots: '@polka/url@1.0.0-next.29': {} - '@preact/signals-core@1.14.0': {} + '@preact/signals-core@1.14.1': {} '@radix-ui/primitive@1.1.3': {} - '@radix-ui/react-compose-refs@1.1.2(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-compose-refs@1.1.2(@types/react@19.2.14)(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-context@1.1.2(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-context@1.1.2(@types/react@19.2.14)(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-dialog@1.1.15(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-dialog@1.1.15(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@radix-ui/primitive': 1.1.3 - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-context': 1.1.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-dismissable-layer': 1.1.11(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-focus-guards': 1.1.3(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-focus-scope': 1.1.7(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-id': 1.1.1(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-portal': 1.1.9(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-presence': 1.1.5(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-slot': 1.2.3(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-use-controllable-state': 1.2.2(@types/react@19.2.14)(react@19.2.4) + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-context': 1.1.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-dismissable-layer': 1.1.11(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-focus-guards': 1.1.3(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-focus-scope': 1.1.7(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-id': 1.1.1(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-portal': 1.1.9(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-presence': 1.1.5(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-slot': 1.2.3(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-use-controllable-state': 1.2.2(@types/react@19.2.14)(react@19.2.5) aria-hidden: 1.2.6 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - react-remove-scroll: 2.7.2(@types/react@19.2.14)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + react-remove-scroll: 2.7.2(@types/react@19.2.14)(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-dismissable-layer@1.1.11(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-dismissable-layer@1.1.11(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@radix-ui/primitive': 1.1.3 - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-use-callback-ref': 1.1.1(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-use-escape-keydown': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-use-callback-ref': 1.1.1(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-use-escape-keydown': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-focus-guards@1.1.3(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-focus-guards@1.1.3(@types/react@19.2.14)(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-focus-scope@1.1.7(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-focus-scope@1.1.7(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-use-callback-ref': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-use-callback-ref': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-id@1.1.1(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-id@1.1.1(@types/react@19.2.14)(react@19.2.5)': dependencies: - '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 + '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-portal@1.1.9(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-portal@1.1.9(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-primitive': 2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-presence@1.1.5(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-presence@1.1.5(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-primitive@2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-primitive@2.1.3(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@radix-ui/react-slot': 1.2.3(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-slot': 1.2.3(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-primitive@2.1.4(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@radix-ui/react-primitive@2.1.4(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@radix-ui/react-slot': 1.2.4(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-slot': 1.2.4(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - '@radix-ui/react-slot@1.2.3(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-slot@1.2.3(@types/react@19.2.14)(react@19.2.5)': dependencies: - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-slot@1.2.4(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-slot@1.2.4(@types/react@19.2.14)(react@19.2.5)': dependencies: - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-use-callback-ref@1.1.1(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-use-callback-ref@1.1.1(@types/react@19.2.14)(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-use-controllable-state@1.2.2(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-use-controllable-state@1.2.2(@types/react@19.2.14)(react@19.2.5)': dependencies: - '@radix-ui/react-use-effect-event': 0.0.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 + '@radix-ui/react-use-effect-event': 0.0.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-use-effect-event@0.0.2(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-use-effect-event@0.0.2(@types/react@19.2.14)(react@19.2.5)': dependencies: - '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 + '@radix-ui/react-use-layout-effect': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-use-escape-keydown@1.1.1(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-use-escape-keydown@1.1.1(@types/react@19.2.14)(react@19.2.5)': dependencies: - '@radix-ui/react-use-callback-ref': 1.1.1(@types/react@19.2.14)(react@19.2.4) - react: 19.2.4 + '@radix-ui/react-use-callback-ref': 1.1.1(@types/react@19.2.14)(react@19.2.5) + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@radix-ui/react-use-layout-effect@1.1.1(@types/react@19.2.14)(react@19.2.4)': + '@radix-ui/react-use-layout-effect@1.1.1(@types/react@19.2.14)(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - '@react-aria/focus@3.21.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@react-aria/focus@3.21.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@react-aria/interactions': 3.27.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@react-aria/utils': 3.33.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@react-types/shared': 3.33.1(react@19.2.4) + '@react-aria/interactions': 3.27.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@react-aria/utils': 3.33.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@react-types/shared': 3.33.1(react@19.2.5) '@swc/helpers': 0.5.20 clsx: 2.1.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - '@react-aria/interactions@3.27.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@react-aria/interactions@3.27.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@react-aria/ssr': 3.9.10(react@19.2.4) - '@react-aria/utils': 3.33.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@react-aria/ssr': 3.9.10(react@19.2.5) + '@react-aria/utils': 3.33.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@react-stately/flags': 3.1.2 - '@react-types/shared': 3.33.1(react@19.2.4) + '@react-types/shared': 3.33.1(react@19.2.5) '@swc/helpers': 0.5.20 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - '@react-aria/ssr@3.9.10(react@19.2.4)': + '@react-aria/ssr@3.9.10(react@19.2.5)': dependencies: '@swc/helpers': 0.5.20 - react: 19.2.4 + react: 19.2.5 - '@react-aria/utils@3.33.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@react-aria/utils@3.33.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@react-aria/ssr': 3.9.10(react@19.2.4) + '@react-aria/ssr': 3.9.10(react@19.2.5) '@react-stately/flags': 3.1.2 - '@react-stately/utils': 3.11.0(react@19.2.4) - '@react-types/shared': 3.33.1(react@19.2.4) + '@react-stately/utils': 3.11.0(react@19.2.5) + '@react-types/shared': 3.33.1(react@19.2.5) '@swc/helpers': 0.5.20 clsx: 2.1.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) '@react-stately/flags@3.1.2': dependencies: '@swc/helpers': 0.5.20 - '@react-stately/utils@3.11.0(react@19.2.4)': + '@react-stately/utils@3.11.0(react@19.2.5)': dependencies: '@swc/helpers': 0.5.20 - react: 19.2.4 + react: 19.2.5 - '@react-types/shared@3.33.1(react@19.2.4)': + '@react-types/shared@3.33.1(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 - '@reactflow/background@11.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@reactflow/background@11.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) classcat: 5.0.5 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer - '@reactflow/controls@11.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@reactflow/controls@11.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) classcat: 5.0.5 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer - '@reactflow/core@11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@reactflow/core@11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@types/d3': 7.4.3 '@types/d3-drag': 3.0.7 @@ -10781,113 +10916,61 @@ snapshots: d3-drag: 3.0.0 d3-selection: 3.0.0 d3-zoom: 3.0.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer - '@reactflow/minimap@11.7.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@reactflow/minimap@11.7.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@types/d3-selection': 3.0.11 '@types/d3-zoom': 3.0.8 classcat: 5.0.5 d3-selection: 3.0.0 d3-zoom: 3.0.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer - '@reactflow/node-resizer@2.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@reactflow/node-resizer@2.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) classcat: 5.0.5 d3-drag: 3.0.0 d3-selection: 3.0.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer - '@reactflow/node-toolbar@1.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@reactflow/node-toolbar@1.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) classcat: 5.0.5 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + zustand: 4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer - '@remixicon/react@4.9.0(react@19.2.4)': + '@remixicon/react@4.9.0(react@19.2.5)': dependencies: - react: 19.2.4 + react: 19.2.5 '@resvg/resvg-wasm@2.4.0': {} '@rgrove/parse-xml@4.2.0': {} - '@rolldown/binding-android-arm64@1.0.0-rc.12': - optional: true - - '@rolldown/binding-darwin-arm64@1.0.0-rc.12': - optional: true - - '@rolldown/binding-darwin-x64@1.0.0-rc.12': - optional: true - - '@rolldown/binding-freebsd-x64@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-arm64-gnu@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-arm64-musl@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-s390x-gnu@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-x64-gnu@1.0.0-rc.12': - optional: true - - '@rolldown/binding-linux-x64-musl@1.0.0-rc.12': - optional: true - - '@rolldown/binding-openharmony-arm64@1.0.0-rc.12': - optional: true - - '@rolldown/binding-wasm32-wasi@1.0.0-rc.12(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': - dependencies: - '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) - transitivePeerDependencies: - - '@emnapi/core' - - '@emnapi/runtime' - optional: true - - '@rolldown/binding-win32-arm64-msvc@1.0.0-rc.12': - optional: true - - '@rolldown/binding-win32-x64-msvc@1.0.0-rc.12': - optional: true - - '@rolldown/pluginutils@1.0.0-rc.12': {} - - '@rolldown/pluginutils@1.0.0-rc.13': {} + '@rolldown/pluginutils@1.0.0-rc.15': {} '@rolldown/pluginutils@1.0.0-rc.7': {} @@ -10981,39 +11064,39 @@ snapshots: '@rollup/rollup-win32-x64-msvc@4.59.0': optional: true - '@sentry-internal/browser-utils@10.47.0': + '@sentry-internal/browser-utils@10.48.0': dependencies: - '@sentry/core': 10.47.0 + '@sentry/core': 10.48.0 - '@sentry-internal/feedback@10.47.0': + '@sentry-internal/feedback@10.48.0': dependencies: - '@sentry/core': 10.47.0 + '@sentry/core': 10.48.0 - '@sentry-internal/replay-canvas@10.47.0': + '@sentry-internal/replay-canvas@10.48.0': dependencies: - '@sentry-internal/replay': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry-internal/replay': 10.48.0 + '@sentry/core': 10.48.0 - '@sentry-internal/replay@10.47.0': + '@sentry-internal/replay@10.48.0': dependencies: - '@sentry-internal/browser-utils': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry-internal/browser-utils': 10.48.0 + '@sentry/core': 10.48.0 - '@sentry/browser@10.47.0': + '@sentry/browser@10.48.0': dependencies: - '@sentry-internal/browser-utils': 10.47.0 - '@sentry-internal/feedback': 10.47.0 - '@sentry-internal/replay': 10.47.0 - '@sentry-internal/replay-canvas': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry-internal/browser-utils': 10.48.0 + '@sentry-internal/feedback': 10.48.0 + '@sentry-internal/replay': 10.48.0 + '@sentry-internal/replay-canvas': 10.48.0 + '@sentry/core': 10.48.0 - '@sentry/core@10.47.0': {} + '@sentry/core@10.48.0': {} - '@sentry/react@10.47.0(react@19.2.4)': + '@sentry/react@10.48.0(react@19.2.5)': dependencies: - '@sentry/browser': 10.47.0 - '@sentry/core': 10.47.0 - react: 19.2.4 + '@sentry/browser': 10.48.0 + '@sentry/core': 10.48.0 + react: 19.2.5 '@shikijs/core@4.0.2': dependencies: @@ -11062,6 +11145,8 @@ snapshots: '@sindresorhus/base62@1.0.0': {} + '@socket.io/component-emitter@3.1.2': {} + '@solid-primitives/event-listener@2.4.5(solid-js@1.9.11)': dependencies: '@solid-primitives/utils': 6.4.0(solid-js@1.9.11) @@ -11100,15 +11185,15 @@ snapshots: '@standard-schema/spec@1.1.0': {} - '@storybook/addon-docs@10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/addon-docs@10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@mdx-js/react': 3.1.1(@types/react@19.2.14)(react@19.2.4) - '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3)) - '@storybook/icons': 2.0.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@storybook/react-dom-shim': 10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@mdx-js/react': 3.1.1(@types/react@19.2.14)(react@19.2.5) + '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) + '@storybook/icons': 2.0.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@storybook/react-dom-shim': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 transitivePeerDependencies: - '@types/react' @@ -11117,61 +11202,62 @@ snapshots: - vite - webpack - '@storybook/addon-links@10.3.5(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))': + '@storybook/addon-links@10.3.5(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': dependencies: '@storybook/global': 5.0.0 - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) optionalDependencies: - react: 19.2.4 + react: 19.2.5 - '@storybook/addon-onboarding@10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))': + '@storybook/addon-onboarding@10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': dependencies: - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) - '@storybook/addon-themes@10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))': + '@storybook/addon-themes@10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': dependencies: - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 - '@storybook/builder-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/builder-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3)) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - esbuild - rollup - webpack - '@storybook/csf-plugin@10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/csf-plugin@10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) unplugin: 2.3.11 optionalDependencies: + esbuild: 0.27.2 rollup: 4.59.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - webpack: 5.105.4(uglify-js@3.19.3) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) '@storybook/global@5.0.0': {} - '@storybook/icons@2.0.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@storybook/icons@2.0.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - '@storybook/nextjs-vite@10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/nextjs-vite@10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3)) - '@storybook/react': 10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2) - '@storybook/react-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3)) - next: 16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - styled-jsx: 5.1.6(@babel/core@7.29.0)(react@19.2.4) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vite-plugin-storybook-nextjs: 3.2.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2) + '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) + '@storybook/react': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) + '@storybook/react-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) + next: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + styled-jsx: 5.1.6(@babel/core@7.29.0)(react@19.2.5) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite-plugin-storybook-nextjs: 3.2.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) optionalDependencies: typescript: 6.0.2 transitivePeerDependencies: @@ -11182,27 +11268,27 @@ snapshots: - supports-color - webpack - '@storybook/react-dom-shim@10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))': + '@storybook/react-dom-shim@10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': dependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) - '@storybook/react-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/react-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@joshwooding/vite-plugin-react-docgen-typescript': 0.7.0(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) + '@joshwooding/vite-plugin-react-docgen-typescript': 0.7.0(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) '@rollup/pluginutils': 5.3.0(rollup@4.59.0) - '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(webpack@5.105.4(uglify-js@3.19.3)) - '@storybook/react': 10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2) + '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) + '@storybook/react': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) empathic: 2.0.0 magic-string: 0.30.21 - react: 19.2.4 + react: 19.2.5 react-docgen: 8.0.3 - react-dom: 19.2.4(react@19.2.4) + react-dom: 19.2.5(react@19.2.5) resolve: 1.22.11 - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) tsconfig-paths: 4.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - esbuild - rollup @@ -11210,24 +11296,24 @@ snapshots: - typescript - webpack - '@storybook/react@10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2)': + '@storybook/react@10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)': dependencies: '@storybook/global': 5.0.0 - '@storybook/react-dom-shim': 10.3.5(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)) - react: 19.2.4 + '@storybook/react-dom-shim': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) + react: 19.2.5 react-docgen: 8.0.3 react-docgen-typescript: 2.4.0(typescript@6.0.2) - react-dom: 19.2.4(react@19.2.4) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + react-dom: 19.2.5(react@19.2.5) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) optionalDependencies: typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@streamdown/math@1.0.2(react@19.2.4)': + '@streamdown/math@1.0.2(react@19.2.5)': dependencies: katex: 0.16.45 - react: 19.2.4 + react: 19.2.5 rehype-katex: 7.0.1 remark-math: 6.0.0 transitivePeerDependencies: @@ -11236,7 +11322,7 @@ snapshots: '@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1))': dependencies: '@eslint-community/eslint-utils': 4.9.1(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 eslint: 10.2.0(jiti@2.6.1) eslint-visitor-keys: 4.2.1 espree: 10.4.0 @@ -11341,12 +11427,12 @@ snapshots: postcss-selector-parser: 6.0.10 tailwindcss: 4.2.2 - '@tailwindcss/vite@4.2.2(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + '@tailwindcss/vite@4.2.2(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': dependencies: '@tailwindcss/node': 4.2.2 '@tailwindcss/oxide': 4.2.2 tailwindcss: 4.2.2 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' '@tanstack/devtools-client@0.0.6': dependencies: @@ -11370,10 +11456,10 @@ snapshots: transitivePeerDependencies: - csstype - '@tanstack/devtools-utils@0.4.0(@types/react@19.2.14)(react@19.2.4)(solid-js@1.9.11)': + '@tanstack/devtools-utils@0.4.0(@types/react@19.2.14)(react@19.2.5)(solid-js@1.9.11)': optionalDependencies: '@types/react': 19.2.14 - react: 19.2.4 + react: 19.2.5 solid-js: 1.9.11 '@tanstack/devtools@0.11.2(csstype@3.2.3)(solid-js@1.9.11)': @@ -11392,26 +11478,26 @@ snapshots: - csstype - utf-8-validate - '@tanstack/eslint-plugin-query@5.96.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@tanstack/eslint-plugin-query@5.99.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) optionalDependencies: typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@tanstack/form-core@1.28.6': + '@tanstack/form-core@1.29.0': dependencies: '@tanstack/devtools-event-client': 0.4.3 '@tanstack/pacer-lite': 0.1.1 '@tanstack/store': 0.9.3 - '@tanstack/form-devtools@0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.4)(solid-js@1.9.11)': + '@tanstack/form-devtools@0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11)': dependencies: '@tanstack/devtools-ui': 0.5.1(csstype@3.2.3)(solid-js@1.9.11) - '@tanstack/devtools-utils': 0.4.0(@types/react@19.2.14)(react@19.2.4)(solid-js@1.9.11) - '@tanstack/form-core': 1.28.6 + '@tanstack/devtools-utils': 0.4.0(@types/react@19.2.14)(react@19.2.5)(solid-js@1.9.11) + '@tanstack/form-core': 1.29.0 clsx: 2.1.1 dayjs: 1.11.20 goober: 2.1.18(csstype@3.2.3) @@ -11425,28 +11511,28 @@ snapshots: '@tanstack/pacer-lite@0.1.1': {} - '@tanstack/query-core@5.96.2': {} + '@tanstack/query-core@5.99.0': {} - '@tanstack/query-devtools@5.96.2': {} + '@tanstack/query-devtools@5.99.0': {} - '@tanstack/react-devtools@0.10.2(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(csstype@3.2.3)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(solid-js@1.9.11)': + '@tanstack/react-devtools@0.10.2(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(csstype@3.2.3)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(solid-js@1.9.11)': dependencies: '@tanstack/devtools': 0.11.2(csstype@3.2.3)(solid-js@1.9.11) '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) transitivePeerDependencies: - bufferutil - csstype - solid-js - utf-8-validate - '@tanstack/react-form-devtools@0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.4)(solid-js@1.9.11)': + '@tanstack/react-form-devtools@0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11)': dependencies: - '@tanstack/devtools-utils': 0.4.0(@types/react@19.2.14)(react@19.2.4)(solid-js@1.9.11) - '@tanstack/form-devtools': 0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.4)(solid-js@1.9.11) - react: 19.2.4 + '@tanstack/devtools-utils': 0.4.0(@types/react@19.2.14)(react@19.2.5)(solid-js@1.9.11) + '@tanstack/form-devtools': 0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11) + react: 19.2.5 transitivePeerDependencies: - '@types/react' - csstype @@ -11454,37 +11540,37 @@ snapshots: - solid-js - vue - '@tanstack/react-form@1.28.6(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@tanstack/react-form@1.29.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@tanstack/form-core': 1.28.6 - '@tanstack/react-store': 0.9.3(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - react: 19.2.4 + '@tanstack/form-core': 1.29.0 + '@tanstack/react-store': 0.9.3(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + react: 19.2.5 transitivePeerDependencies: - react-dom - '@tanstack/react-query-devtools@5.96.2(@tanstack/react-query@5.96.2(react@19.2.4))(react@19.2.4)': + '@tanstack/react-query-devtools@5.99.0(@tanstack/react-query@5.99.0(react@19.2.5))(react@19.2.5)': dependencies: - '@tanstack/query-devtools': 5.96.2 - '@tanstack/react-query': 5.96.2(react@19.2.4) - react: 19.2.4 + '@tanstack/query-devtools': 5.99.0 + '@tanstack/react-query': 5.99.0(react@19.2.5) + react: 19.2.5 - '@tanstack/react-query@5.96.2(react@19.2.4)': + '@tanstack/react-query@5.99.0(react@19.2.5)': dependencies: - '@tanstack/query-core': 5.96.2 - react: 19.2.4 + '@tanstack/query-core': 5.99.0 + react: 19.2.5 - '@tanstack/react-store@0.9.3(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@tanstack/react-store@0.9.3(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@tanstack/store': 0.9.3 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - use-sync-external-store: 1.6.0(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + use-sync-external-store: 1.6.0(react@19.2.5) - '@tanstack/react-virtual@3.13.23(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@tanstack/react-virtual@3.13.23(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@tanstack/virtual-core': 3.13.23 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) '@tanstack/store@0.9.3': {} @@ -11512,12 +11598,12 @@ snapshots: picocolors: 1.1.1 redent: 3.0.0 - '@testing-library/react@16.3.2(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@testing-library/react@16.3.2(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@babel/runtime': 7.29.2 '@testing-library/dom': 10.4.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 '@types/react-dom': 19.2.3(@types/react@19.2.14) @@ -11526,11 +11612,11 @@ snapshots: dependencies: '@testing-library/dom': 10.4.1 - '@tsslint/cli@3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': + '@tsslint/cli@3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': dependencies: '@clack/prompts': 0.8.2 - '@tsslint/config': 3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) - '@tsslint/core': 3.0.2 + '@tsslint/config': 3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) + '@tsslint/core': 3.0.3 '@volar/language-core': 2.4.28 '@volar/language-hub': 0.0.1 '@volar/typescript': 2.4.28 @@ -11540,32 +11626,32 @@ snapshots: - '@tsslint/compat-eslint' - tsl - '@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2)': + '@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2)': dependencies: - '@tsslint/types': 3.0.2 - '@typescript-eslint/parser': 8.58.1(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2) + '@tsslint/types': 3.0.3 + '@typescript-eslint/parser': 8.58.2(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2) eslint: 9.27.0(jiti@2.6.1) transitivePeerDependencies: - jiti - supports-color - typescript - '@tsslint/config@3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': + '@tsslint/config@3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': dependencies: - '@tsslint/types': 3.0.2 + '@tsslint/types': 3.0.3 minimatch: 10.2.4 ts-api-utils: 2.5.0(typescript@6.0.2) optionalDependencies: - '@tsslint/compat-eslint': 3.0.2(jiti@2.6.1)(typescript@6.0.2) + '@tsslint/compat-eslint': 3.0.3(jiti@2.6.1)(typescript@6.0.2) transitivePeerDependencies: - typescript - '@tsslint/core@3.0.2': + '@tsslint/core@3.0.3': dependencies: - '@tsslint/types': 3.0.2 + '@tsslint/types': 3.0.3 minimatch: 10.2.4 - '@tsslint/types@3.0.2': {} + '@tsslint/types@3.0.3': {} '@tybys/wasm-util@0.10.1': dependencies: @@ -11769,15 +11855,15 @@ snapshots: '@types/negotiator@0.6.4': {} - '@types/node@25.5.2': + '@types/node@25.6.0': dependencies: - undici-types: 7.18.2 + undici-types: 7.19.2 '@types/normalize-package-data@2.4.4': {} '@types/papaparse@5.5.2': dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 '@types/qs@6.15.0': {} @@ -11804,23 +11890,23 @@ snapshots: '@types/ws@8.18.1': dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 '@types/yauzl@2.10.3': dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 optional: true '@types/zen-observable@0.8.3': {} - '@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: '@eslint-community/regexpp': 4.12.2 - '@typescript-eslint/parser': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/visitor-keys': 8.58.2 eslint: 10.2.0(jiti@2.6.1) ignore: 7.0.5 natural-compare: 1.4.0 @@ -11841,24 +11927,24 @@ snapshots: transitivePeerDependencies: - supports-color - '@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/visitor-keys': 8.58.2 debug: 4.4.3(supports-color@8.1.1) eslint: 10.2.0(jiti@2.6.1) typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@typescript-eslint/parser@8.58.1(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/parser@8.58.2(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/visitor-keys': 8.58.2 debug: 4.4.3(supports-color@8.1.1) eslint: 9.27.0(jiti@2.6.1) typescript: 6.0.2 @@ -11867,17 +11953,17 @@ snapshots: '@typescript-eslint/project-service@8.57.2(typescript@6.0.2)': dependencies: - '@typescript-eslint/tsconfig-utils': 8.58.1(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/tsconfig-utils': 8.58.2(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 debug: 4.4.3(supports-color@8.1.1) typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@typescript-eslint/project-service@8.58.1(typescript@6.0.2)': + '@typescript-eslint/project-service@8.58.2(typescript@6.0.2)': dependencies: - '@typescript-eslint/tsconfig-utils': 8.58.1(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/tsconfig-utils': 8.58.2(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 debug: 4.4.3(supports-color@8.1.1) typescript: 6.0.2 transitivePeerDependencies: @@ -11902,24 +11988,24 @@ snapshots: '@typescript-eslint/types': 8.57.2 '@typescript-eslint/visitor-keys': 8.57.2 - '@typescript-eslint/scope-manager@8.58.1': + '@typescript-eslint/scope-manager@8.58.2': dependencies: - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/visitor-keys': 8.58.2 '@typescript-eslint/tsconfig-utils@8.57.2(typescript@6.0.2)': dependencies: typescript: 6.0.2 - '@typescript-eslint/tsconfig-utils@8.58.1(typescript@6.0.2)': + '@typescript-eslint/tsconfig-utils@8.58.2(typescript@6.0.2)': dependencies: typescript: 6.0.2 - '@typescript-eslint/type-utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/type-utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) debug: 4.4.3(supports-color@8.1.1) eslint: 10.2.0(jiti@2.6.1) ts-api-utils: 2.5.0(typescript@6.0.2) @@ -11929,7 +12015,7 @@ snapshots: '@typescript-eslint/types@8.57.2': {} - '@typescript-eslint/types@8.58.1': {} + '@typescript-eslint/types@8.58.2': {} '@typescript-eslint/typescript-estree@8.57.2(typescript@6.0.2)': dependencies: @@ -11940,18 +12026,18 @@ snapshots: debug: 4.4.3(supports-color@8.1.1) minimatch: 10.2.4 semver: 7.7.4 - tinyglobby: 0.2.15 + tinyglobby: 0.2.16 ts-api-utils: 2.5.0(typescript@6.0.2) typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2)': + '@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2)': dependencies: - '@typescript-eslint/project-service': 8.58.1(typescript@6.0.2) - '@typescript-eslint/tsconfig-utils': 8.58.1(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/project-service': 8.58.2(typescript@6.0.2) + '@typescript-eslint/tsconfig-utils': 8.58.2(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/visitor-keys': 8.58.2 debug: 4.4.3(supports-color@8.1.1) minimatch: 10.2.4 semver: 7.7.4 @@ -11972,12 +12058,12 @@ snapshots: transitivePeerDependencies: - supports-color - '@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: '@eslint-community/eslint-utils': 4.9.1(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) typescript: 6.0.2 transitivePeerDependencies: @@ -11988,41 +12074,41 @@ snapshots: '@typescript-eslint/types': 8.57.2 eslint-visitor-keys: 5.0.1 - '@typescript-eslint/visitor-keys@8.58.1': + '@typescript-eslint/visitor-keys@8.58.2': dependencies: - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 eslint-visitor-keys: 5.0.1 - '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260407.1': + '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-darwin-x64@7.0.0-dev.20260407.1': + '@typescript/native-preview-darwin-x64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-linux-arm64@7.0.0-dev.20260407.1': + '@typescript/native-preview-linux-arm64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-linux-arm@7.0.0-dev.20260407.1': + '@typescript/native-preview-linux-arm@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-linux-x64@7.0.0-dev.20260407.1': + '@typescript/native-preview-linux-x64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-win32-arm64@7.0.0-dev.20260407.1': + '@typescript/native-preview-win32-arm64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-win32-x64@7.0.0-dev.20260407.1': + '@typescript/native-preview-win32-x64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview@7.0.0-dev.20260407.1': + '@typescript/native-preview@7.0.0-dev.20260413.1': optionalDependencies: - '@typescript/native-preview-darwin-arm64': 7.0.0-dev.20260407.1 - '@typescript/native-preview-darwin-x64': 7.0.0-dev.20260407.1 - '@typescript/native-preview-linux-arm': 7.0.0-dev.20260407.1 - '@typescript/native-preview-linux-arm64': 7.0.0-dev.20260407.1 - '@typescript/native-preview-linux-x64': 7.0.0-dev.20260407.1 - '@typescript/native-preview-win32-arm64': 7.0.0-dev.20260407.1 - '@typescript/native-preview-win32-x64': 7.0.0-dev.20260407.1 + '@typescript/native-preview-darwin-arm64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-darwin-x64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-linux-arm': 7.0.0-dev.20260413.1 + '@typescript/native-preview-linux-arm64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-linux-x64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-win32-arm64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-win32-x64': 7.0.0-dev.20260413.1 '@ungap/structured-clone@1.3.0': {} @@ -12030,13 +12116,13 @@ snapshots: dependencies: unpic: 4.2.2 - '@unpic/react@1.0.2(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)': + '@unpic/react@1.0.2(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@unpic/core': 1.0.3 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) optionalDependencies: - next: 16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0) + next: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) '@upsetjs/venn.js@2.0.0': optionalDependencies: @@ -12052,12 +12138,12 @@ snapshots: '@resvg/resvg-wasm': 2.4.0 satori: 0.16.0 - '@vitejs/devtools-kit@0.1.11(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0)': + '@vitejs/devtools-kit@0.1.11(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0)': dependencies: '@vitejs/devtools-rpc': 0.1.11(typescript@6.0.2)(ws@8.20.0) birpc: 4.0.0 ohash: 2.0.11 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - typescript - ws @@ -12074,31 +12160,31 @@ snapshots: transitivePeerDependencies: - typescript - '@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + '@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': dependencies: '@rolldown/pluginutils': 1.0.0-rc.7 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - '@vitejs/plugin-rsc@0.5.22(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4)': + '@vitejs/plugin-rsc@0.5.24(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.5)': dependencies: - '@rolldown/pluginutils': 1.0.0-rc.13 + '@rolldown/pluginutils': 1.0.0-rc.15 es-module-lexer: 2.0.0 estree-walker: 3.0.3 magic-string: 0.30.21 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) srvx: 0.11.15 strip-literal: 3.1.0 turbo-stream: 3.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vitefu: 1.1.3(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitefu: 1.1.3(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) optionalDependencies: - react-server-dom-webpack: 19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)) + react-server-dom-webpack: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) - '@vitest/coverage-v8@4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + '@vitest/coverage-v8@4.1.4(@voidzero-dev/vite-plus-test@0.1.18)': dependencies: '@bcoe/v8-coverage': 1.0.2 - '@vitest/utils': 4.1.3 + '@vitest/utils': 4.1.4 ast-v8-to-istanbul: 1.0.0 istanbul-lib-coverage: 3.2.2 istanbul-lib-report: 3.0.1 @@ -12107,12 +12193,12 @@ snapshots: obug: 2.1.1 std-env: 4.0.0 tinyrainbow: 3.1.0 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitest: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - '@vitest/coverage-v8@4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3))': + '@vitest/coverage-v8@4.1.4(vitest@4.1.4)': dependencies: '@bcoe/v8-coverage': 1.0.2 - '@vitest/utils': 4.1.3 + '@vitest/utils': 4.1.4 ast-v8-to-istanbul: 1.0.0 istanbul-lib-coverage: 3.2.2 istanbul-lib-report: 3.0.1 @@ -12121,17 +12207,30 @@ snapshots: obug: 2.1.1 std-env: 4.0.0 tinyrainbow: 3.1.0 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3)' + vitest: 4.1.4(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.9.0) + optional: true - '@vitest/eslint-plugin@1.6.14(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@vitest/eslint-plugin@1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) optionalDependencies: - '@typescript-eslint/eslint-plugin': 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) typescript: 6.0.2 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitest: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + transitivePeerDependencies: + - supports-color + + '@vitest/eslint-plugin@1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)(vitest@4.1.4)': + dependencies: + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint: 10.2.0(jiti@2.6.1) + optionalDependencies: + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + typescript: 6.0.2 + vitest: 4.1.4(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.9.0) transitivePeerDependencies: - supports-color @@ -12143,38 +12242,75 @@ snapshots: chai: 5.3.3 tinyrainbow: 2.0.0 + '@vitest/expect@4.1.4': + dependencies: + '@standard-schema/spec': 1.1.0 + '@types/chai': 5.2.3 + '@vitest/spy': 4.1.4 + '@vitest/utils': 4.1.4 + chai: 6.2.2 + tinyrainbow: 3.1.0 + optional: true + + '@vitest/mocker@4.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + dependencies: + '@vitest/spy': 4.1.4 + estree-walker: 3.0.3 + magic-string: 0.30.21 + optionalDependencies: + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + optional: true + '@vitest/pretty-format@3.2.4': dependencies: tinyrainbow: 2.0.0 - '@vitest/pretty-format@4.1.3': + '@vitest/pretty-format@4.1.4': dependencies: tinyrainbow: 3.1.0 + '@vitest/runner@4.1.4': + dependencies: + '@vitest/utils': 4.1.4 + pathe: 2.0.3 + optional: true + + '@vitest/snapshot@4.1.4': + dependencies: + '@vitest/pretty-format': 4.1.4 + '@vitest/utils': 4.1.4 + magic-string: 0.30.21 + pathe: 2.0.3 + optional: true + '@vitest/spy@3.2.4': dependencies: tinyspy: 4.0.4 + '@vitest/spy@4.1.4': + optional: true + '@vitest/utils@3.2.4': dependencies: '@vitest/pretty-format': 3.2.4 loupe: 3.2.1 tinyrainbow: 2.0.0 - '@vitest/utils@4.1.3': + '@vitest/utils@4.1.4': dependencies: - '@vitest/pretty-format': 4.1.3 + '@vitest/pretty-format': 4.1.4 convert-source-map: 2.0.0 tinyrainbow: 3.1.0 - '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': + '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': dependencies: - '@oxc-project/runtime': 0.123.0 - '@oxc-project/types': 0.123.0 + '@oxc-project/runtime': 0.124.0 + '@oxc-project/types': 0.124.0 lightningcss: 1.32.0 postcss: 8.5.9 optionalDependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 + esbuild: 0.27.2 fsevents: 2.3.3 jiti: 2.6.1 sass: 1.98.0 @@ -12183,29 +12319,29 @@ snapshots: typescript: 6.0.2 yaml: 2.8.3 - '@voidzero-dev/vite-plus-darwin-arm64@0.1.16': + '@voidzero-dev/vite-plus-darwin-arm64@0.1.18': optional: true - '@voidzero-dev/vite-plus-darwin-x64@0.1.16': + '@voidzero-dev/vite-plus-darwin-x64@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.16': + '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.16': + '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.16': + '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-x64-musl@0.1.16': + '@voidzero-dev/vite-plus-linux-x64-musl@0.1.18': optional: true - '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': + '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': dependencies: '@standard-schema/spec': 1.1.0 '@types/chai': 5.2.3 - '@voidzero-dev/vite-plus-core': 0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + '@voidzero-dev/vite-plus-core': 0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) es-module-lexer: 1.7.0 obug: 2.1.1 pixelmatch: 7.1.0 @@ -12214,12 +12350,13 @@ snapshots: std-env: 4.0.0 tinybench: 2.9.0 tinyexec: 1.0.4 - tinyglobby: 0.2.15 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + tinyglobby: 0.2.16 + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' ws: 8.20.0 optionalDependencies: - '@types/node': 25.5.2 - happy-dom: 20.8.9 + '@types/node': 25.6.0 + '@vitest/coverage-v8': 4.1.4(@voidzero-dev/vite-plus-test@0.1.18) + happy-dom: 20.9.0 transitivePeerDependencies: - '@arethetypeswrong/core' - '@tsdown/css' @@ -12241,50 +12378,10 @@ snapshots: - utf-8-validate - yaml - '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3)': - dependencies: - '@standard-schema/spec': 1.1.0 - '@types/chai': 5.2.3 - '@voidzero-dev/vite-plus-core': 0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) - es-module-lexer: 1.7.0 - obug: 2.1.1 - pixelmatch: 7.1.0 - pngjs: 7.0.0 - sirv: 3.0.2 - std-env: 4.0.0 - tinybench: 2.9.0 - tinyexec: 1.0.4 - tinyglobby: 0.2.15 - vite: 8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3) - ws: 8.20.0 - optionalDependencies: - '@types/node': 25.5.2 - happy-dom: 20.8.9 - transitivePeerDependencies: - - '@arethetypeswrong/core' - - '@tsdown/css' - - '@tsdown/exe' - - '@vitejs/devtools' - - bufferutil - - esbuild - - jiti - - less - - publint - - sass - - sass-embedded - - stylus - - sugarss - - terser - - tsx - - typescript - - unplugin-unused - - utf-8-validate - - yaml - - '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.16': + '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.18': optional: true - '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.16': + '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.18': optional: true '@volar/language-core@2.4.28': @@ -12433,12 +12530,12 @@ snapshots: acorn@8.16.0: {} - agentation@3.0.2(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + agentation@3.0.2(react-dom@19.2.5(react@19.2.5))(react@19.2.5): optionalDependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - ahooks@3.9.7(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + ahooks@3.9.7(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: '@babel/runtime': 7.29.2 '@types/js-cookie': 3.0.6 @@ -12446,8 +12543,8 @@ snapshots: intersection-observer: 0.12.2 js-cookie: 3.0.5 lodash: 4.18.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) react-fast-compare: 3.2.2 resize-observer-polyfill: 1.5.1 screenfull: 5.2.0 @@ -12627,6 +12724,9 @@ snapshots: loupe: 3.2.1 pathval: 2.0.1 + chai@6.2.2: + optional: true + chalk@4.1.1: dependencies: ansi-styles: 4.3.0 @@ -12726,14 +12826,14 @@ snapshots: clsx@2.1.1: {} - cmdk@1.1.1(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + cmdk@1.1.1(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-dialog': 1.1.15(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@radix-ui/react-id': 1.1.1(@types/react@19.2.14)(react@19.2.4) - '@radix-ui/react-primitive': 2.1.4(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@radix-ui/react-compose-refs': 1.1.2(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-dialog': 1.1.15(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@radix-ui/react-id': 1.1.1(@types/react@19.2.14)(react@19.2.5) + '@radix-ui/react-primitive': 2.1.4(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) transitivePeerDependencies: - '@types/react' - '@types/react-dom' @@ -13042,6 +13142,8 @@ snapshots: d3: 7.9.0 lodash-es: 4.18.0 + date-fns@4.1.0: {} + dayjs@1.11.20: {} debug@4.4.3(supports-color@8.1.1): @@ -13119,7 +13221,7 @@ snapshots: optionalDependencies: '@types/trusted-types': 2.0.7 - dompurify@3.3.3: + dompurify@3.4.0: optionalDependencies: '@types/trusted-types': 2.0.7 @@ -13131,11 +13233,11 @@ snapshots: dotenv@16.6.1: {} - echarts-for-react@3.0.6(echarts@6.0.0)(react@19.2.4): + echarts-for-react@3.0.6(echarts@6.0.0)(react@19.2.5): dependencies: echarts: 6.0.0 fast-deep-equal: 3.1.3 - react: 19.2.4 + react: 19.2.5 size-sensor: 1.0.3 echarts@6.0.0: @@ -13151,11 +13253,11 @@ snapshots: dependencies: embla-carousel: 8.6.0 - embla-carousel-react@8.6.0(react@19.2.4): + embla-carousel-react@8.6.0(react@19.2.5): dependencies: embla-carousel: 8.6.0 embla-carousel-reactive-utils: 8.6.0(embla-carousel@8.6.0) - react: 19.2.4 + react: 19.2.5 embla-carousel-reactive-utils@8.6.0(embla-carousel@8.6.0): dependencies: @@ -13178,6 +13280,20 @@ snapshots: dependencies: once: 1.4.0 + engine.io-client@6.6.4: + dependencies: + '@socket.io/component-emitter': 3.1.2 + debug: 4.4.3(supports-color@8.1.1) + engine.io-parser: 5.2.3 + ws: 8.18.3 + xmlhttprequest-ssl: 2.1.2 + transitivePeerDependencies: + - bufferutil + - supports-color + - utf-8-validate + + engine.io-parser@5.2.3: {} + enhanced-resolve@5.20.1: dependencies: graceful-fs: 4.2.11 @@ -13275,7 +13391,7 @@ snapshots: esquery: 1.7.0 jsonc-eslint-parser: 3.1.0 - eslint-markdown@0.6.0(eslint@10.2.0(jiti@2.6.1)): + eslint-markdown@0.6.1(eslint@10.2.0(jiti@2.6.1)): dependencies: '@eslint/markdown': 7.5.1 micromark-util-normalize-identifier: 2.0.1 @@ -13293,30 +13409,30 @@ snapshots: dependencies: eslint: 10.2.0(jiti@2.6.1) - eslint-plugin-better-tailwindcss@4.3.2(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2): + eslint-plugin-better-tailwindcss@4.4.1(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2): dependencies: - '@eslint/css-tree': 3.6.9 + '@eslint/css-tree': 4.0.1 '@valibot/to-json-schema': 1.6.0(valibot@1.3.1(typescript@6.0.2)) enhanced-resolve: 5.20.1 jiti: 2.6.1 synckit: 0.11.12 - tailwind-csstree: 0.1.5 + tailwind-csstree: 0.3.1 tailwindcss: 4.2.2 tsconfig-paths-webpack-plugin: 4.2.0 valibot: 1.3.1(typescript@6.0.2) optionalDependencies: eslint: 10.2.0(jiti@2.6.1) - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) + oxlint: 1.60.0(oxlint-tsgolint@0.20.0) transitivePeerDependencies: - '@eslint/css' - typescript - eslint-plugin-command@3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-command@3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): dependencies: '@es-joy/jsdoccomment': 0.84.0 '@typescript-eslint/rule-tester': 8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) eslint-plugin-depend@1.5.0(eslint@10.2.0(jiti@2.6.1)): @@ -13341,12 +13457,12 @@ snapshots: dependencies: eslint: 10.2.0(jiti@2.6.1) - eslint-plugin-jsdoc@62.8.1(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-jsdoc@62.9.0(eslint@10.2.0(jiti@2.6.1)): dependencies: - '@es-joy/jsdoccomment': 0.84.0 + '@es-joy/jsdoccomment': 0.86.0 '@es-joy/resolve.exports': 1.2.0 are-docs-informative: 0.0.2 - comment-parser: 1.4.5 + comment-parser: 1.4.6 debug: 4.4.3(supports-color@8.1.1) escape-string-regexp: 4.0.0 eslint: 10.2.0(jiti@2.6.1) @@ -13376,7 +13492,7 @@ snapshots: transitivePeerDependencies: - '@eslint/json' - eslint-plugin-markdown-preferences@0.41.0(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-markdown-preferences@0.41.1(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)): dependencies: '@eslint/markdown': 8.0.1 diff-sequences: 29.6.3 @@ -13411,19 +13527,19 @@ snapshots: transitivePeerDependencies: - typescript - eslint-plugin-no-barrel-files@1.2.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): + eslint-plugin-no-barrel-files@1.3.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint: 10.2.0(jiti@2.6.1) transitivePeerDependencies: - - eslint - supports-color - typescript eslint-plugin-no-only-tests@3.3.0: {} - eslint-plugin-perfectionist@5.7.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): + eslint-plugin-perfectionist@5.8.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) natural-orderby: 5.0.0 transitivePeerDependencies: @@ -13437,7 +13553,7 @@ snapshots: jsonc-eslint-parser: 3.1.0 pathe: 2.0.3 pnpm-workspace-yaml: 1.6.0 - tinyglobby: 0.2.15 + tinyglobby: 0.2.16 yaml: 2.8.3 yaml-eslint-parser: 2.0.0 @@ -13447,9 +13563,9 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) compare-versions: 6.1.1 eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 @@ -13463,10 +13579,10 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) compare-versions: 6.1.1 eslint: 10.2.0(jiti@2.6.1) string-ts: 2.3.1 @@ -13484,10 +13600,10 @@ snapshots: '@eslint-react/ast': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -13500,9 +13616,9 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) birecord: 0.1.1 eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 @@ -13516,10 +13632,10 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) compare-versions: 6.1.1 eslint: 10.2.0(jiti@2.6.1) string-ts: 2.3.1 @@ -13535,7 +13651,7 @@ snapshots: '@eslint-community/regexpp': 4.12.2 comment-parser: 1.4.6 eslint: 10.2.0(jiti@2.6.1) - jsdoc-type-pratt-parser: 7.1.1 + jsdoc-type-pratt-parser: 7.2.0 refa: 0.12.1 regexp-ast-analysis: 0.7.1 scslre: 0.3.0 @@ -13547,7 +13663,7 @@ snapshots: bytes: 3.1.2 eslint: 10.2.0(jiti@2.6.1) functional-red-black-tree: 1.0.1 - globals: 17.4.0 + globals: 17.5.0 jsx-ast-utils-x: 0.1.0 lodash.merge: 4.6.2 minimatch: 10.2.4 @@ -13556,11 +13672,11 @@ snapshots: ts-api-utils: 2.5.0(typescript@6.0.2) typescript: 6.0.2 - eslint-plugin-storybook@10.3.5(eslint@10.2.0(jiti@2.6.1))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2): + eslint-plugin-storybook@10.3.5(eslint@10.2.0(jiti@2.6.1))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2): dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) transitivePeerDependencies: - supports-color - typescript @@ -13586,7 +13702,7 @@ snapshots: core-js-compat: 3.49.0 eslint: 10.2.0(jiti@2.6.1) find-up-simple: 1.0.1 - globals: 17.4.0 + globals: 17.5.0 indent-string: 5.0.0 is-builtin-module: 5.0.0 jsesc: 3.1.0 @@ -13596,13 +13712,13 @@ snapshots: semver: 7.7.4 strip-indent: 4.1.1 - eslint-plugin-unused-imports@4.4.1(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-unused-imports@4.4.1(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): dependencies: eslint: 10.2.0(jiti@2.6.1) optionalDependencies: - '@typescript-eslint/eslint-plugin': 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - eslint-plugin-vue@10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))): + eslint-plugin-vue@10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))): dependencies: '@eslint-community/eslint-utils': 4.9.1(eslint@10.2.0(jiti@2.6.1)) eslint: 10.2.0(jiti@2.6.1) @@ -13614,7 +13730,7 @@ snapshots: xml-name-validator: 4.0.0 optionalDependencies: '@stylistic/eslint-plugin': 5.10.0(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/parser': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-yml@3.3.1(eslint@10.2.0(jiti@2.6.1)): dependencies: @@ -13800,13 +13916,14 @@ snapshots: esutils@2.0.3: {} - event-target-bus@1.0.0: {} - events@3.3.0: {} expand-template@2.0.3: optional: true + expect-type@1.3.0: + optional: true + exsolve@1.0.8: {} extend@3.0.2: {} @@ -13909,15 +14026,6 @@ snapshots: dependencies: fd-package-json: 2.0.0 - foxact@0.3.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4): - dependencies: - client-only: 0.0.1 - event-target-bus: 1.0.0 - server-only: 0.0.1 - optionalDependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - fs-constants@1.0.0: optional: true @@ -13972,7 +14080,7 @@ snapshots: globals@15.15.0: {} - globals@17.4.0: {} + globals@17.5.0: {} globrex@0.1.2: {} @@ -13984,9 +14092,9 @@ snapshots: hachure-fill@0.5.2: {} - happy-dom@20.8.9: + happy-dom@20.9.0: dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 '@types/whatwg-mimetype': 3.0.2 '@types/ws': 8.18.1 entities: 7.0.1 @@ -14151,7 +14259,7 @@ snapshots: hex-rgb@4.3.0: {} - hono@4.12.12: {} + hono@4.12.14: {} hosted-git-info@9.0.2: dependencies: @@ -14182,7 +14290,7 @@ snapshots: dependencies: '@babel/runtime': 7.29.2 - i18next@26.0.3(typescript@6.0.2): + i18next@26.0.4(typescript@6.0.2): dependencies: '@babel/runtime': 7.29.2 optionalDependencies: @@ -14312,18 +14420,18 @@ snapshots: jest-worker@27.5.1: dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 merge-stream: 2.0.0 supports-color: 8.1.1 jiti@2.6.1: {} - jotai@2.19.1(@babel/core@7.29.0)(@babel/template@7.28.6)(@types/react@19.2.14)(react@19.2.4): + jotai@2.19.1(@babel/core@7.29.0)(@babel/template@7.28.6)(@types/react@19.2.14)(react@19.2.5): optionalDependencies: '@babel/core': 7.29.0 '@babel/template': 7.28.6 '@types/react': 19.2.14 - react: 19.2.4 + react: 19.2.5 js-audio-recorder@1.0.7: {} @@ -14343,6 +14451,8 @@ snapshots: jsdoc-type-pratt-parser@7.1.1: {} + jsdoc-type-pratt-parser@7.2.0: {} + jsesc@3.1.0: {} json-buffer@3.0.1: {} @@ -14383,7 +14493,7 @@ snapshots: khroma@2.1.0: {} - knip@6.3.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): + knip@6.4.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): dependencies: '@nodelib/fs.walk': 1.2.8 fast-glob: 3.3.3 @@ -14391,8 +14501,8 @@ snapshots: get-tsconfig: 4.13.7 jiti: 2.6.1 minimist: 1.2.8 - oxc-parser: 0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) - oxc-resolver: 11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + oxc-parser: 0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) + oxc-resolver: 11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) picocolors: 1.1.1 picomatch: 4.0.4 smol-toml: 1.6.1 @@ -14438,12 +14548,12 @@ snapshots: prelude-ls: 1.2.1 type-check: 0.4.0 - lexical-code-no-prism@0.41.0(@lexical/utils@0.42.0)(lexical@0.42.0): + lexical-code-no-prism@0.41.0(@lexical/utils@0.43.0)(lexical@0.43.0): dependencies: - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - lexical@0.42.0: {} + lexical@0.43.0: {} lib0@0.2.117: dependencies: @@ -14531,6 +14641,8 @@ snapshots: dependencies: js-tokens: 4.0.0 + loro-crdt@1.10.8: {} + loupe@3.2.1: {} lower-case@2.0.2: @@ -14780,7 +14892,7 @@ snapshots: mdn-data@2.0.30: {} - mdn-data@2.23.0: {} + mdn-data@2.27.1: {} merge-stream@2.0.0: {} @@ -15187,30 +15299,30 @@ snapshots: neo-async@2.6.2: {} - next-themes@0.4.6(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + next-themes@0.4.6(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0): + next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0): dependencies: - '@next/env': 16.2.2 + '@next/env': 16.2.3 '@swc/helpers': 0.5.15 baseline-browser-mapping: 2.10.12 caniuse-lite: 1.0.30001781 postcss: 8.4.31 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - styled-jsx: 5.1.6(@babel/core@7.29.0)(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + styled-jsx: 5.1.6(@babel/core@7.29.0)(react@19.2.5) optionalDependencies: - '@next/swc-darwin-arm64': 16.2.2 - '@next/swc-darwin-x64': 16.2.2 - '@next/swc-linux-arm64-gnu': 16.2.2 - '@next/swc-linux-arm64-musl': 16.2.2 - '@next/swc-linux-x64-gnu': 16.2.2 - '@next/swc-linux-x64-musl': 16.2.2 - '@next/swc-win32-arm64-msvc': 16.2.2 - '@next/swc-win32-x64-msvc': 16.2.2 + '@next/swc-darwin-arm64': 16.2.3 + '@next/swc-darwin-x64': 16.2.3 + '@next/swc-linux-arm64-gnu': 16.2.3 + '@next/swc-linux-arm64-musl': 16.2.3 + '@next/swc-linux-x64-gnu': 16.2.3 + '@next/swc-linux-x64-musl': 16.2.3 + '@next/swc-win32-arm64-msvc': 16.2.3 + '@next/swc-win32-x64-msvc': 16.2.3 '@playwright/test': 1.59.1 sass: 1.98.0 sharp: 0.34.5 @@ -15245,12 +15357,12 @@ snapshots: dependencies: boolbase: 1.0.0 - nuqs@2.8.9(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react@19.2.4): + nuqs@2.8.9(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react@19.2.5): dependencies: '@standard-schema/spec': 1.0.0 - react: 19.2.4 + react: 19.2.5 optionalDependencies: - next: 16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0) + next: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) object-assign@4.1.1: {} @@ -15299,7 +15411,7 @@ snapshots: type-check: 0.4.0 word-wrap: 1.2.5 - oxc-parser@0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): + oxc-parser@0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): dependencies: '@oxc-project/types': 0.121.0 optionalDependencies: @@ -15319,7 +15431,7 @@ snapshots: '@oxc-parser/binding-linux-x64-gnu': 0.121.0 '@oxc-parser/binding-linux-x64-musl': 0.121.0 '@oxc-parser/binding-openharmony-arm64': 0.121.0 - '@oxc-parser/binding-wasm32-wasi': 0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@oxc-parser/binding-wasm32-wasi': 0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) '@oxc-parser/binding-win32-arm64-msvc': 0.121.0 '@oxc-parser/binding-win32-ia32-msvc': 0.121.0 '@oxc-parser/binding-win32-x64-msvc': 0.121.0 @@ -15327,7 +15439,7 @@ snapshots: - '@emnapi/core' - '@emnapi/runtime' - oxc-resolver@11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): + oxc-resolver@11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): optionalDependencies: '@oxc-resolver/binding-android-arm-eabi': 11.19.1 '@oxc-resolver/binding-android-arm64': 11.19.1 @@ -15345,7 +15457,7 @@ snapshots: '@oxc-resolver/binding-linux-x64-gnu': 11.19.1 '@oxc-resolver/binding-linux-x64-musl': 11.19.1 '@oxc-resolver/binding-openharmony-arm64': 11.19.1 - '@oxc-resolver/binding-wasm32-wasi': 11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@oxc-resolver/binding-wasm32-wasi': 11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) '@oxc-resolver/binding-win32-arm64-msvc': 11.19.1 '@oxc-resolver/binding-win32-ia32-msvc': 11.19.1 '@oxc-resolver/binding-win32-x64-msvc': 11.19.1 @@ -15353,29 +15465,29 @@ snapshots: - '@emnapi/core' - '@emnapi/runtime' - oxfmt@0.43.0: + oxfmt@0.45.0: dependencies: tinypool: 2.1.0 optionalDependencies: - '@oxfmt/binding-android-arm-eabi': 0.43.0 - '@oxfmt/binding-android-arm64': 0.43.0 - '@oxfmt/binding-darwin-arm64': 0.43.0 - '@oxfmt/binding-darwin-x64': 0.43.0 - '@oxfmt/binding-freebsd-x64': 0.43.0 - '@oxfmt/binding-linux-arm-gnueabihf': 0.43.0 - '@oxfmt/binding-linux-arm-musleabihf': 0.43.0 - '@oxfmt/binding-linux-arm64-gnu': 0.43.0 - '@oxfmt/binding-linux-arm64-musl': 0.43.0 - '@oxfmt/binding-linux-ppc64-gnu': 0.43.0 - '@oxfmt/binding-linux-riscv64-gnu': 0.43.0 - '@oxfmt/binding-linux-riscv64-musl': 0.43.0 - '@oxfmt/binding-linux-s390x-gnu': 0.43.0 - '@oxfmt/binding-linux-x64-gnu': 0.43.0 - '@oxfmt/binding-linux-x64-musl': 0.43.0 - '@oxfmt/binding-openharmony-arm64': 0.43.0 - '@oxfmt/binding-win32-arm64-msvc': 0.43.0 - '@oxfmt/binding-win32-ia32-msvc': 0.43.0 - '@oxfmt/binding-win32-x64-msvc': 0.43.0 + '@oxfmt/binding-android-arm-eabi': 0.45.0 + '@oxfmt/binding-android-arm64': 0.45.0 + '@oxfmt/binding-darwin-arm64': 0.45.0 + '@oxfmt/binding-darwin-x64': 0.45.0 + '@oxfmt/binding-freebsd-x64': 0.45.0 + '@oxfmt/binding-linux-arm-gnueabihf': 0.45.0 + '@oxfmt/binding-linux-arm-musleabihf': 0.45.0 + '@oxfmt/binding-linux-arm64-gnu': 0.45.0 + '@oxfmt/binding-linux-arm64-musl': 0.45.0 + '@oxfmt/binding-linux-ppc64-gnu': 0.45.0 + '@oxfmt/binding-linux-riscv64-gnu': 0.45.0 + '@oxfmt/binding-linux-riscv64-musl': 0.45.0 + '@oxfmt/binding-linux-s390x-gnu': 0.45.0 + '@oxfmt/binding-linux-x64-gnu': 0.45.0 + '@oxfmt/binding-linux-x64-musl': 0.45.0 + '@oxfmt/binding-openharmony-arm64': 0.45.0 + '@oxfmt/binding-win32-arm64-msvc': 0.45.0 + '@oxfmt/binding-win32-ia32-msvc': 0.45.0 + '@oxfmt/binding-win32-x64-msvc': 0.45.0 oxlint-tsgolint@0.20.0: optionalDependencies: @@ -15386,27 +15498,27 @@ snapshots: '@oxlint-tsgolint/win32-arm64': 0.20.0 '@oxlint-tsgolint/win32-x64': 0.20.0 - oxlint@1.58.0(oxlint-tsgolint@0.20.0): + oxlint@1.60.0(oxlint-tsgolint@0.20.0): optionalDependencies: - '@oxlint/binding-android-arm-eabi': 1.58.0 - '@oxlint/binding-android-arm64': 1.58.0 - '@oxlint/binding-darwin-arm64': 1.58.0 - '@oxlint/binding-darwin-x64': 1.58.0 - '@oxlint/binding-freebsd-x64': 1.58.0 - '@oxlint/binding-linux-arm-gnueabihf': 1.58.0 - '@oxlint/binding-linux-arm-musleabihf': 1.58.0 - '@oxlint/binding-linux-arm64-gnu': 1.58.0 - '@oxlint/binding-linux-arm64-musl': 1.58.0 - '@oxlint/binding-linux-ppc64-gnu': 1.58.0 - '@oxlint/binding-linux-riscv64-gnu': 1.58.0 - '@oxlint/binding-linux-riscv64-musl': 1.58.0 - '@oxlint/binding-linux-s390x-gnu': 1.58.0 - '@oxlint/binding-linux-x64-gnu': 1.58.0 - '@oxlint/binding-linux-x64-musl': 1.58.0 - '@oxlint/binding-openharmony-arm64': 1.58.0 - '@oxlint/binding-win32-arm64-msvc': 1.58.0 - '@oxlint/binding-win32-ia32-msvc': 1.58.0 - '@oxlint/binding-win32-x64-msvc': 1.58.0 + '@oxlint/binding-android-arm-eabi': 1.60.0 + '@oxlint/binding-android-arm64': 1.60.0 + '@oxlint/binding-darwin-arm64': 1.60.0 + '@oxlint/binding-darwin-x64': 1.60.0 + '@oxlint/binding-freebsd-x64': 1.60.0 + '@oxlint/binding-linux-arm-gnueabihf': 1.60.0 + '@oxlint/binding-linux-arm-musleabihf': 1.60.0 + '@oxlint/binding-linux-arm64-gnu': 1.60.0 + '@oxlint/binding-linux-arm64-musl': 1.60.0 + '@oxlint/binding-linux-ppc64-gnu': 1.60.0 + '@oxlint/binding-linux-riscv64-gnu': 1.60.0 + '@oxlint/binding-linux-riscv64-musl': 1.60.0 + '@oxlint/binding-linux-s390x-gnu': 1.60.0 + '@oxlint/binding-linux-x64-gnu': 1.60.0 + '@oxlint/binding-linux-x64-musl': 1.60.0 + '@oxlint/binding-openharmony-arm64': 1.60.0 + '@oxlint/binding-win32-arm64-msvc': 1.60.0 + '@oxlint/binding-win32-ia32-msvc': 1.60.0 + '@oxlint/binding-win32-x64-msvc': 1.60.0 oxlint-tsgolint: 0.20.0 p-limit@3.1.0: @@ -15518,7 +15630,7 @@ snapshots: picomatch@4.0.4: {} - pinyin-pro@3.28.0: {} + pinyin-pro@3.28.1: {} pixelmatch@7.1.0: dependencies: @@ -15635,11 +15747,11 @@ snapshots: punycode@2.3.1: {} - qrcode.react@4.2.0(react@19.2.4): + qrcode.react@4.2.0(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 - qs@6.15.0: + qs@6.15.1: dependencies: side-channel: '@nolyfill/side-channel@1.0.44' @@ -15657,15 +15769,15 @@ snapshots: strip-json-comments: 2.0.1 optional: true - re-resizable@6.11.2(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + re-resizable@6.11.2(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - react-18-input-autosize@3.0.0(react@19.2.4): + react-18-input-autosize@3.0.0(react@19.2.5): dependencies: prop-types: 15.8.1 - react: 19.2.4 + react: 19.2.5 react-docgen-typescript@2.4.0(typescript@6.0.2): dependencies: @@ -15686,143 +15798,143 @@ snapshots: transitivePeerDependencies: - supports-color - react-dom@19.2.4(react@19.2.4): + react-dom@19.2.5(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 scheduler: 0.27.0 - react-draggable@4.5.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + react-draggable@4.5.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: clsx: 2.1.1 prop-types: 15.8.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - react-easy-crop@5.5.7(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + react-easy-crop@5.5.7(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: normalize-wheel: 1.0.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) tslib: 2.8.1 - react-error-boundary@6.1.1(react@19.2.4): + react-error-boundary@6.1.1(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 react-fast-compare@3.2.2: {} - react-hotkeys-hook@5.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + react-hotkeys-hook@5.2.4(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) - react-i18next@17.0.2(i18next@26.0.3(typescript@6.0.2))(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(typescript@6.0.2): + react-i18next@16.5.8(i18next@26.0.4(typescript@6.0.2))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(typescript@6.0.2): dependencies: '@babel/runtime': 7.29.2 html-parse-stringify: 3.0.1 - i18next: 26.0.3(typescript@6.0.2) - react: 19.2.4 - use-sync-external-store: 1.6.0(react@19.2.4) + i18next: 26.0.4(typescript@6.0.2) + react: 19.2.5 + use-sync-external-store: 1.6.0(react@19.2.5) optionalDependencies: - react-dom: 19.2.4(react@19.2.4) + react-dom: 19.2.5(react@19.2.5) typescript: 6.0.2 react-is@16.13.1: {} react-is@17.0.2: {} - react-multi-email@1.0.25(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + react-multi-email@1.0.25(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) react-papaparse@4.4.0: dependencies: '@types/papaparse': 5.5.2 papaparse: 5.5.3 - react-pdf-highlighter@8.0.0-rc.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + react-pdf-highlighter@8.0.0-rc.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: pdfjs-dist: 4.4.168 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - react-rnd: 10.5.3(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + react-rnd: 10.5.3(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-debounce: 4.0.0 - react-remove-scroll-bar@2.3.8(@types/react@19.2.14)(react@19.2.4): + react-remove-scroll-bar@2.3.8(@types/react@19.2.14)(react@19.2.5): dependencies: - react: 19.2.4 - react-style-singleton: 2.2.3(@types/react@19.2.14)(react@19.2.4) + react: 19.2.5 + react-style-singleton: 2.2.3(@types/react@19.2.14)(react@19.2.5) tslib: 2.8.1 optionalDependencies: '@types/react': 19.2.14 - react-remove-scroll@2.7.2(@types/react@19.2.14)(react@19.2.4): + react-remove-scroll@2.7.2(@types/react@19.2.14)(react@19.2.5): dependencies: - react: 19.2.4 - react-remove-scroll-bar: 2.3.8(@types/react@19.2.14)(react@19.2.4) - react-style-singleton: 2.2.3(@types/react@19.2.14)(react@19.2.4) + react: 19.2.5 + react-remove-scroll-bar: 2.3.8(@types/react@19.2.14)(react@19.2.5) + react-style-singleton: 2.2.3(@types/react@19.2.14)(react@19.2.5) tslib: 2.8.1 - use-callback-ref: 1.3.3(@types/react@19.2.14)(react@19.2.4) - use-sidecar: 1.1.3(@types/react@19.2.14)(react@19.2.4) + use-callback-ref: 1.3.3(@types/react@19.2.14)(react@19.2.5) + use-sidecar: 1.1.3(@types/react@19.2.14)(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 - react-rnd@10.5.3(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + react-rnd@10.5.3(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - re-resizable: 6.11.2(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - react-draggable: 4.5.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + re-resizable: 6.11.2(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + react-draggable: 4.5.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) tslib: 2.6.2 - react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)): + react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)): dependencies: acorn-loose: 8.5.2 neo-async: 2.6.2 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - webpack: 5.105.4(uglify-js@3.19.3) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) webpack-sources: 3.3.4 - react-sortablejs@6.1.4(@types/sortablejs@1.15.9)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sortablejs@1.15.7): + react-sortablejs@6.1.4(@types/sortablejs@1.15.9)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sortablejs@1.15.7): dependencies: '@types/sortablejs': 1.15.9 classnames: 2.3.1 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) sortablejs: 1.15.7 tiny-invariant: 1.2.0 - react-style-singleton@2.2.3(@types/react@19.2.14)(react@19.2.4): + react-style-singleton@2.2.3(@types/react@19.2.14)(react@19.2.5): dependencies: get-nonce: 1.0.1 - react: 19.2.4 + react: 19.2.5 tslib: 2.8.1 optionalDependencies: '@types/react': 19.2.14 - react-textarea-autosize@8.5.9(@types/react@19.2.14)(react@19.2.4): + react-textarea-autosize@8.5.9(@types/react@19.2.14)(react@19.2.5): dependencies: '@babel/runtime': 7.29.2 - react: 19.2.4 - use-composed-ref: 1.4.0(@types/react@19.2.14)(react@19.2.4) - use-latest: 1.3.0(@types/react@19.2.14)(react@19.2.4) + react: 19.2.5 + use-composed-ref: 1.4.0(@types/react@19.2.14)(react@19.2.5) + use-latest: 1.3.0(@types/react@19.2.14)(react@19.2.5) transitivePeerDependencies: - '@types/react' - react@19.2.4: {} + react@19.2.5: {} - reactflow@11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + reactflow@11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: - '@reactflow/background': 11.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@reactflow/controls': 11.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@reactflow/minimap': 11.7.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@reactflow/node-resizer': 2.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - '@reactflow/node-toolbar': 1.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + '@reactflow/background': 11.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@reactflow/controls': 11.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@reactflow/core': 11.11.4(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@reactflow/minimap': 11.7.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@reactflow/node-resizer': 2.2.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@reactflow/node-toolbar': 1.3.14(@types/react@19.2.14)(immer@11.1.4)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) transitivePeerDependencies: - '@types/react' - immer @@ -16048,30 +16160,6 @@ snapshots: robust-predicates@3.0.3: {} - rolldown@1.0.0-rc.12(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): - dependencies: - '@oxc-project/types': 0.122.0 - '@rolldown/pluginutils': 1.0.0-rc.12 - optionalDependencies: - '@rolldown/binding-android-arm64': 1.0.0-rc.12 - '@rolldown/binding-darwin-arm64': 1.0.0-rc.12 - '@rolldown/binding-darwin-x64': 1.0.0-rc.12 - '@rolldown/binding-freebsd-x64': 1.0.0-rc.12 - '@rolldown/binding-linux-arm-gnueabihf': 1.0.0-rc.12 - '@rolldown/binding-linux-arm64-gnu': 1.0.0-rc.12 - '@rolldown/binding-linux-arm64-musl': 1.0.0-rc.12 - '@rolldown/binding-linux-ppc64-gnu': 1.0.0-rc.12 - '@rolldown/binding-linux-s390x-gnu': 1.0.0-rc.12 - '@rolldown/binding-linux-x64-gnu': 1.0.0-rc.12 - '@rolldown/binding-linux-x64-musl': 1.0.0-rc.12 - '@rolldown/binding-openharmony-arm64': 1.0.0-rc.12 - '@rolldown/binding-wasm32-wasi': 1.0.0-rc.12(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) - '@rolldown/binding-win32-arm64-msvc': 1.0.0-rc.12 - '@rolldown/binding-win32-x64-msvc': 1.0.0-rc.12 - transitivePeerDependencies: - - '@emnapi/core' - - '@emnapi/runtime' - rollup@4.59.0: dependencies: '@types/estree': 1.0.8 @@ -16177,8 +16265,6 @@ snapshots: seroval@1.5.1: {} - server-only@0.0.1: {} - sharp@0.34.5: dependencies: '@img/colour': 1.1.0 @@ -16227,6 +16313,9 @@ snapshots: '@shikijs/vscode-textmate': 10.0.2 '@types/hast': 3.0.4 + siginfo@2.0.0: + optional: true + simple-concat@1.0.1: optional: true @@ -16249,6 +16338,24 @@ snapshots: smol-toml@1.6.1: {} + socket.io-client@4.8.3: + dependencies: + '@socket.io/component-emitter': 3.1.2 + debug: 4.4.3(supports-color@8.1.1) + engine.io-client: 6.6.4 + socket.io-parser: 4.2.6 + transitivePeerDependencies: + - bufferutil + - supports-color + - utf-8-validate + + socket.io-parser@4.2.6: + dependencies: + '@socket.io/component-emitter': 3.1.2 + debug: 4.4.3(supports-color@8.1.1) + transitivePeerDependencies: + - supports-color + solid-js@1.9.11: dependencies: csstype: 3.2.3 @@ -16291,6 +16398,9 @@ snapshots: srvx@0.11.15: {} + stackback@0.0.2: + optional: true + stackframe@1.3.4: {} state-local@1.0.7: {} @@ -16299,10 +16409,10 @@ snapshots: std-semver@1.0.8: {} - storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: '@storybook/global': 5.0.0 - '@storybook/icons': 2.0.1(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@storybook/icons': 2.0.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@testing-library/jest-dom': 6.9.1 '@testing-library/user-event': 14.6.1(@testing-library/dom@10.4.1) '@vitest/expect': 3.2.4 @@ -16312,7 +16422,7 @@ snapshots: open: 10.2.0 recast: 0.23.11 semver: 7.7.4 - use-sync-external-store: 1.6.0(react@19.2.4) + use-sync-external-store: 1.6.0(react@19.2.5) ws: 8.20.0 transitivePeerDependencies: - '@testing-library/dom' @@ -16321,15 +16431,15 @@ snapshots: - react-dom - utf-8-validate - streamdown@2.5.0(react-dom@19.2.4(react@19.2.4))(react@19.2.4): + streamdown@2.5.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5): dependencies: clsx: 2.1.1 hast-util-to-jsx-runtime: 2.3.6 html-url-attributes: 3.0.1 marked: 17.0.5 mermaid: 11.14.0 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) rehype-harden: 1.1.8 rehype-raw: 7.0.0 rehype-sanitize: 6.0.0 @@ -16398,10 +16508,10 @@ snapshots: dependencies: inline-style-parser: 0.2.7 - styled-jsx@5.1.6(@babel/core@7.29.0)(react@19.2.4): + styled-jsx@5.1.6(@babel/core@7.29.0)(react@19.2.5): dependencies: client-only: 0.0.1 - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@babel/core': 7.29.0 @@ -16435,7 +16545,7 @@ snapshots: tagged-tag@1.0.0: {} - tailwind-csstree@0.1.5: {} + tailwind-csstree@0.3.1: {} tailwind-merge@3.5.0: {} @@ -16468,14 +16578,15 @@ snapshots: minizlib: 3.1.0 yallist: 5.0.0 - terser-webpack-plugin@5.4.0(uglify-js@3.19.3)(webpack@5.105.4(uglify-js@3.19.3)): + terser-webpack-plugin@5.4.0(esbuild@0.27.2)(uglify-js@3.19.3)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)): dependencies: '@jridgewell/trace-mapping': 0.3.31 jest-worker: 27.5.1 schema-utils: 4.3.3 terser: 5.46.1 - webpack: 5.105.4(uglify-js@3.19.3) + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) optionalDependencies: + esbuild: 0.27.2 uglify-js: 3.19.3 terser@5.46.1: @@ -16510,6 +16621,11 @@ snapshots: fdir: 6.5.0(picomatch@4.0.4) picomatch: 4.0.4 + tinyglobby@0.2.16: + dependencies: + fdir: 6.5.0(picomatch@4.0.4) + picomatch: 4.0.4 + tinypool@2.1.0: {} tinyrainbow@2.0.0: {} @@ -16619,7 +16735,7 @@ snapshots: unbash@2.2.0: {} - undici-types@7.18.2: {} + undici-types@7.19.2: {} undici@7.24.0: {} @@ -16707,50 +16823,50 @@ snapshots: dependencies: punycode: 2.3.1 - use-callback-ref@1.3.3(@types/react@19.2.14)(react@19.2.4): + use-callback-ref@1.3.3(@types/react@19.2.14)(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 tslib: 2.8.1 optionalDependencies: '@types/react': 19.2.14 - use-composed-ref@1.4.0(@types/react@19.2.14)(react@19.2.4): + use-composed-ref@1.4.0(@types/react@19.2.14)(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - use-context-selector@2.0.0(react@19.2.4)(scheduler@0.27.0): + use-context-selector@2.0.0(react@19.2.5)(scheduler@0.27.0): dependencies: - react: 19.2.4 + react: 19.2.5 scheduler: 0.27.0 - use-isomorphic-layout-effect@1.2.1(@types/react@19.2.14)(react@19.2.4): + use-isomorphic-layout-effect@1.2.1(@types/react@19.2.14)(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 optionalDependencies: '@types/react': 19.2.14 - use-latest@1.3.0(@types/react@19.2.14)(react@19.2.4): + use-latest@1.3.0(@types/react@19.2.14)(react@19.2.5): dependencies: - react: 19.2.4 - use-isomorphic-layout-effect: 1.2.1(@types/react@19.2.14)(react@19.2.4) + react: 19.2.5 + use-isomorphic-layout-effect: 1.2.1(@types/react@19.2.14)(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 - use-sidecar@1.1.3(@types/react@19.2.14)(react@19.2.4): + use-sidecar@1.1.3(@types/react@19.2.14)(react@19.2.5): dependencies: detect-node-es: 1.1.0 - react: 19.2.4 + react: 19.2.5 tslib: 2.8.1 optionalDependencies: '@types/react': 19.2.14 use-strict@1.0.1: {} - use-sync-external-store@1.6.0(react@19.2.4): + use-sync-external-store@1.6.0(react@19.2.5): dependencies: - react: 19.2.4 + react: 19.2.5 util-arity@1.1.0: {} @@ -16784,21 +16900,21 @@ snapshots: '@types/unist': 3.0.3 vfile-message: 4.0.3 - vinext@0.0.40(@mdx-js/rollup@3.1.1(rollup@4.59.0))(@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)))(@vitejs/plugin-rsc@0.5.22(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4))(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4)(typescript@6.0.2): + vinext@0.0.41(453b4e184a832f83060410b31544dc36): dependencies: - '@unpic/react': 1.0.2(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + '@unpic/react': 1.0.2(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@vercel/og': 0.8.6 - '@vitejs/plugin-react': 6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + '@vitejs/plugin-react': 6.0.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) magic-string: 0.30.21 - react: 19.2.4 - react-dom: 19.2.4(react@19.2.4) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + react: 19.2.5 + react-dom: 19.2.5(react@19.2.5) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plugin-commonjs: 0.10.4 - vite-tsconfig-paths: 6.1.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) + vite-tsconfig-paths: 6.1.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) optionalDependencies: '@mdx-js/rollup': 3.1.1(rollup@4.59.0) - '@vitejs/plugin-rsc': 0.5.22(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.4(react@19.2.4))(react-server-dom-webpack@19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.4) - react-server-dom-webpack: 19.2.4(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(webpack@5.105.4(uglify-js@3.19.3)) + '@vitejs/plugin-rsc': 0.5.24(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.5) + react-server-dom-webpack: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) transitivePeerDependencies: - next - supports-color @@ -16817,9 +16933,9 @@ snapshots: fast-glob: 3.3.3 magic-string: 0.30.21 - vite-plugin-inspect@12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0): + vite-plugin-inspect@12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0): dependencies: - '@vitejs/devtools-kit': 0.1.11(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) + '@vitejs/devtools-kit': 0.1.11(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) ansis: 4.2.0 error-stack-parser-es: 1.0.5 obug: 2.1.1 @@ -16828,43 +16944,43 @@ snapshots: perfect-debounce: 2.1.0 sirv: 3.0.2 unplugin-utils: 0.3.1 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - typescript - ws - vite-plugin-storybook-nextjs@3.2.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(typescript@6.0.2): + vite-plugin-storybook-nextjs@3.2.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2): dependencies: '@next/env': 16.0.0 image-size: 2.0.2 magic-string: 0.30.21 module-alias: 2.3.4 - next: 16.2.2(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)(sass@1.98.0) - storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.4(react@19.2.4))(react@19.2.4) + next: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) + storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vite-tsconfig-paths: 5.1.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite-tsconfig-paths: 5.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) transitivePeerDependencies: - supports-color - typescript - vite-plus@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3): + vite-plus@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3): dependencies: - '@oxc-project/types': 0.123.0 - '@voidzero-dev/vite-plus-core': 0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) - '@voidzero-dev/vite-plus-test': 0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) - oxfmt: 0.43.0 - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) + '@oxc-project/types': 0.124.0 + '@voidzero-dev/vite-plus-core': 0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + '@voidzero-dev/vite-plus-test': 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + oxfmt: 0.45.0 + oxlint: 1.60.0(oxlint-tsgolint@0.20.0) oxlint-tsgolint: 0.20.0 optionalDependencies: - '@voidzero-dev/vite-plus-darwin-arm64': 0.1.16 - '@voidzero-dev/vite-plus-darwin-x64': 0.1.16 - '@voidzero-dev/vite-plus-linux-arm64-gnu': 0.1.16 - '@voidzero-dev/vite-plus-linux-arm64-musl': 0.1.16 - '@voidzero-dev/vite-plus-linux-x64-gnu': 0.1.16 - '@voidzero-dev/vite-plus-linux-x64-musl': 0.1.16 - '@voidzero-dev/vite-plus-win32-arm64-msvc': 0.1.16 - '@voidzero-dev/vite-plus-win32-x64-msvc': 0.1.16 + '@voidzero-dev/vite-plus-darwin-arm64': 0.1.18 + '@voidzero-dev/vite-plus-darwin-x64': 0.1.18 + '@voidzero-dev/vite-plus-linux-arm64-gnu': 0.1.18 + '@voidzero-dev/vite-plus-linux-arm64-musl': 0.1.18 + '@voidzero-dev/vite-plus-linux-x64-gnu': 0.1.18 + '@voidzero-dev/vite-plus-linux-x64-musl': 0.1.18 + '@voidzero-dev/vite-plus-win32-arm64-msvc': 0.1.18 + '@voidzero-dev/vite-plus-win32-x64-msvc': 0.1.18 transitivePeerDependencies: - '@arethetypeswrong/core' - '@edge-runtime/vm' @@ -16873,6 +16989,8 @@ snapshots: - '@tsdown/exe' - '@types/node' - '@vitejs/devtools' + - '@vitest/coverage-istanbul' + - '@vitest/coverage-v8' - '@vitest/ui' - bufferutil - esbuild @@ -16893,100 +17011,66 @@ snapshots: - vite - yaml - vite-plus@0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3): - dependencies: - '@oxc-project/types': 0.123.0 - '@voidzero-dev/vite-plus-core': 0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) - '@voidzero-dev/vite-plus-test': 0.1.16(@types/node@25.5.2)(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3))(yaml@2.8.3) - oxfmt: 0.43.0 - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) - oxlint-tsgolint: 0.20.0 - optionalDependencies: - '@voidzero-dev/vite-plus-darwin-arm64': 0.1.16 - '@voidzero-dev/vite-plus-darwin-x64': 0.1.16 - '@voidzero-dev/vite-plus-linux-arm64-gnu': 0.1.16 - '@voidzero-dev/vite-plus-linux-arm64-musl': 0.1.16 - '@voidzero-dev/vite-plus-linux-x64-gnu': 0.1.16 - '@voidzero-dev/vite-plus-linux-x64-musl': 0.1.16 - '@voidzero-dev/vite-plus-win32-arm64-msvc': 0.1.16 - '@voidzero-dev/vite-plus-win32-x64-msvc': 0.1.16 - transitivePeerDependencies: - - '@arethetypeswrong/core' - - '@edge-runtime/vm' - - '@opentelemetry/api' - - '@tsdown/css' - - '@tsdown/exe' - - '@types/node' - - '@vitejs/devtools' - - '@vitest/ui' - - bufferutil - - esbuild - - happy-dom - - jiti - - jsdom - - less - - publint - - sass - - sass-embedded - - stylus - - sugarss - - terser - - tsx - - typescript - - unplugin-unused - - utf-8-validate - - vite - - yaml - - vite-tsconfig-paths@5.1.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): + vite-tsconfig-paths@5.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): dependencies: debug: 4.4.3(supports-color@8.1.1) globrex: 0.1.2 tsconfck: 3.1.6(typescript@6.0.2) optionalDependencies: - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - supports-color - typescript - vite-tsconfig-paths@6.1.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): + vite-tsconfig-paths@6.1.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): dependencies: debug: 4.4.3(supports-color@8.1.1) globrex: 0.1.2 tsconfck: 3.1.6(typescript@6.0.2) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - supports-color - typescript - vite@8.0.3(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(yaml@2.8.3): - dependencies: - lightningcss: 1.32.0 - picomatch: 4.0.4 - postcss: 8.5.9 - rolldown: 1.0.0-rc.12(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) - tinyglobby: 0.2.15 + vitefu@1.1.3(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)): optionalDependencies: - '@types/node': 25.5.2 - fsevents: 2.3.3 - jiti: 2.6.1 - sass: 1.98.0 - terser: 5.46.1 - tsx: 4.21.0 - yaml: 2.8.3 - transitivePeerDependencies: - - '@emnapi/core' - - '@emnapi/runtime' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vitefu@1.1.3(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)): - optionalDependencies: - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - - vitest-canvas-mock@1.1.4(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)): + vitest-canvas-mock@1.1.4(@voidzero-dev/vite-plus-test@0.1.18): dependencies: cssfontparser: 1.2.1 moo-color: 1.0.3 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitest: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + + vitest@4.1.4(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.9.0): + dependencies: + '@vitest/expect': 4.1.4 + '@vitest/mocker': 4.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + '@vitest/pretty-format': 4.1.4 + '@vitest/runner': 4.1.4 + '@vitest/snapshot': 4.1.4 + '@vitest/spy': 4.1.4 + '@vitest/utils': 4.1.4 + es-module-lexer: 2.0.0 + expect-type: 1.3.0 + magic-string: 0.30.21 + obug: 2.1.1 + pathe: 2.0.3 + picomatch: 4.0.4 + std-env: 4.0.0 + tinybench: 2.9.0 + tinyexec: 1.0.4 + tinyglobby: 0.2.16 + tinyrainbow: 3.1.0 + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + why-is-node-running: 2.3.0 + optionalDependencies: + '@types/node': 25.6.0 + '@vitest/coverage-v8': 4.1.4(vitest@4.1.4) + happy-dom: 20.9.0 + transitivePeerDependencies: + - msw + optional: true void-elements@3.1.0: {} @@ -17034,7 +17118,7 @@ snapshots: webpack-virtual-modules@0.6.2: {} - webpack@5.105.4(uglify-js@3.19.3): + webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3): dependencies: '@types/eslint-scope': 3.7.7 '@types/estree': 1.0.8 @@ -17058,7 +17142,7 @@ snapshots: neo-async: 2.6.2 schema-utils: 4.3.3 tapable: 2.3.2 - terser-webpack-plugin: 5.4.0(uglify-js@3.19.3)(webpack@5.105.4(uglify-js@3.19.3)) + terser-webpack-plugin: 5.4.0(esbuild@0.27.2)(uglify-js@3.19.3)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) watchpack: 2.5.1 webpack-sources: 3.3.4 transitivePeerDependencies: @@ -17078,10 +17162,18 @@ snapshots: dependencies: isexe: 2.0.0 + why-is-node-running@2.3.0: + dependencies: + siginfo: 2.0.0 + stackback: 0.0.2 + optional: true + word-wrap@1.2.5: {} wrappy@1.0.2: {} + ws@8.18.3: {} + ws@8.20.0: {} wsl-utils@0.1.0: @@ -17097,6 +17189,8 @@ snapshots: xmlbuilder@15.1.1: {} + xmlhttprequest-ssl@2.1.2: {} + yallist@3.1.1: {} yallist@5.0.0: {} @@ -17138,23 +17232,23 @@ snapshots: dependencies: tslib: 2.3.0 - zundo@2.3.0(zustand@5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4)(use-sync-external-store@1.6.0(react@19.2.4))): + zundo@2.3.0(zustand@5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5)(use-sync-external-store@1.6.0(react@19.2.5))): dependencies: - zustand: 5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4)(use-sync-external-store@1.6.0(react@19.2.4)) + zustand: 5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5)(use-sync-external-store@1.6.0(react@19.2.5)) - zustand@4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4): + zustand@4.5.7(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5): dependencies: - use-sync-external-store: 1.6.0(react@19.2.4) + use-sync-external-store: 1.6.0(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 immer: 11.1.4 - react: 19.2.4 + react: 19.2.5 - zustand@5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.4)(use-sync-external-store@1.6.0(react@19.2.4)): + zustand@5.0.12(@types/react@19.2.14)(immer@11.1.4)(react@19.2.5)(use-sync-external-store@1.6.0(react@19.2.5)): optionalDependencies: '@types/react': 19.2.14 immer: 11.1.4 - react: 19.2.4 - use-sync-external-store: 1.6.0(react@19.2.4) + react: 19.2.5 + use-sync-external-store: 1.6.0(react@19.2.5) zwitch@2.0.4: {} diff --git a/pnpm-workspace.yaml b/pnpm-workspace.yaml index 6fe023066a..0bd1303fb3 100644 --- a/pnpm-workspace.yaml +++ b/pnpm-workspace.yaml @@ -1,24 +1,207 @@ -catalogMode: prefer -trustPolicy: no-downgrade -trustPolicyExclude: - - chokidar@4.0.3 - - reselect@5.1.1 - - semver@6.3.1 -blockExoticSubdeps: true -strictDepBuilds: true -allowBuilds: - "@parcel/watcher": false - canvas: false - esbuild: false - sharp: false packages: - web - e2e - sdks/nodejs-client - packages/* +allowBuilds: + '@parcel/watcher': false + canvas: false + esbuild: false + sharp: false +blockExoticSubdeps: true +catalog: + '@amplitude/analytics-browser': 2.39.0 + '@amplitude/plugin-session-replay-browser': 1.27.7 + '@antfu/eslint-config': 8.2.0 + '@base-ui/react': 1.4.0 + '@chromatic-com/storybook': 5.1.2 + '@cucumber/cucumber': 12.8.0 + '@date-fns/tz': 1.4.1 + '@egoist/tailwindcss-icons': 1.9.2 + '@emoji-mart/data': 1.2.1 + '@eslint-react/eslint-plugin': 3.0.0 + '@eslint/js': 10.0.1 + '@floating-ui/react': 0.27.19 + '@formatjs/intl-localematcher': 0.8.3 + '@headlessui/react': 2.2.10 + '@heroicons/react': 2.2.0 + '@hono/node-server': 1.19.14 + '@iconify-json/heroicons': 1.2.3 + '@iconify-json/ri': 1.2.10 + '@lexical/code': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/react': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + '@mdx-js/loader': 3.1.1 + '@mdx-js/react': 3.1.1 + '@mdx-js/rollup': 3.1.1 + '@monaco-editor/react': 4.7.0 + '@next/eslint-plugin-next': 16.2.3 + '@next/mdx': 16.2.3 + '@orpc/client': 1.13.14 + '@orpc/contract': 1.13.14 + '@orpc/openapi-client': 1.13.14 + '@orpc/tanstack-query': 1.13.14 + '@playwright/test': 1.59.1 + '@remixicon/react': 4.9.0 + '@rgrove/parse-xml': 4.2.0 + '@sentry/react': 10.48.0 + '@storybook/addon-docs': 10.3.5 + '@storybook/addon-links': 10.3.5 + '@storybook/addon-onboarding': 10.3.5 + '@storybook/addon-themes': 10.3.5 + '@storybook/nextjs-vite': 10.3.5 + '@storybook/react': 10.3.5 + '@streamdown/math': 1.0.2 + '@svgdotjs/svg.js': 3.2.5 + '@t3-oss/env-nextjs': 0.13.11 + '@tailwindcss/postcss': 4.2.2 + '@tailwindcss/typography': 0.5.19 + '@tailwindcss/vite': 4.2.2 + '@tanstack/eslint-plugin-query': 5.99.0 + '@tanstack/react-devtools': 0.10.2 + '@tanstack/react-form': 1.29.0 + '@tanstack/react-form-devtools': 0.2.21 + '@tanstack/react-query': 5.99.0 + '@tanstack/react-query-devtools': 5.99.0 + '@tanstack/react-virtual': 3.13.23 + '@testing-library/dom': 10.4.1 + '@testing-library/jest-dom': 6.9.1 + '@testing-library/react': 16.3.2 + '@testing-library/user-event': 14.6.1 + '@tsdown/css': 0.21.8 + '@tsslint/cli': 3.0.3 + '@tsslint/compat-eslint': 3.0.3 + '@tsslint/config': 3.0.3 + '@types/js-cookie': 3.0.6 + '@types/js-yaml': 4.0.9 + '@types/negotiator': 0.6.4 + '@types/node': 25.6.0 + '@types/postcss-js': 4.1.0 + '@types/qs': 6.15.0 + '@types/react': 19.2.14 + '@types/react-dom': 19.2.3 + '@types/sortablejs': 1.15.9 + '@typescript-eslint/eslint-plugin': 8.58.2 + '@typescript-eslint/parser': 8.58.2 + '@typescript/native-preview': 7.0.0-dev.20260413.1 + '@vitejs/plugin-react': 6.0.1 + '@vitejs/plugin-rsc': 0.5.24 + '@vitest/coverage-v8': 4.1.4 + abcjs: 6.6.2 + agentation: 3.0.2 + ahooks: 3.9.7 + autoprefixer: 10.5.0 + class-variance-authority: 0.7.1 + client-only: 0.0.1 + clsx: 2.1.1 + cmdk: 1.1.1 + code-inspector-plugin: 1.5.1 + copy-to-clipboard: 3.3.3 + cron-parser: 5.5.0 + date-fns: 4.1.0 + dayjs: 1.11.20 + decimal.js: 10.6.0 + dompurify: 3.4.0 + echarts: 6.0.0 + echarts-for-react: 3.0.6 + elkjs: 0.11.1 + embla-carousel-autoplay: 8.6.0 + embla-carousel-react: 8.6.0 + emoji-mart: 5.6.0 + es-toolkit: 1.45.1 + eslint: 10.2.0 + eslint-markdown: 0.6.1 + eslint-plugin-better-tailwindcss: 4.4.1 + eslint-plugin-hyoban: 0.14.1 + eslint-plugin-markdown-preferences: 0.41.1 + eslint-plugin-no-barrel-files: 1.3.1 + eslint-plugin-react-refresh: 0.5.2 + eslint-plugin-sonarjs: 4.0.2 + eslint-plugin-storybook: 10.3.5 + fast-deep-equal: 3.1.3 + happy-dom: 20.9.0 + hast-util-to-jsx-runtime: 2.3.6 + hono: 4.12.14 + html-entities: 2.6.0 + html-to-image: 1.11.13 + i18next: 26.0.4 + i18next-resources-to-backend: 1.2.1 + iconify-import-svg: 0.1.2 + immer: 11.1.4 + jotai: 2.19.1 + js-audio-recorder: 1.0.7 + js-cookie: 3.0.5 + js-yaml: 4.1.1 + jsonschema: 1.5.0 + katex: 0.16.45 + knip: 6.4.1 + ky: 2.0.0 + lamejs: 1.2.1 + lexical: 0.43.0 + loro-crdt: 1.10.8 + mermaid: 11.14.0 + mime: 4.1.0 + mitt: 3.0.1 + negotiator: 1.0.0 + next: 16.2.3 + next-themes: 0.4.6 + nuqs: 2.8.9 + pinyin-pro: 3.28.1 + postcss: 8.5.9 + postcss-js: 5.1.0 + qrcode.react: 4.2.0 + qs: 6.15.1 + react: 19.2.5 + react-18-input-autosize: 3.0.0 + react-dom: 19.2.5 + react-easy-crop: 5.5.7 + react-hotkeys-hook: 5.2.4 + react-i18next: 16.5.8 + react-multi-email: 1.0.25 + react-papaparse: 4.4.0 + react-pdf-highlighter: 8.0.0-rc.0 + react-server-dom-webpack: 19.2.5 + react-sortablejs: 6.1.4 + react-textarea-autosize: 8.5.9 + reactflow: 11.11.4 + remark-breaks: 4.0.0 + remark-directive: 4.0.0 + scheduler: 0.27.0 + sharp: 0.34.5 + shiki: 4.0.2 + socket.io-client: 4.8.3 + sortablejs: 1.15.7 + std-semver: 1.0.8 + storybook: 10.3.5 + streamdown: 2.5.0 + string-ts: 2.3.1 + tailwind-merge: 3.5.0 + tailwindcss: 4.2.2 + tldts: 7.0.28 + tsdown: 0.21.8 + tsx: 4.21.0 + typescript: 6.0.2 + uglify-js: 3.19.3 + unist-util-visit: 5.1.0 + use-context-selector: 2.0.0 + uuid: 13.0.0 + vinext: 0.0.41 + vite: npm:@voidzero-dev/vite-plus-core@0.1.18 + vite-plugin-inspect: 12.0.0-beta.1 + vite-plus: 0.1.18 + vitest: npm:@voidzero-dev/vite-plus-test@0.1.18 + vitest-canvas-mock: 1.1.4 + zod: 4.3.6 + zundo: 2.3.0 + zustand: 5.0.12 +catalogMode: prefer overrides: - "@lexical/code": npm:lexical-code-no-prism@0.41.0 - "@monaco-editor/loader": 1.7.0 + '@lexical/code': npm:lexical-code-no-prism@0.41.0 + '@monaco-editor/loader': 1.7.0 brace-expansion@>=2.0.0 <2.0.3: 2.0.3 canvas: ^3.2.2 dompurify@>=3.1.3 <=3.3.1: 3.3.2 @@ -26,8 +209,8 @@ overrides: flatted@<=3.4.1: 3.4.2 glob@>=10.2.0 <10.5.0: 11.1.0 is-core-module: npm:@nolyfill/is-core-module@^1.0.39 - lodash@>=4.0.0 <= 4.17.23: 4.18.0 lodash-es@>=4.0.0 <= 4.17.23: 4.18.0 + lodash@>=4.0.0 <= 4.17.23: 4.18.0 picomatch@<2.3.2: 2.3.2 picomatch@>=4.0.0 <4.0.4: 4.0.4 rollup@>=4.0.0 <4.59.0: 4.59.0 @@ -40,191 +223,13 @@ overrides: svgo@>=3.0.0 <3.3.3: 3.3.3 tar@<=7.5.10: 7.5.11 undici@>=7.0.0 <7.24.0: 7.24.0 - vite: npm:@voidzero-dev/vite-plus-core@0.1.16 - vitest: npm:@voidzero-dev/vite-plus-test@0.1.16 + vite: npm:@voidzero-dev/vite-plus-core@0.1.18 + vitest: npm:@voidzero-dev/vite-plus-test@0.1.18 yaml@>=2.0.0 <2.8.3: 2.8.3 yauzl@<3.2.1: 3.2.1 -catalog: - "@amplitude/analytics-browser": 2.38.1 - "@amplitude/plugin-session-replay-browser": 1.27.6 - "@antfu/eslint-config": 8.0.0 - "@base-ui/react": 1.3.0 - "@chromatic-com/storybook": 5.1.1 - "@cucumber/cucumber": 12.7.0 - "@egoist/tailwindcss-icons": 1.9.2 - "@emoji-mart/data": 1.2.1 - "@eslint-react/eslint-plugin": 3.0.0 - "@eslint/js": 10.0.1 - "@floating-ui/react": 0.27.19 - "@formatjs/intl-localematcher": 0.8.2 - "@headlessui/react": 2.2.10 - "@heroicons/react": 2.2.0 - "@hono/node-server": 1.19.13 - "@iconify-json/heroicons": 1.2.3 - "@iconify-json/ri": 1.2.10 - "@lexical/code": 0.42.0 - "@lexical/link": 0.42.0 - "@lexical/list": 0.42.0 - "@lexical/react": 0.42.0 - "@lexical/selection": 0.42.0 - "@lexical/text": 0.42.0 - "@lexical/utils": 0.42.0 - "@mdx-js/loader": 3.1.1 - "@mdx-js/react": 3.1.1 - "@mdx-js/rollup": 3.1.1 - "@monaco-editor/react": 4.7.0 - "@next/eslint-plugin-next": 16.2.2 - "@next/mdx": 16.2.2 - "@orpc/client": 1.13.13 - "@orpc/contract": 1.13.13 - "@orpc/openapi-client": 1.13.13 - "@orpc/tanstack-query": 1.13.13 - "@playwright/test": 1.59.1 - "@remixicon/react": 4.9.0 - "@rgrove/parse-xml": 4.2.0 - "@sentry/react": 10.47.0 - "@storybook/addon-docs": 10.3.5 - "@storybook/addon-links": 10.3.5 - "@storybook/addon-onboarding": 10.3.5 - "@storybook/addon-themes": 10.3.5 - "@storybook/nextjs-vite": 10.3.5 - "@storybook/react": 10.3.5 - "@streamdown/math": 1.0.2 - "@svgdotjs/svg.js": 3.2.5 - "@t3-oss/env-nextjs": 0.13.11 - "@tailwindcss/postcss": 4.2.2 - "@tailwindcss/typography": 0.5.19 - "@tailwindcss/vite": 4.2.2 - "@tanstack/eslint-plugin-query": 5.96.2 - "@tanstack/react-devtools": 0.10.2 - "@tanstack/react-form": 1.28.6 - "@tanstack/react-form-devtools": 0.2.20 - "@tanstack/react-query": 5.96.2 - "@tanstack/react-query-devtools": 5.96.2 - "@tanstack/react-virtual": 3.13.23 - "@testing-library/dom": 10.4.1 - "@testing-library/jest-dom": 6.9.1 - "@testing-library/react": 16.3.2 - "@testing-library/user-event": 14.6.1 - "@tsslint/cli": 3.0.2 - "@tsslint/compat-eslint": 3.0.2 - "@tsslint/config": 3.0.2 - "@types/js-cookie": 3.0.6 - "@types/js-yaml": 4.0.9 - "@types/negotiator": 0.6.4 - "@types/node": 25.5.2 - "@types/postcss-js": 4.1.0 - "@types/qs": 6.15.0 - "@types/react": 19.2.14 - "@types/react-dom": 19.2.3 - "@types/sortablejs": 1.15.9 - "@typescript-eslint/eslint-plugin": 8.58.1 - "@typescript-eslint/parser": 8.58.1 - "@typescript/native-preview": 7.0.0-dev.20260407.1 - "@vitejs/plugin-react": 6.0.1 - "@vitejs/plugin-rsc": 0.5.22 - "@vitest/coverage-v8": 4.1.3 - abcjs: 6.6.2 - agentation: 3.0.2 - ahooks: 3.9.7 - autoprefixer: 10.4.27 - class-variance-authority: 0.7.1 - clsx: 2.1.1 - cmdk: 1.1.1 - code-inspector-plugin: 1.5.1 - copy-to-clipboard: 3.3.3 - cron-parser: 5.5.0 - dayjs: 1.11.20 - decimal.js: 10.6.0 - dompurify: 3.3.3 - echarts: 6.0.0 - echarts-for-react: 3.0.6 - elkjs: 0.11.1 - embla-carousel-autoplay: 8.6.0 - embla-carousel-react: 8.6.0 - emoji-mart: 5.6.0 - es-toolkit: 1.45.1 - eslint: 10.2.0 - eslint-markdown: 0.6.0 - eslint-plugin-better-tailwindcss: 4.3.2 - eslint-plugin-hyoban: 0.14.1 - eslint-plugin-markdown-preferences: 0.41.0 - eslint-plugin-no-barrel-files: 1.2.2 - eslint-plugin-react-refresh: 0.5.2 - eslint-plugin-sonarjs: 4.0.2 - eslint-plugin-storybook: 10.3.5 - fast-deep-equal: 3.1.3 - foxact: 0.3.0 - happy-dom: 20.8.9 - hast-util-to-jsx-runtime: 2.3.6 - hono: 4.12.12 - html-entities: 2.6.0 - html-to-image: 1.11.13 - i18next: 26.0.3 - i18next-resources-to-backend: 1.2.1 - iconify-import-svg: 0.1.2 - immer: 11.1.4 - jotai: 2.19.1 - js-audio-recorder: 1.0.7 - js-cookie: 3.0.5 - js-yaml: 4.1.1 - jsonschema: 1.5.0 - katex: 0.16.45 - knip: 6.3.0 - ky: 2.0.0 - lamejs: 1.2.1 - lexical: 0.42.0 - mermaid: 11.14.0 - mime: 4.1.0 - mitt: 3.0.1 - negotiator: 1.0.0 - next: 16.2.2 - next-themes: 0.4.6 - nuqs: 2.8.9 - pinyin-pro: 3.28.0 - postcss: 8.5.9 - postcss-js: 5.1.0 - qrcode.react: 4.2.0 - qs: 6.15.0 - react: 19.2.4 - react-18-input-autosize: 3.0.0 - react-dom: 19.2.4 - react-easy-crop: 5.5.7 - react-hotkeys-hook: 5.2.4 - react-i18next: 17.0.2 - react-multi-email: 1.0.25 - react-papaparse: 4.4.0 - react-pdf-highlighter: 8.0.0-rc.0 - react-server-dom-webpack: 19.2.4 - react-sortablejs: 6.1.4 - react-textarea-autosize: 8.5.9 - reactflow: 11.11.4 - remark-breaks: 4.0.0 - remark-directive: 4.0.0 - scheduler: 0.27.0 - sharp: 0.34.5 - shiki: 4.0.2 - sortablejs: 1.15.7 - std-semver: 1.0.8 - storybook: 10.3.5 - streamdown: 2.5.0 - string-ts: 2.3.1 - tailwind-merge: 3.5.0 - tailwindcss: 4.2.2 - tldts: 7.0.28 - tsdown: 0.21.7 - tsx: 4.21.0 - typescript: 6.0.2 - uglify-js: 3.19.3 - unist-util-visit: 5.1.0 - use-context-selector: 2.0.0 - uuid: 13.0.0 - vinext: 0.0.40 - vite: npm:@voidzero-dev/vite-plus-core@0.1.16 - vite-plugin-inspect: 12.0.0-beta.1 - vite-plus: 0.1.16 - vitest: npm:@voidzero-dev/vite-plus-test@0.1.16 - vitest-canvas-mock: 1.1.4 - zod: 4.3.6 - zundo: 2.3.0 - zustand: 5.0.12 +strictDepBuilds: true +trustPolicy: no-downgrade +trustPolicyExclude: + - chokidar@4.0.3 + - reselect@5.1.1 + - semver@6.3.1 diff --git a/sdks/nodejs-client/package.json b/sdks/nodejs-client/package.json index da9f7353ac..28ebcb89c2 100644 --- a/sdks/nodejs-client/package.json +++ b/sdks/nodejs-client/package.json @@ -48,13 +48,14 @@ "build": "vp pack", "lint": "eslint", "lint:fix": "eslint --fix", - "type-check": "tsc -p tsconfig.json --noEmit", + "type-check": "tsc", "test": "vp test", "test:coverage": "vp test --coverage", "publish:check": "./scripts/publish.sh --dry-run", "publish:npm": "./scripts/publish.sh" }, "devDependencies": { + "@dify/tsconfig": "workspace:*", "@eslint/js": "catalog:", "@types/node": "catalog:", "@typescript-eslint/eslint-plugin": "catalog:", @@ -62,6 +63,7 @@ "@vitest/coverage-v8": "catalog:", "eslint": "catalog:", "typescript": "catalog:", + "vite": "catalog:", "vite-plus": "catalog:", "vitest": "catalog:" } diff --git a/sdks/nodejs-client/src/http/sse.test.ts b/sdks/nodejs-client/src/http/sse.test.ts index 70cd11007d..83cde28de3 100644 --- a/sdks/nodejs-client/src/http/sse.test.ts +++ b/sdks/nodejs-client/src/http/sse.test.ts @@ -14,8 +14,8 @@ describe("sse parsing", () => { events.push(event); } expect(events).toHaveLength(1); - expect(events[0].event).toBe("message"); - expect(events[0].data).toEqual({ answer: "hi" }); + expect(events[0]!.event).toBe("message"); + expect(events[0]!.data).toEqual({ answer: "hi" }); }); it("handles multi-line data payloads", async () => { @@ -24,8 +24,8 @@ describe("sse parsing", () => { for await (const event of parseSseStream(stream)) { events.push(event); } - expect(events[0].raw).toBe("line1\nline2"); - expect(events[0].data).toBe("line1\nline2"); + expect(events[0]!.raw).toBe("line1\nline2"); + expect(events[0]!.data).toBe("line1\nline2"); }); it("ignores comments and flushes the last event without a trailing separator", async () => { diff --git a/sdks/nodejs-client/src/index.test.ts b/sdks/nodejs-client/src/index.test.ts index d194680379..8d56b994c4 100644 --- a/sdks/nodejs-client/src/index.test.ts +++ b/sdks/nodejs-client/src/index.test.ts @@ -99,7 +99,7 @@ describe("File uploads", () => { super(); } - _read() {} + override _read() {} append() {} diff --git a/sdks/nodejs-client/tsconfig.json b/sdks/nodejs-client/tsconfig.json index 46055447be..1e55007ed0 100644 --- a/sdks/nodejs-client/tsconfig.json +++ b/sdks/nodejs-client/tsconfig.json @@ -1,18 +1,14 @@ { + "extends": "@dify/tsconfig/node.json", "compilerOptions": { - "target": "ES2022", - "module": "ESNext", - "moduleResolution": "Bundler", + "lib": ["ES2023", "DOM", "DOM.Iterable"], "rootDir": ".", "outDir": "dist", + "noEmit": false, "declaration": true, "declarationMap": true, "sourceMap": true, - "strict": true, - "esModuleInterop": true, - "forceConsistentCasingInFileNames": true, - "skipLibCheck": true, "types": ["node"] }, - "include": ["src/**/*.ts", "tests/**/*.ts"] + "include": ["src/**/*.ts", "tests/**/*.ts", "vite.config.ts"] } diff --git a/vite.config.ts b/vite.config.ts index a34932a4ef..aebcaf8f73 100644 --- a/vite.config.ts +++ b/vite.config.ts @@ -1,5 +1,7 @@ import { defineConfig } from 'vite-plus' export default defineConfig({ - staged: {}, + staged: { + '*': 'eslint --fix --pass-on-unpruned-suppressions', + }, }) diff --git a/web/.env.example b/web/.env.example index 62d4fa6c56..643aba482e 100644 --- a/web/.env.example +++ b/web/.env.example @@ -14,6 +14,8 @@ NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api # When the frontend and backend run on different subdomains, set NEXT_PUBLIC_COOKIE_DOMAIN=1. NEXT_PUBLIC_COOKIE_DOMAIN= +# WebSocket server URL. +NEXT_PUBLIC_SOCKET_URL=ws://localhost:5001 # Dev-only Hono proxy targets. # The frontend keeps requesting http://localhost:5001 directly, @@ -48,6 +50,9 @@ NEXT_PUBLIC_CSP_WHITELIST= # Default is not allow to embed into iframe to prevent Clickjacking: https://owasp.org/www-community/attacks/Clickjacking NEXT_PUBLIC_ALLOW_EMBED= +# Allow inline style attributes in Markdown rendering (self-hosted opt-in). +NEXT_PUBLIC_ALLOW_INLINE_STYLES=false + # Allow rendering unsafe URLs which have "data:" scheme. NEXT_PUBLIC_ALLOW_UNSAFE_DATA_SCHEME=false diff --git a/web/.gitignore b/web/.gitignore index a4ae324795..9de3dc83f9 100644 --- a/web/.gitignore +++ b/web/.gitignore @@ -64,5 +64,3 @@ public/fallback-*.js .vscode/settings.json .vscode/mcp.json - -.eslintcache diff --git a/web/.storybook/utils/form-story-wrapper.tsx b/web/.storybook/utils/form-story-wrapper.tsx index 90349a0325..7503e9905d 100644 --- a/web/.storybook/utils/form-story-wrapper.tsx +++ b/web/.storybook/utils/form-story-wrapper.tsx @@ -47,7 +47,7 @@ export const FormStoryWrapper = ({ {children(form)}