@@ -64,7 +66,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
- You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
+ You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
@@ -150,7 +152,7 @@ Quickly get Dify running in your environment with this [starter guide](#quick-st
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprise / organizations**
-We provide additional enterprise-centric features. [Schedule a meeting with us](https://cal.com/guchenhe/30min) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs.
+We provide additional enterprise-centric features. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs.
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one-click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
@@ -219,23 +221,6 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
-Or, schedule a meeting directly with a team member:
-
-Point of Contact
-Purpose
-
-Business enquiries & product feedback
-
-Contributions, issues & feature requests
-
## Star history
[](https://star-history.com/#langgenius/dify&Date)
diff --git a/README_AR.md b/README_AR.md
index c91602721e..10d572cc49 100644
--- a/README_AR.md
+++ b/README_AR.md
@@ -4,7 +4,7 @@
Dify Cloud ·
الاستضافة الذاتية ·
التوثيق ·
- استفسارات الشركات
+ استفسار الشركات (للإنجليزية فقط)
@@ -37,6 +37,8 @@
+
+
@@ -56,7 +58,7 @@
**4. خط أنابيب RAG**: قدرات RAG الواسعة التي تغطي كل شيء من استيعاب الوثائق إلى الاسترجاع، مع الدعم الفوري لاستخراج النص من ملفات PDF و PPT وتنسيقات الوثائق الشائعة الأخرى.
-**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DELL·E وStable Diffusion و WolframAlpha.
+**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DALL·E وStable Diffusion و WolframAlpha.
**6. الـ LLMOps**: راقب وتحلل سجلات التطبيق والأداء على مر الزمن. يمكنك تحسين الأوامر والبيانات والنماذج باستمرار استنادًا إلى البيانات الإنتاجية والتعليقات.
@@ -202,23 +204,6 @@ docker compose up -d
* [Discord](https://discord.gg/FngNHpbcY7). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.
* [تويتر](https://twitter.com/dify_ai). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.
-أو، قم بجدولة اجتماع مباشرة مع أحد أعضاء الفريق:
-
-
diff --git a/README_ES.md b/README_ES.md
@@ -69,7 +72,7 @@ Dify es una plataforma de desarrollo de aplicaciones de LLM de código abierto.
**5. Capacidades de agente**:
-Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DELL·E, Difusión Estable y WolframAlpha.
+Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DALL·E, Difusión Estable y WolframAlpha.
**6. LLMOps**:
Supervisa y analiza registros de aplicaciones y rendimiento a lo largo del tiempo. Podrías mejorar continuamente prompts, conjuntos de datos y modelos basados en datos de producción y anotaciones.
@@ -155,7 +158,7 @@ Pon rápidamente Dify en funcionamiento en tu entorno con esta [guía de inicio
Usa nuestra [documentación](https://docs.dify.ai) para más referencias e instrucciones más detalladas.
- **Dify para Empresas / Organizaciones**
-Proporcionamos características adicionales centradas en la empresa. [Programa una reunión con nosotros](https://cal.com/guchenhe/30min) o [envíanos un correo electrónico](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) para discutir las necesidades empresariales.
+Proporcionamos características adicionales centradas en la empresa. [Envíanos un correo electrónico](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) para discutir las necesidades empresariales.
> Para startups y pequeñas empresas que utilizan AWS, echa un vistazo a [Dify Premium en AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) e impleméntalo en tu propio VPC de AWS con un clic. Es una AMI asequible que ofrece la opción de crear aplicaciones con logotipo y marca personalizados.
@@ -227,23 +230,6 @@ Al mismo tiempo, considera apoyar a Dify compartiéndolo en redes sociales y en
* [Discord](https://discord.gg/FngNHpbcY7). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
* [Twitter](https://twitter.com/dify_ai). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
-O, programa una reunión directamente con un miembro del equipo:
-
-Punto de Contacto
-Propósito
-
-Consultas comerciales y retroalimentación del producto
-
-Contribuciones, problemas y solicitudes de características
-
## Historial de Estrellas
[](https://star-history.com/#langgenius/dify&Date)
@@ -255,4 +241,4 @@ Para proteger tu privacidad, evita publicar problemas de seguridad en GitHub. En
## Licencia
-Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.
\ No newline at end of file
+Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.
diff --git a/README_FR.md b/README_FR.md
index 768c9390d8..681d596749 100644
--- a/README_FR.md
+++ b/README_FR.md
@@ -4,7 +4,7 @@
Dify Cloud ·
Auto-hébergement ·
Documentation ·
- Planifier une démo
+ Demande d’entreprise (en anglais seulement)
@@ -29,13 +29,16 @@
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
#
@@ -69,7 +72,7 @@ Dify est une plateforme de développement d'applications LLM open source. Son in
**5. Capacités d'agent**:
- Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DELL·E, Stable Diffusion et WolframAlpha.
+ Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DALL·E, Stable Diffusion et WolframAlpha.
**6. LLMOps**:
Surveillez et analysez les journaux d'application et les performances au fil du temps. Vous pouvez continuellement améliorer les prompts, les ensembles de données et les modèles en fonction des données de production et des annotations.
@@ -155,7 +158,7 @@ Lancez rapidement Dify dans votre environnement avec ce [guide de démarrage](#q
Utilisez notre [documentation](https://docs.dify.ai) pour plus de références et des instructions plus détaillées.
- **Dify pour les entreprises / organisations**
-Nous proposons des fonctionnalités supplémentaires adaptées aux entreprises. [Planifiez une réunion avec nous](https://cal.com/guchenhe/30min) ou [envoyez-nous un e-mail](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) pour discuter des besoins de l'entreprise.
+Nous proposons des fonctionnalités supplémentaires adaptées aux entreprises. [Envoyez-nous un e-mail](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) pour discuter des besoins de l'entreprise.
> Pour les startups et les petites entreprises utilisant AWS, consultez [Dify Premium sur AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) et déployez-le dans votre propre VPC AWS en un clic. C'est une offre AMI abordable avec la possibilité de créer des applications avec un logo et une marque personnalisés.
@@ -225,23 +228,6 @@ Dans le même temps, veuillez envisager de soutenir Dify en le partageant sur le
* [Discord](https://discord.gg/FngNHpbcY7). Meilleur pour: partager vos applications et passer du temps avec la communauté.
* [Twitter](https://twitter.com/dify_ai). Meilleur pour: partager vos applications et passer du temps avec la communauté.
-Ou, planifiez directement une réunion avec un membre de l'équipe:
-
-Point de contact
-Objectif
-
-Demandes commerciales & retours produit
-
-Contributions, problèmes & demandes de fonctionnalités
-
## Historique des étoiles
[](https://star-history.com/#langgenius/dify&Date)
diff --git a/README_JA.md b/README_JA.md
index f4cccd5271..e6a8621e7b 100644
--- a/README_JA.md
+++ b/README_JA.md
@@ -4,7 +4,7 @@
Dify Cloud ·
セルフホスティング ·
ドキュメント ·
- デモの予約
+ 企業のお問い合わせ(英語のみ)
#
@@ -67,7 +70,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
- You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
+ You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
@@ -155,7 +158,7 @@ Quickly get Dify running in your environment with this [starter guide](#quick-st
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for Enterprise / Organizations**
-We provide additional enterprise-centric features. [Schedule a meeting with us](https://cal.com/guchenhe/30min) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs.
+We provide additional enterprise-centric features. [Send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs.
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one-click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
@@ -227,23 +230,6 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
-Or, schedule a meeting directly with a team member:
-
-Point of Contact
-Purpose
-
-Business enquiries & product feedback
-
-Contributions, issues & feature requests
-
## Star History
[](https://star-history.com/#langgenius/dify&Date)
@@ -255,4 +241,4 @@ To protect your privacy, please avoid posting security issues on GitHub. Instead
## License
-This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
\ No newline at end of file
+This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
diff --git a/README_KR.md b/README_KR.md
index bb15fac8ef..a5f3bc68d0 100644
--- a/README_KR.md
+++ b/README_KR.md
@@ -4,7 +4,7 @@
Dify 클라우드 ·
셀프-호스팅 ·
문서 ·
- 기업 문의
+ 기업 문의 (영어만 가능)
@@ -35,7 +35,10 @@
-
+
+
+
+
@@ -63,7 +66,7 @@
문서 수집부터 검색까지 모든 것을 다루며, PDF, PPT 및 기타 일반적인 문서 형식에서 텍스트 추출을 위한 기본 지원이 포함되어 있는 광범위한 RAG 기능을 제공합니다.
**5. 에이전트 기능**:
- LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DELL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
+ LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DALL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
**6. LLMOps**:
시간 경과에 따른 애플리케이션 로그와 성능을 모니터링하고 분석합니다. 생산 데이터와 주석을 기반으로 프롬프트, 데이터세트, 모델을 지속적으로 개선할 수 있습니다.
@@ -148,7 +151,7 @@
추가 참조 및 더 심층적인 지침은 [문서](https://docs.dify.ai)를 사용하세요.
- **기업 / 조직을 위한 Dify**
- 우리는 추가적인 기업 중심 기능을 제공합니다. 당사와 [미팅일정](https://cal.com/guchenhe/30min)을 잡거나 [이메일 보내기](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry)를 통해 기업 요구 사항을 논의하십시오.
+ 우리는 추가적인 기업 중심 기능을 제공합니다. [이메일 보내기](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry)를 통해 기업 요구 사항을 논의하십시오.
> AWS를 사용하는 스타트업 및 중소기업의 경우 [AWS Marketplace에서 Dify Premium](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6)을 확인하고 한 번의 클릭으로 자체 AWS VPC에 배포하십시오. 맞춤형 로고와 브랜딩이 포함된 앱을 생성할 수 있는 옵션이 포함된 저렴한 AMI 제품입니다.
@@ -217,22 +220,6 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
* [디스코드](https://discord.gg/FngNHpbcY7). 애플리케이션 공유 및 커뮤니티와 소통하기에 적합합니다.
* [트위터](https://twitter.com/dify_ai). 애플리케이션 공유 및 커뮤니티와 소통하기에 적합합니다.
-또는 팀원과 직접 미팅을 예약하세요:
-
-연락처
-목적
-
-비즈니스 문의 및 제품 피드백
-
-기여, 이슈 및 기능 요청
-
## Star 히스토리
diff --git a/README_TR.md b/README_TR.md
new file mode 100644
index 0000000000..54b6db3f82
--- /dev/null
+++ b/README_TR.md
@@ -0,0 +1,237 @@
+
+
+
+
+
+Dify, açık kaynaklı bir LLM uygulama geliştirme platformudur. Sezgisel arayüzü, AI iş akışı, RAG pipeline'ı, ajan yetenekleri, model yönetimi, gözlemlenebilirlik özellikleri ve daha fazlasını birleştirerek, prototipten üretime hızlıca geçmenizi sağlar. İşte temel özelliklerin bir listesi:
+
+
+**1. Workflow**:
+Aşağıdaki tüm özellikleri ve daha fazlasını kullanarak, görsel bir kanvas üzerinde güçlü AI iş akışları oluşturun ve test edin.
+
+
+ https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
+
+
+
+**2. Kapsamlı model desteği**:
+Çok sayıda çıkarım sağlayıcısı ve kendi kendine barındırılan çözümlerden yüzlerce özel / açık kaynaklı LLM ile sorunsuz entegrasyon sağlar. GPT, Mistral, Llama3 ve OpenAI API uyumlu tüm modelleri kapsar. Desteklenen model sağlayıcılarının tam listesine [buradan](https://docs.dify.ai/getting-started/readme/model-providers) ulaşabilirsiniz.
+
+
+
+
+
+**3. Prompt IDE**:
+ Komut istemlerini oluşturmak, model performansını karşılaştırmak ve sohbet tabanlı uygulamalara metin-konuşma gibi ek özellikler eklemek için kullanıcı dostu bir arayüz.
+
+**4. RAG Pipeline**:
+ Belge alımından bilgi çekmeye kadar geniş kapsamlı RAG yetenekleri. PDF'ler, PPT'ler ve diğer yaygın belge formatlarından metin çıkarma için hazır destek sunar.
+
+**5. Ajan yetenekleri**:
+ LLM Fonksiyon Çağırma veya ReAct'a dayalı ajanlar tanımlayabilir ve bu ajanlara önceden hazırlanmış veya özel araçlar ekleyebilirsiniz. Dify, AI ajanları için Google Arama, DALL·E, Stable Diffusion ve WolframAlpha gibi 50'den fazla yerleşik araç sağlar.
+
+**6. LLMOps**:
+ Uygulama loglarını ve performans metriklerini zaman içinde izleme ve analiz etme imkanı. Üretim ortamından elde edilen verilere ve kullanıcı geri bildirimlerine dayanarak, prompt'ları, veri setlerini ve modelleri sürekli olarak optimize edebilirsiniz. Bu sayede, AI uygulamanızın performansını ve doğruluğunu sürekli olarak artırabilirsiniz.
+
+**7. Hizmet Olarak Backend**:
+ Dify'ın tüm özellikleri ilgili API'lerle birlikte gelir, böylece Dify'ı kendi iş mantığınıza kolayca entegre edebilirsiniz.
+
+
+## Özellik karşılaştırması
+
+
+
Özellik
+
Dify.AI
+
LangChain
+
Flowise
+
OpenAI Assistants API
+
+
+
Programlama Yaklaşımı
+
API + Uygulama odaklı
+
Python Kodu
+
Uygulama odaklı
+
API odaklı
+
+
+
Desteklenen LLM'ler
+
Zengin Çeşitlilik
+
Zengin Çeşitlilik
+
Zengin Çeşitlilik
+
Yalnızca OpenAI
+
+
+
RAG Motoru
+
✅
+
✅
+
✅
+
✅
+
+
+
Ajan
+
✅
+
✅
+
❌
+
✅
+
+
+
İş Akışı
+
✅
+
❌
+
✅
+
❌
+
+
+
Gözlemlenebilirlik
+
✅
+
✅
+
❌
+
❌
+
+
+
Kurumsal Özellikler (SSO/Erişim kontrolü)
+
✅
+
❌
+
❌
+
❌
+
+
+
Yerel Dağıtım
+
✅
+
✅
+
✅
+
❌
+
+
+
+## Dify'ı Kullanma
+
+- **Cloud**
+Herkesin sıfır kurulumla denemesi için bir [Dify Cloud](https://dify.ai) hizmeti sunuyoruz. Bu hizmet, kendi kendine dağıtılan versiyonun tüm yeteneklerini sağlar ve sandbox planında 200 ücretsiz GPT-4 çağrısı içerir.
+
+- **Dify Topluluk Sürümünü Kendi Sunucunuzda Barındırma**
+Bu [başlangıç kılavuzu](#quick-start) ile Dify'ı kendi ortamınızda hızlıca çalıştırın.
+Daha fazla referans ve detaylı talimatlar için [dokümantasyonumuzu](https://docs.dify.ai) kullanın.
+
+- **Kurumlar / organizasyonlar için Dify**
+Ek kurumsal odaklı özellikler sunuyoruz. Kurumsal ihtiyaçları görüşmek için [bize bir e-posta gönderin](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry).
+ > AWS kullanan startuplar ve küçük işletmeler için, [AWS Marketplace'deki Dify Premium'a](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) göz atın ve tek tıklamayla kendi AWS VPC'nize dağıtın. Bu, özel logo ve marka ile uygulamalar oluşturma seçeneğine sahip uygun fiyatlı bir AMI teklifidir.
+
+## Güncel Kalma
+
+GitHub'da Dify'a yıldız verin ve yeni sürümlerden anında haberdar olun.
+
+
+
+
+
+## Hızlı başlangıç
+> Dify'ı kurmadan önce, makinenizin aşağıdaki minimum sistem gereksinimlerini karşıladığından emin olun:
+>
+>- CPU >= 2 Çekirdek
+>- RAM >= 4GB
+
+
+
+Dify sunucusunu başlatmanın en kolay yolu, [docker-compose.yml](docker/docker-compose.yaml) dosyamızı çalıştırmaktır. Kurulum komutunu çalıştırmadan önce, makinenizde [Docker](https://docs.docker.com/get-docker/) ve [Docker Compose](https://docs.docker.com/compose/install/)'un kurulu olduğundan emin olun:
+
+```bash
+cd docker
+cp .env.example .env
+docker compose up -d
+```
+
+Çalıştırdıktan sonra, tarayıcınızda [http://localhost/install](http://localhost/install) adresinden Dify kontrol paneline erişebilir ve başlangıç ayarları sürecini başlatabilirsiniz.
+
+> Eğer Dify'a katkıda bulunmak veya ek geliştirmeler yapmak isterseniz, [kaynak koddan dağıtım kılavuzumuza](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code) başvurun.
+
+## Sonraki adımlar
+
+Yapılandırmayı özelleştirmeniz gerekiyorsa, lütfen [.env.example](docker/.env.example) dosyamızdaki yorumlara bakın ve `.env` dosyanızdaki ilgili değerleri güncelleyin. Ayrıca, spesifik dağıtım ortamınıza ve gereksinimlerinize bağlı olarak `docker-compose.yaml` dosyasının kendisinde de, imaj sürümlerini, port eşlemelerini veya hacim bağlantılarını değiştirmek gibi ayarlamalar yapmanız gerekebilir. Herhangi bir değişiklik yaptıktan sonra, lütfen `docker-compose up -d` komutunu tekrar çalıştırın. Kullanılabilir tüm ortam değişkenlerinin tam listesini [burada](https://docs.dify.ai/getting-started/install-self-hosted/environments) bulabilirsiniz.
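The `.env` override flow described above can be sketched as a short shell session. Note this is only a sketch: `EXPOSE_NGINX_PORT` is a hypothetical variable name used for illustration, so check `docker/.env.example` for the real keys.

```bash
# Sketch: copy the example env file, override one value, then restart the stack.
# EXPOSE_NGINX_PORT is an assumed key; use the names found in your .env.example.
workdir="$(mktemp -d)"
cd "$workdir"
printf 'EXPOSE_NGINX_PORT=80\n' > .env.example   # stand-in for docker/.env.example
cp .env.example .env
sed -i 's/^EXPOSE_NGINX_PORT=.*/EXPOSE_NGINX_PORT=8080/' .env
cat .env
# after editing, apply the change with: docker compose up -d
```

After any edit like this, re-running `docker compose up -d` recreates only the containers whose configuration changed.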
+
+Yüksek kullanılabilirliğe sahip bir kurulum yapılandırmak isterseniz, Dify'ın Kubernetes üzerine dağıtılmasına olanak tanıyan topluluk katkılı [Helm Charts](https://helm.sh/) ve YAML dosyaları mevcuttur.
+
+- [@LeoQuote tarafından Helm Chart](https://github.com/douban/charts/tree/master/charts/dify)
+- [@BorisPolonsky tarafından Helm Chart](https://github.com/BorisPolonsky/dify-helm)
+- [@Winson-030 tarafından YAML dosyası](https://github.com/Winson-030/dify-kubernetes)
+
+#### Dağıtım için Terraform Kullanımı
+
+##### Azure Global
+[Terraform](https://www.terraform.io/) kullanarak Dify'ı Azure'a tek tıklamayla dağıtın.
+- [@nikawang tarafından Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
+
+## Katkıda Bulunma
+
+Kod katkısında bulunmak isteyenler için [Katkı Kılavuzumuza](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) bakabilirsiniz.
+Aynı zamanda, lütfen Dify'ı sosyal medyada, etkinliklerde ve konferanslarda paylaşarak desteklemeyi düşünün.
+
+> Dify'ı Mandarin veya İngilizce dışındaki dillere çevirmemize yardımcı olacak katkıda bulunanlara ihtiyacımız var. Yardımcı olmakla ilgileniyorsanız, lütfen daha fazla bilgi için [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) dosyasına bakın ve [Discord Topluluk Sunucumuzdaki](https://discord.gg/8Tpq4AcN9c) `global-users` kanalında bize bir yorum bırakın.
+
+**Katkıda Bulunanlar**
+
+
+
+
+
+## Topluluk & iletişim
+
+* [Github Tartışmaları](https://github.com/langgenius/dify/discussions). En uygun: geri bildirim paylaşmak ve soru sormak için.
+* [GitHub Sorunları](https://github.com/langgenius/dify/issues). En uygun: Dify.AI kullanırken karşılaştığınız hatalar ve özellik önerileri için. [Katkı Kılavuzumuza](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) bakın.
+* [Discord](https://discord.gg/FngNHpbcY7). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
+* [Twitter](https://twitter.com/dify_ai). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
+
+## Yıldız geçmişi
+
+[](https://star-history.com/#langgenius/dify&Date)
+
+## Güvenlik açıklaması
+
+Gizliliğinizi korumak için, lütfen güvenlik sorunlarını GitHub'da paylaşmaktan kaçının. Bunun yerine, sorularınızı security@dify.ai adresine gönderin ve size daha detaylı bir cevap vereceğiz.
+
+## Lisans
+
+Bu depo, temel olarak Apache 2.0 lisansı ve birkaç ek kısıtlama içeren [Dify Açık Kaynak Lisansı](LICENSE) altında kullanıma sunulmuştur.
diff --git a/README_VI.md b/README_VI.md
new file mode 100644
index 0000000000..6d4035eceb
--- /dev/null
+++ b/README_VI.md
@@ -0,0 +1,234 @@
+
+
+
+
+
+Dify là một nền tảng phát triển ứng dụng LLM mã nguồn mở. Giao diện trực quan kết hợp quy trình làm việc AI, mô hình RAG, khả năng tác nhân, quản lý mô hình, tính năng quan sát và hơn thế nữa, cho phép bạn nhanh chóng chuyển từ nguyên mẫu sang sản phẩm. Đây là danh sách các tính năng cốt lõi:
+
+
+**1. Quy trình làm việc**:
+ Xây dựng và kiểm tra các quy trình làm việc AI mạnh mẽ trên một canvas trực quan, tận dụng tất cả các tính năng sau đây và hơn thế nữa.
+
+
+ https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
+
+
+
+**2. Hỗ trợ mô hình toàn diện**:
+ Tích hợp liền mạch với hàng trăm mô hình LLM độc quyền / mã nguồn mở từ hàng chục nhà cung cấp suy luận và giải pháp tự lưu trữ, bao gồm GPT, Mistral, Llama3, và bất kỳ mô hình tương thích API OpenAI nào. Danh sách đầy đủ các nhà cung cấp mô hình được hỗ trợ có thể được tìm thấy [tại đây](https://docs.dify.ai/getting-started/readme/model-providers).
+
+
+
+
+**3. IDE Prompt**:
+ Giao diện trực quan để tạo prompt, so sánh hiệu suất mô hình và thêm các tính năng bổ sung như chuyển văn bản thành giọng nói cho một ứng dụng dựa trên trò chuyện.
+
+**4. Mô hình RAG**:
+ Khả năng RAG mở rộng bao gồm mọi thứ từ nhập tài liệu đến truy xuất, với hỗ trợ sẵn có cho việc trích xuất văn bản từ PDF, PPT và các định dạng tài liệu phổ biến khác.
+
+**5. Khả năng tác nhân**:
+ Bạn có thể định nghĩa các tác nhân dựa trên LLM Function Calling hoặc ReAct, và thêm các công cụ được xây dựng sẵn hoặc tùy chỉnh cho tác nhân. Dify cung cấp hơn 50 công cụ tích hợp sẵn cho các tác nhân AI, như Google Search, DALL·E, Stable Diffusion và WolframAlpha.
+
+**6. LLMOps**:
+ Giám sát và phân tích nhật ký và hiệu suất ứng dụng theo thời gian. Bạn có thể liên tục cải thiện prompt, bộ dữ liệu và mô hình dựa trên dữ liệu sản xuất và chú thích.
+
+**7. Backend-as-a-Service**:
+ Tất cả các dịch vụ của Dify đều đi kèm với các API tương ứng, vì vậy bạn có thể dễ dàng tích hợp Dify vào logic kinh doanh của riêng mình.
+
+
+## So sánh tính năng
+
+
+
Tính năng
+
Dify.AI
+
LangChain
+
Flowise
+
OpenAI Assistants API
+
+
+
Phương pháp lập trình
+
Hướng API + Ứng dụng
+
Mã Python
+
Hướng ứng dụng
+
Hướng API
+
+
+
LLMs được hỗ trợ
+
Đa dạng phong phú
+
Đa dạng phong phú
+
Đa dạng phong phú
+
Chỉ OpenAI
+
+
+
RAG Engine
+
✅
+
✅
+
✅
+
✅
+
+
+
Agent
+
✅
+
✅
+
❌
+
✅
+
+
+
Quy trình làm việc
+
✅
+
❌
+
✅
+
❌
+
+
+
Khả năng quan sát
+
✅
+
✅
+
❌
+
❌
+
+
+
Tính năng doanh nghiệp (SSO/Kiểm soát truy cập)
+
✅
+
❌
+
❌
+
❌
+
+
+
Triển khai cục bộ
+
✅
+
✅
+
✅
+
❌
+
+
+
+## Sử dụng Dify
+
+- **Cloud**
+Chúng tôi lưu trữ dịch vụ [Dify Cloud](https://dify.ai) cho bất kỳ ai muốn thử mà không cần cài đặt. Nó cung cấp tất cả các khả năng của phiên bản tự triển khai và bao gồm 200 lượt gọi GPT-4 miễn phí trong gói sandbox.
+
+- **Tự triển khai Dify Community Edition**
+Nhanh chóng chạy Dify trong môi trường của bạn với [hướng dẫn bắt đầu](#quick-start) này.
+Sử dụng [tài liệu](https://docs.dify.ai) của chúng tôi để tham khảo thêm và nhận hướng dẫn chi tiết hơn.
+
+- **Dify cho doanh nghiệp / tổ chức**
+Chúng tôi cung cấp các tính năng bổ sung tập trung vào doanh nghiệp. [Ghi lại câu hỏi của bạn cho chúng tôi thông qua chatbot này](https://udify.app/chat/22L1zSxg6yW1cWQg) hoặc [gửi email cho chúng tôi](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) để thảo luận về nhu cầu doanh nghiệp.
+ > Đối với các công ty khởi nghiệp và doanh nghiệp nhỏ sử dụng AWS, hãy xem [Dify Premium trên AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) và triển khai nó vào AWS VPC của riêng bạn chỉ với một cú nhấp chuột. Đây là một AMI giá cả phải chăng với tùy chọn tạo ứng dụng với logo và thương hiệu tùy chỉnh.
+
+
+## Luôn cập nhật
+
+Yêu thích Dify trên GitHub và được thông báo ngay lập tức về các bản phát hành mới.
+
+
+
+
+
+## Bắt đầu nhanh
+> Trước khi cài đặt Dify, hãy đảm bảo máy của bạn đáp ứng các yêu cầu hệ thống tối thiểu sau:
+>
+>- CPU >= 2 Core
+>- RAM >= 4GB
+
+
+
+Cách dễ nhất để khởi động máy chủ Dify là chạy tệp [docker-compose.yml](docker/docker-compose.yaml) của chúng tôi. Trước khi chạy lệnh cài đặt, hãy đảm bảo rằng [Docker](https://docs.docker.com/get-docker/) và [Docker Compose](https://docs.docker.com/compose/install/) đã được cài đặt trên máy của bạn:
+
+```bash
+cd docker
+cp .env.example .env
+docker compose up -d
+```
+
+Sau khi chạy, bạn có thể truy cập bảng điều khiển Dify trong trình duyệt của bạn tại [http://localhost/install](http://localhost/install) và bắt đầu quá trình khởi tạo.
+
+> Nếu bạn muốn đóng góp cho Dify hoặc phát triển thêm, hãy tham khảo [hướng dẫn triển khai từ mã nguồn](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code) của chúng tôi
+
+## Các bước tiếp theo
+
+Nếu bạn cần tùy chỉnh cấu hình, vui lòng tham khảo các nhận xét trong tệp [.env.example](docker/.env.example) của chúng tôi và cập nhật các giá trị tương ứng trong tệp `.env` của bạn. Ngoài ra, bạn có thể cần điều chỉnh tệp `docker-compose.yaml`, chẳng hạn như thay đổi phiên bản hình ảnh, ánh xạ cổng hoặc gắn kết khối lượng, dựa trên môi trường triển khai cụ thể và yêu cầu của bạn. Sau khi thực hiện bất kỳ thay đổi nào, vui lòng chạy lại `docker-compose up -d`. Bạn có thể tìm thấy danh sách đầy đủ các biến môi trường có sẵn [tại đây](https://docs.dify.ai/getting-started/install-self-hosted/environments).
+
+Nếu bạn muốn cấu hình một cài đặt có độ sẵn sàng cao, có các [Helm Charts](https://helm.sh/) và tệp YAML do cộng đồng đóng góp cho phép Dify được triển khai trên Kubernetes.
+
+- [Helm Chart bởi @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
+- [Helm Chart bởi @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
+- [Tệp YAML bởi @Winson-030](https://github.com/Winson-030/dify-kubernetes)
+
+#### Sử dụng Terraform để Triển khai
+
+##### Azure Global
+Triển khai Dify lên Azure chỉ với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/).
+- [Azure Terraform bởi @nikawang](https://github.com/nikawang/dify-azure-terraform)
+
+## Đóng góp
+
+Đối với những người muốn đóng góp mã, xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.
+Đồng thời, vui lòng xem xét hỗ trợ Dify bằng cách chia sẻ nó trên mạng xã hội và tại các sự kiện và hội nghị.
+
+
+> Chúng tôi đang tìm kiếm người đóng góp để giúp dịch Dify sang các ngôn ngữ khác ngoài tiếng Trung hoặc tiếng Anh. Nếu bạn quan tâm đến việc giúp đỡ, vui lòng xem [README i18n](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) để biết thêm thông tin và để lại bình luận cho chúng tôi trong kênh `global-users` của [Máy chủ Cộng đồng Discord](https://discord.gg/8Tpq4AcN9c) của chúng tôi.
+
+**Người đóng góp**
+
+
+
+
+
+## Cộng đồng & liên hệ
+
+* [Thảo luận GitHub](https://github.com/langgenius/dify/discussions). Tốt nhất cho: chia sẻ phản hồi và đặt câu hỏi.
+* [Vấn đề GitHub](https://github.com/langgenius/dify/issues). Tốt nhất cho: lỗi bạn gặp phải khi sử dụng Dify.AI và đề xuất tính năng. Xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.
+* [Discord](https://discord.gg/FngNHpbcY7). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
+* [Twitter](https://twitter.com/dify_ai). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
+
+## Lịch sử Yêu thích
+
+[](https://star-history.com/#langgenius/dify&Date)
+
+## Tiết lộ bảo mật
+
+Để bảo vệ quyền riêng tư của bạn, vui lòng tránh đăng các vấn đề bảo mật trên GitHub. Thay vào đó, hãy gửi câu hỏi của bạn đến security@dify.ai và chúng tôi sẽ cung cấp cho bạn câu trả lời chi tiết hơn.
+
+## Giấy phép
+
+Kho lưu trữ này có sẵn theo [Giấy phép Mã nguồn Mở Dify](LICENSE), về cơ bản là Apache 2.0 với một vài hạn chế bổ sung.
\ No newline at end of file
diff --git a/api/Dockerfile b/api/Dockerfile
index 55776f80e1..06a6f43631 100644
--- a/api/Dockerfile
+++ b/api/Dockerfile
@@ -12,6 +12,7 @@ ENV POETRY_CACHE_DIR=/tmp/poetry_cache
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
+ENV POETRY_REQUESTS_TIMEOUT=15
FROM base AS packages
@@ -41,8 +42,12 @@ ENV TZ=UTC
WORKDIR /app/api
RUN apt-get update \
- && apt-get install -y --no-install-recommends curl wget vim nodejs ffmpeg libgmp-dev libmpfr-dev libmpc-dev \
- && apt-get autoremove \
+ && apt-get install -y --no-install-recommends curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
+ && echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list \
+ && apt-get update \
+ # For Security
+ && apt-get install -y --no-install-recommends zlib1g=1:1.3.dfsg+really1.3.1-1 expat=2.6.2-1 libldap-2.5-0=2.5.18+dfsg-2 perl=5.38.2-5 libsqlite3-0=3.46.0-1 \
+ && apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
# Copy Python environment and packages
@@ -50,6 +55,9 @@ ENV VIRTUAL_ENV=/app/api/.venv
COPY --from=packages ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
+# Download nltk data
+RUN python -c "import nltk; nltk.download('punkt')"
+
# Copy source code
COPY . /app/api/
diff --git a/api/configs/app_config.py b/api/configs/app_config.py
index a5a4fc788d..b277760edd 100644
--- a/api/configs/app_config.py
+++ b/api/configs/app_config.py
@@ -12,19 +12,14 @@ from configs.packaging import PackagingInfo
class DifyConfig(
# Packaging info
PackagingInfo,
-
# Deployment configs
DeploymentConfig,
-
# Feature configs
FeatureConfig,
-
# Middleware configs
MiddlewareConfig,
-
# Extra service configs
ExtraServiceConfig,
-
# Enterprise feature configs
# **Before using, please contact business@dify.ai by email to inquire about licensing matters.**
EnterpriseFeatureConfig,
@@ -36,7 +31,6 @@ class DifyConfig(
env_file='.env',
env_file_encoding='utf-8',
frozen=True,
-
# ignore extra attributes
extra='ignore',
)
@@ -67,3 +61,5 @@ class DifyConfig(
SSRF_PROXY_HTTPS_URL: str | None = None
MODERATION_BUFFER_SIZE: int = Field(default=300, description='The buffer size for moderation.')
+
+ MAX_VARIABLE_SIZE: int = Field(default=5 * 1024, description='The maximum size of a variable. default is 5KB.')
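The new `MAX_VARIABLE_SIZE` setting caps how large a single workflow variable may grow (5 KB by default). A minimal sketch of how such a cap could be enforced; the `check_variable_size` helper is hypothetical, not Dify's actual API:

```python
import json

MAX_VARIABLE_SIZE = 5 * 1024  # 5 KB, matching the new config default above

def check_variable_size(value) -> None:
    # Hypothetical guard: reject variables whose UTF-8 serialized
    # form exceeds the configured cap.
    encoded = json.dumps(value, ensure_ascii=False).encode('utf-8')
    if len(encoded) > MAX_VARIABLE_SIZE:
        raise ValueError(f'variable exceeds {MAX_VARIABLE_SIZE} bytes')
```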
diff --git a/api/configs/packaging/__init__.py b/api/configs/packaging/__init__.py
index 13c55ca425..1104e298b1 100644
--- a/api/configs/packaging/__init__.py
+++ b/api/configs/packaging/__init__.py
@@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description='Dify version',
- default='0.6.15',
+ default='0.6.16',
)
COMMIT_SHA: str = Field(
diff --git a/api/constants/__init__.py b/api/constants/__init__.py
index 08a2786994..e374c04316 100644
--- a/api/constants/__init__.py
+++ b/api/constants/__init__.py
@@ -1,2 +1 @@
-# TODO: Update all string in code to use this constant
-HIDDEN_VALUE = '[__HIDDEN__]'
\ No newline at end of file
+HIDDEN_VALUE = '[__HIDDEN__]'
diff --git a/api/constants/languages.py b/api/constants/languages.py
index efc668d4ee..38e49e0d1e 100644
--- a/api/constants/languages.py
+++ b/api/constants/languages.py
@@ -15,6 +15,8 @@ language_timezone_mapping = {
'ro-RO': 'Europe/Bucharest',
'pl-PL': 'Europe/Warsaw',
'hi-IN': 'Asia/Kolkata',
+ 'tr-TR': 'Europe/Istanbul',
+ 'fa-IR': 'Asia/Tehran',
}
languages = list(language_timezone_mapping.keys())
diff --git a/api/controllers/console/__init__.py b/api/controllers/console/__init__.py
index bef40bea7e..b2b9d8d496 100644
--- a/api/controllers/console/__init__.py
+++ b/api/controllers/console/__init__.py
@@ -17,6 +17,7 @@ from .app import (
audio,
completion,
conversation,
+ conversation_variables,
generator,
message,
model_config,
diff --git a/api/controllers/console/app/annotation.py b/api/controllers/console/app/annotation.py
index 1ac8e60dcd..bc15919a99 100644
--- a/api/controllers/console/app/annotation.py
+++ b/api/controllers/console/app/annotation.py
@@ -23,8 +23,7 @@ class AnnotationReplyActionApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def post(self, app_id, action):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -47,8 +46,7 @@ class AppAnnotationSettingDetailApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -61,8 +59,7 @@ class AppAnnotationSettingUpdateApi(Resource):
@login_required
@account_initialization_required
def post(self, app_id, annotation_setting_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -82,8 +79,7 @@ class AnnotationReplyActionStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def get(self, app_id, job_id, action):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
job_id = str(job_id)
@@ -110,8 +106,7 @@ class AnnotationListApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
page = request.args.get('page', default=1, type=int)
@@ -135,8 +130,7 @@ class AnnotationExportApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -154,8 +148,7 @@ class AnnotationCreateApi(Resource):
@cloud_edition_billing_resource_check('annotation')
@marshal_with(annotation_fields)
def post(self, app_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -174,8 +167,7 @@ class AnnotationUpdateDeleteApi(Resource):
@cloud_edition_billing_resource_check('annotation')
@marshal_with(annotation_fields)
def post(self, app_id, annotation_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -191,8 +183,7 @@ class AnnotationUpdateDeleteApi(Resource):
@login_required
@account_initialization_required
def delete(self, app_id, annotation_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -207,8 +198,7 @@ class AnnotationBatchImportApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def post(self, app_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
app_id = str(app_id)
@@ -232,8 +222,7 @@ class AnnotationBatchImportStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def get(self, app_id, job_id):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
job_id = str(job_id)
@@ -259,8 +248,7 @@ class AnnotationHitHistoryListApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id, annotation_id):
- # The role of the current user in the table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
page = request.args.get('page', default=1, type=int)
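These hunks relax the guard on every annotation endpoint from `is_admin_or_owner` to `is_editor`. A hedged sketch of the relationship between the two checks (the role names and helper functions here are assumptions; see Dify's account model for the real definitions):

```python
from enum import Enum

class TenantAccountRole(str, Enum):
    OWNER = 'owner'
    ADMIN = 'admin'
    EDITOR = 'editor'
    NORMAL = 'normal'

def is_admin_or_owner(role: TenantAccountRole) -> bool:
    # The old, stricter guard: only tenant admins and owners pass.
    return role in {TenantAccountRole.OWNER, TenantAccountRole.ADMIN}

def is_editor(role: TenantAccountRole) -> bool:
    # The new guard is a superset of the old one: editors pass too,
    # so this change widens access rather than restricting it.
    return is_admin_or_owner(role) or role == TenantAccountRole.EDITOR
```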
diff --git a/api/controllers/console/app/conversation.py b/api/controllers/console/app/conversation.py
index 96cd9a6ea1..844788a9e3 100644
--- a/api/controllers/console/app/conversation.py
+++ b/api/controllers/console/app/conversation.py
@@ -143,7 +143,7 @@ class ChatConversationApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_with_summary_pagination_fields)
def get(self, app_model):
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('keyword', type=str, location='args')
@@ -245,7 +245,7 @@ class ChatConversationDetailApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_detail_fields)
def get(self, app_model, conversation_id):
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
conversation_id = str(conversation_id)
diff --git a/api/controllers/console/app/conversation_variables.py b/api/controllers/console/app/conversation_variables.py
new file mode 100644
index 0000000000..aa0722ea35
--- /dev/null
+++ b/api/controllers/console/app/conversation_variables.py
@@ -0,0 +1,61 @@
+from flask_restful import Resource, marshal_with, reqparse
+from sqlalchemy import select
+from sqlalchemy.orm import Session
+
+from controllers.console import api
+from controllers.console.app.wraps import get_app_model
+from controllers.console.setup import setup_required
+from controllers.console.wraps import account_initialization_required
+from extensions.ext_database import db
+from fields.conversation_variable_fields import paginated_conversation_variable_fields
+from libs.login import login_required
+from models import ConversationVariable
+from models.model import AppMode
+
+
+class ConversationVariablesApi(Resource):
+ @setup_required
+ @login_required
+ @account_initialization_required
+ @get_app_model(mode=AppMode.ADVANCED_CHAT)
+ @marshal_with(paginated_conversation_variable_fields)
+ def get(self, app_model):
+ parser = reqparse.RequestParser()
+ parser.add_argument('conversation_id', type=str, location='args')
+ args = parser.parse_args()
+
+ stmt = (
+ select(ConversationVariable)
+ .where(ConversationVariable.app_id == app_model.id)
+ .order_by(ConversationVariable.created_at)
+ )
+ if args['conversation_id']:
+ stmt = stmt.where(ConversationVariable.conversation_id == args['conversation_id'])
+ else:
+ raise ValueError('conversation_id is required')
+
+ # NOTE: This is a temporary solution to avoid performance issues.
+ page = 1
+ page_size = 100
+ stmt = stmt.limit(page_size).offset((page - 1) * page_size)
+
+ with Session(db.engine) as session:
+ rows = session.scalars(stmt).all()
+
+ return {
+ 'page': page,
+ 'limit': page_size,
+ 'total': len(rows),
+ 'has_more': False,
+ 'data': [
+ {
+ 'created_at': row.created_at,
+ 'updated_at': row.updated_at,
+ **row.to_variable().model_dump(),
+ }
+ for row in rows
+ ],
+ }
+
+
+api.add_resource(ConversationVariablesApi, '/apps/<uuid:app_id>/conversation-variables')
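The handler's NOTE marks its pagination as a stopgap: it always returns page 1 with at most 100 rows and `has_more` fixed to `False`. The envelope it builds can be sketched as a pure function (`paginate_single_page` is illustrative, not Dify code):

```python
def paginate_single_page(rows, page_size=100):
    # Sketch of the handler's temporary envelope: always page 1,
    # at most `page_size` rows, `has_more` hard-coded to False.
    rows = list(rows)[:page_size]
    return {
        'page': 1,
        'limit': page_size,
        'total': len(rows),
        'has_more': False,
        'data': rows,
    }
```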
diff --git a/api/controllers/console/app/message.py b/api/controllers/console/app/message.py
index 636c071795..056415f19a 100644
--- a/api/controllers/console/app/message.py
+++ b/api/controllers/console/app/message.py
@@ -149,8 +149,7 @@ class MessageAnnotationApi(Resource):
@get_app_model
@marshal_with(annotation_fields)
def post(self, app_model):
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
diff --git a/api/controllers/console/app/workflow.py b/api/controllers/console/app/workflow.py
index 6c2d4c6c9f..a3820481f9 100644
--- a/api/controllers/console/app/workflow.py
+++ b/api/controllers/console/app/workflow.py
@@ -74,6 +74,7 @@ class DraftWorkflowApi(Resource):
parser.add_argument('hash', type=str, required=False, location='json')
# TODO: set this to required=True after frontend is updated
parser.add_argument('environment_variables', type=list, required=False, location='json')
+ parser.add_argument('conversation_variables', type=list, required=False, location='json')
args = parser.parse_args()
elif 'text/plain' in content_type:
try:
@@ -88,7 +89,8 @@ class DraftWorkflowApi(Resource):
'graph': data.get('graph'),
'features': data.get('features'),
'hash': data.get('hash'),
- 'environment_variables': data.get('environment_variables')
+ 'environment_variables': data.get('environment_variables'),
+ 'conversation_variables': data.get('conversation_variables'),
}
except json.JSONDecodeError:
return {'message': 'Invalid JSON data'}, 400
@@ -100,6 +102,8 @@ class DraftWorkflowApi(Resource):
try:
environment_variables_list = args.get('environment_variables') or []
environment_variables = [factory.build_variable_from_mapping(obj) for obj in environment_variables_list]
+ conversation_variables_list = args.get('conversation_variables') or []
+ conversation_variables = [factory.build_variable_from_mapping(obj) for obj in conversation_variables_list]
workflow = workflow_service.sync_draft_workflow(
app_model=app_model,
graph=args['graph'],
@@ -107,6 +111,7 @@ class DraftWorkflowApi(Resource):
unique_hash=args.get('hash'),
account=current_user,
environment_variables=environment_variables,
+ conversation_variables=conversation_variables,
)
except WorkflowHashNotEqualError:
raise DraftWorkflowNotSync()
diff --git a/api/controllers/console/auth/data_source_oauth.py b/api/controllers/console/auth/data_source_oauth.py
index 6268347244..45cfa9d7eb 100644
--- a/api/controllers/console/auth/data_source_oauth.py
+++ b/api/controllers/console/auth/data_source_oauth.py
@@ -17,8 +17,6 @@ from ..wraps import account_initialization_required
def get_oauth_providers():
with current_app.app_context():
- if not dify_config.NOTION_CLIENT_ID or not dify_config.NOTION_CLIENT_SECRET:
- return {}
notion_oauth = NotionOAuth(client_id=dify_config.NOTION_CLIENT_ID,
client_secret=dify_config.NOTION_CLIENT_SECRET,
redirect_uri=dify_config.CONSOLE_API_URL + '/console/api/oauth/data-source/callback/notion')
diff --git a/api/controllers/console/datasets/datasets.py b/api/controllers/console/datasets/datasets.py
index c446f523b6..3e98843280 100644
--- a/api/controllers/console/datasets/datasets.py
+++ b/api/controllers/console/datasets/datasets.py
@@ -189,8 +189,6 @@ class DatasetApi(Resource):
dataset = DatasetService.get_dataset(dataset_id_str)
if dataset is None:
raise NotFound("Dataset not found.")
- # check user's model setting
- DatasetService.check_dataset_model_setting(dataset)
parser = reqparse.RequestParser()
parser.add_argument('name', nullable=False,
@@ -215,6 +213,13 @@ class DatasetApi(Resource):
args = parser.parse_args()
data = request.get_json()
+ # check embedding model setting
+ if data.get('indexing_technique') == 'high_quality':
+ DatasetService.check_embedding_model_setting(dataset.tenant_id,
+ data.get('embedding_model_provider'),
+ data.get('embedding_model')
+ )
+
# The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
DatasetPermissionService.check_permission(
current_user, dataset, data.get('permission'), data.get('partial_member_list')
@@ -233,7 +238,8 @@ class DatasetApi(Resource):
DatasetPermissionService.update_partial_member_list(
tenant_id, dataset_id_str, data.get('partial_member_list')
)
- else:
+ # clear partial member list when permission is only_me or all_team_members
+ elif data.get('permission') == 'only_me' or data.get('permission') == 'all_team_members':
DatasetPermissionService.clear_partial_member_list(dataset_id_str)
partial_member_list = DatasetPermissionService.get_dataset_partial_member_list(dataset_id_str)
diff --git a/api/controllers/console/datasets/datasets_segments.py b/api/controllers/console/datasets/datasets_segments.py
index 3dcade6152..a4210d5a0c 100644
--- a/api/controllers/console/datasets/datasets_segments.py
+++ b/api/controllers/console/datasets/datasets_segments.py
@@ -223,8 +223,7 @@ class DatasetDocumentSegmentAddApi(Resource):
document = DocumentService.get_document(dataset_id, document_id)
if not document:
raise NotFound('Document not found.')
- # The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
# check embedding model setting
if dataset.indexing_technique == 'high_quality':
@@ -347,7 +346,7 @@ class DatasetDocumentSegmentUpdateApi(Resource):
if not segment:
raise NotFound('Segment not found.')
# The role of the current user in the ta table must be admin or owner
- if not current_user.is_admin_or_owner:
+ if not current_user.is_editor:
raise Forbidden()
try:
DatasetService.check_dataset_permission(dataset, current_user)
diff --git a/api/controllers/console/extension.py b/api/controllers/console/extension.py
index fa73c44c22..fe73bcb985 100644
--- a/api/controllers/console/extension.py
+++ b/api/controllers/console/extension.py
@@ -1,6 +1,7 @@
from flask_login import current_user
from flask_restful import Resource, marshal_with, reqparse
+from constants import HIDDEN_VALUE
from controllers.console import api
from controllers.console.setup import setup_required
from controllers.console.wraps import account_initialization_required
@@ -89,7 +90,7 @@ class APIBasedExtensionDetailAPI(Resource):
extension_data_from_db.name = args['name']
extension_data_from_db.api_endpoint = args['api_endpoint']
- if args['api_key'] != '[__HIDDEN__]':
+ if args['api_key'] != HIDDEN_VALUE:
extension_data_from_db.api_key = args['api_key']
return APIBasedExtensionService.save(extension_data_from_db)
diff --git a/api/controllers/inner_api/wraps.py b/api/controllers/inner_api/wraps.py
index 2c3c870bce..5c37f5276f 100644
--- a/api/controllers/inner_api/wraps.py
+++ b/api/controllers/inner_api/wraps.py
@@ -19,7 +19,7 @@ def inner_api_only(view):
# get header 'X-Inner-Api-Key'
inner_api_key = request.headers.get('X-Inner-Api-Key')
if not inner_api_key or inner_api_key != dify_config.INNER_API_KEY:
- abort(404)
+ abort(401)
return view(*args, **kwargs)
diff --git a/api/controllers/service_api/app/conversation.py b/api/controllers/service_api/app/conversation.py
index 02158f8b56..44bda8e771 100644
--- a/api/controllers/service_api/app/conversation.py
+++ b/api/controllers/service_api/app/conversation.py
@@ -53,7 +53,7 @@ class ConversationDetailApi(Resource):
ConversationService.delete(app_model, conversation_id, end_user)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
- return {"result": "success"}, 204
+ return {'result': 'success'}, 200
class ConversationRenameApi(Resource):
diff --git a/api/controllers/service_api/app/message.py b/api/controllers/service_api/app/message.py
index c8b44cfa38..875870e667 100644
--- a/api/controllers/service_api/app/message.py
+++ b/api/controllers/service_api/app/message.py
@@ -131,7 +131,7 @@ class MessageSuggestedApi(Resource):
except services.errors.message.MessageNotExistsError:
raise NotFound("Message Not Exists.")
except SuggestedQuestionsAfterAnswerDisabledError:
- raise BadRequest("Message Not Exists.")
+ raise BadRequest("Suggested Questions Is Disabled.")
except Exception:
logging.exception("internal server error.")
raise InternalServerError()
diff --git a/api/core/agent/cot_agent_runner.py b/api/core/agent/cot_agent_runner.py
index 9bd8f37d85..06492bb12f 100644
--- a/api/core/agent/cot_agent_runner.py
+++ b/api/core/agent/cot_agent_runner.py
@@ -79,6 +79,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
+ llm_usage.total_price += usage.total_price
model_instance = self.model_instance
diff --git a/api/core/agent/fc_agent_runner.py b/api/core/agent/fc_agent_runner.py
index 7019b5e39f..3ee6e47742 100644
--- a/api/core/agent/fc_agent_runner.py
+++ b/api/core/agent/fc_agent_runner.py
@@ -62,6 +62,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
+ llm_usage.total_price += usage.total_price
model_instance = self.model_instance
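Both agent runners gain the same one-line fix: `total_price` was never added into the running usage totals, so multi-step agent runs under-reported cost even though the per-part prices were summed. A self-contained sketch of the accumulation (the `LLMUsage` shape here is a simplified stand-in for Dify's entity):

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class LLMUsage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    prompt_price: Decimal = Decimal(0)
    completion_price: Decimal = Decimal(0)
    total_price: Decimal = Decimal(0)

def accumulate(total: LLMUsage, step: LLMUsage) -> None:
    total.prompt_tokens += step.prompt_tokens
    total.completion_tokens += step.completion_tokens
    total.prompt_price += step.prompt_price
    total.completion_price += step.completion_price
    # The line this patch adds: without it, total_price stayed zero
    # across iterations while the component prices were accumulated.
    total.total_price += step.total_price
```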
diff --git a/api/core/app/app_config/easy_ui_based_app/dataset/manager.py b/api/core/app/app_config/easy_ui_based_app/dataset/manager.py
index 13da5514d1..ec17db5f06 100644
--- a/api/core/app/app_config/easy_ui_based_app/dataset/manager.py
+++ b/api/core/app/app_config/easy_ui_based_app/dataset/manager.py
@@ -91,7 +91,8 @@ class DatasetConfigManager:
top_k=dataset_configs.get('top_k', 4),
score_threshold=dataset_configs.get('score_threshold'),
reranking_model=dataset_configs.get('reranking_model'),
- weights=dataset_configs.get('weights')
+ weights=dataset_configs.get('weights'),
+ reranking_enabled=dataset_configs.get('reranking_enabled', True),
)
)
diff --git a/api/core/app/app_config/entities.py b/api/core/app/app_config/entities.py
index 9133a35c08..05a42a898e 100644
--- a/api/core/app/app_config/entities.py
+++ b/api/core/app/app_config/entities.py
@@ -3,8 +3,9 @@ from typing import Any, Optional
from pydantic import BaseModel
+from core.file.file_obj import FileExtraConfig
from core.model_runtime.entities.message_entities import PromptMessageRole
-from models.model import AppMode
+from models import AppMode
class ModelConfigEntity(BaseModel):
@@ -158,10 +159,11 @@ class DatasetRetrieveConfigEntity(BaseModel):
retrieve_strategy: RetrieveStrategy
top_k: Optional[int] = None
- score_threshold: Optional[float] = None
+ score_threshold: Optional[float] = .0
rerank_mode: Optional[str] = 'reranking_model'
reranking_model: Optional[dict] = None
weights: Optional[dict] = None
+ reranking_enabled: Optional[bool] = True
@@ -199,11 +201,6 @@ class TracingConfigEntity(BaseModel):
tracing_provider: str
-class FileExtraConfig(BaseModel):
- """
- File Upload Entity.
- """
- image_config: Optional[dict[str, Any]] = None
class AppAdditionalFeatures(BaseModel):
diff --git a/api/core/app/app_config/features/file_upload/manager.py b/api/core/app/app_config/features/file_upload/manager.py
index 86799fb1ab..3da3c2eddb 100644
--- a/api/core/app/app_config/features/file_upload/manager.py
+++ b/api/core/app/app_config/features/file_upload/manager.py
@@ -1,7 +1,7 @@
from collections.abc import Mapping
from typing import Any, Optional
-from core.app.app_config.entities import FileExtraConfig
+from core.file.file_obj import FileExtraConfig
class FileUploadConfigManager:
diff --git a/api/core/app/apps/advanced_chat/app_generator.py b/api/core/app/apps/advanced_chat/app_generator.py
index 58e6248d12..bc2032c2a1 100644
--- a/api/core/app/apps/advanced_chat/app_generator.py
+++ b/api/core/app/apps/advanced_chat/app_generator.py
@@ -89,7 +89,8 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
)
# get tracing instance
- trace_manager = TraceQueueManager(app_id=app_model.id)
+ user_id = user.id if isinstance(user, Account) else user.session_id
+ trace_manager = TraceQueueManager(app_model.id, user_id)
if invoke_from == InvokeFrom.DEBUGGER:
# always enable retriever resource in debugger mode
@@ -112,7 +113,6 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
return self._generate(
- app_model=app_model,
workflow=workflow,
user=user,
invoke_from=invoke_from,
@@ -121,7 +121,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
stream=stream
)
- def _generate(self, app_model: App,
+ def _generate(self, *,
workflow: Workflow,
user: Union[Account, EndUser],
invoke_from: InvokeFrom,
diff --git a/api/core/app/apps/advanced_chat/app_generator_tts_publisher.py b/api/core/app/apps/advanced_chat/app_generator_tts_publisher.py
index 8325994608..0caff4a2e3 100644
--- a/api/core/app/apps/advanced_chat/app_generator_tts_publisher.py
+++ b/api/core/app/apps/advanced_chat/app_generator_tts_publisher.py
@@ -5,7 +5,12 @@ import queue
import re
import threading
-from core.app.entities.queue_entities import QueueAgentMessageEvent, QueueLLMChunkEvent, QueueTextChunkEvent
+from core.app.entities.queue_entities import (
+ QueueAgentMessageEvent,
+ QueueLLMChunkEvent,
+ QueueNodeSucceededEvent,
+ QueueTextChunkEvent,
+)
from core.model_manager import ModelManager
from core.model_runtime.entities.model_entities import ModelType
@@ -88,6 +93,8 @@ class AppGeneratorTTSPublisher:
self.msg_text += message.event.chunk.delta.message.content
elif isinstance(message.event, QueueTextChunkEvent):
self.msg_text += message.event.text
+ elif isinstance(message.event, QueueNodeSucceededEvent):
+ self.msg_text += message.event.outputs.get('output', '')
self.last_message = message
sentence_arr, text_tmp = self._extract_sentence(self.msg_text)
if len(sentence_arr) >= min(self.MAX_SENTENCE, 7):
diff --git a/api/core/app/apps/advanced_chat/app_runner.py b/api/core/app/apps/advanced_chat/app_runner.py
index 14fc5d993e..f3eb23f810 100644
--- a/api/core/app/apps/advanced_chat/app_runner.py
+++ b/api/core/app/apps/advanced_chat/app_runner.py
@@ -3,6 +3,9 @@ import os
from collections.abc import Mapping
from typing import Any, Optional, cast
+from sqlalchemy import select
+from sqlalchemy.orm import Session
+
from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfig
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.apps.base_app_runner import AppRunner
@@ -32,6 +35,7 @@ from core.app.entities.queue_entities import (
from core.moderation.base import ModerationException
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.node_entities import SystemVariable, UserFrom
+from core.workflow.entities.variable_pool import VariablePool
from core.workflow.graph_engine.entities.event import (
GraphEngineEvent,
GraphRunFailedEvent,
@@ -53,7 +57,7 @@ from core.workflow.graph_engine.entities.event import (
from core.workflow.workflow_entry import WorkflowEntry
from extensions.ext_database import db
from models.model import App, Conversation, EndUser, Message
-from models.workflow import Workflow
+from models.workflow import ConversationVariable, Workflow
logger = logging.getLogger(__name__)
@@ -91,11 +95,11 @@ class AdvancedChatAppRunner(AppRunner):
app_record = db.session.query(App).filter(App.id == app_config.app_id).first()
if not app_record:
- raise ValueError("App not found")
+ raise ValueError('App not found')
workflow = self.get_workflow(app_model=app_record, workflow_id=app_config.workflow_id)
if not workflow:
- raise ValueError("Workflow not initialized")
+ raise ValueError('Workflow not initialized')
inputs = self.application_generate_entity.inputs
query = self.application_generate_entity.query
@@ -134,6 +138,38 @@ class AdvancedChatAppRunner(AppRunner):
if bool(os.environ.get("DEBUG", 'False').lower() == 'true'):
workflow_callbacks.append(WorkflowLoggingCallback())
+ # Init conversation variables
+ stmt = select(ConversationVariable).where(
+ ConversationVariable.app_id == conversation.app_id, ConversationVariable.conversation_id == conversation.id
+ )
+ with Session(db.engine) as session:
+ conversation_variables = session.scalars(stmt).all()
+ if not conversation_variables:
+ conversation_variables = [
+ ConversationVariable.from_variable(
+ app_id=conversation.app_id, conversation_id=conversation.id, variable=variable
+ )
+ for variable in workflow.conversation_variables
+ ]
+ session.add_all(conversation_variables)
+ session.commit()
+ # Convert database entities to variables
+ conversation_variables = [item.to_variable() for item in conversation_variables]
+
+ # Create a variable pool.
+ system_inputs = {
+ SystemVariable.QUERY: query,
+ SystemVariable.FILES: files,
+ SystemVariable.CONVERSATION_ID: conversation.id,
+ SystemVariable.USER_ID: user_id,
+ }
+ variable_pool = VariablePool(
+ system_variables=system_inputs,
+ user_inputs=inputs,
+ environment_variables=workflow.environment_variables,
+ conversation_variables=conversation_variables,
+ )
+
# RUN WORKFLOW
workflow_entry = WorkflowEntry(
workflow=workflow,
@@ -142,14 +178,8 @@ class AdvancedChatAppRunner(AppRunner):
if self.application_generate_entity.invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER]
else UserFrom.END_USER,
invoke_from=self.application_generate_entity.invoke_from,
- user_inputs=inputs,
- system_inputs={
- SystemVariable.QUERY: query,
- SystemVariable.FILES: files,
- SystemVariable.CONVERSATION_ID: self.conversation.id,
- SystemVariable.USER_ID: user_id
- },
- call_depth=self.application_generate_entity.call_depth
+ call_depth=self.application_generate_entity.call_depth,
+ variable_pool=variable_pool,
)
generator = workflow_entry.run(
@@ -323,11 +353,13 @@ class AdvancedChatAppRunner(AppRunner):
Get workflow
"""
# fetch workflow by workflow_id
- workflow = db.session.query(Workflow).filter(
- Workflow.tenant_id == app_model.tenant_id,
- Workflow.app_id == app_model.id,
- Workflow.id == workflow_id
- ).first()
+ workflow = (
+ db.session.query(Workflow)
+ .filter(
+ Workflow.tenant_id == app_model.tenant_id, Workflow.app_id == app_model.id, Workflow.id == workflow_id
+ )
+ .first()
+ )
# return workflow
return workflow
@@ -385,7 +417,7 @@ class AdvancedChatAppRunner(AppRunner):
message=message,
query=query,
user_id=app_generate_entity.user_id,
- invoke_from=app_generate_entity.invoke_from
+ invoke_from=app_generate_entity.invoke_from,
)
if annotation_reply:
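The conversation-variable initialization above follows a get-or-seed pattern: reuse rows already stored for this conversation, otherwise create them from the workflow's declared defaults. A pure-Python sketch of that pattern, with a plain dict standing in for the `conversation_variables` table (names are illustrative):

```python
def get_or_seed_variables(store: dict, conversation_id: str, defaults: dict) -> dict:
    # Reuse the stored per-conversation variables if any exist,
    # otherwise copy the workflow's declared defaults in.
    existing = store.get(conversation_id)
    if not existing:
        existing = dict(defaults)
        store[conversation_id] = existing
    return existing
```

Because seeding happens only on first access, later edits to a conversation's variables survive subsequent runs.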
diff --git a/api/core/app/apps/agent_chat/app_generator.py b/api/core/app/apps/agent_chat/app_generator.py
index df6a35918b..53780bdfb0 100644
--- a/api/core/app/apps/agent_chat/app_generator.py
+++ b/api/core/app/apps/agent_chat/app_generator.py
@@ -110,7 +110,8 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
)
# get tracing instance
- trace_manager = TraceQueueManager(app_model.id)
+ user_id = user.id if isinstance(user, Account) else user.session_id
+ trace_manager = TraceQueueManager(app_model.id, user_id)
# init application generate entity
application_generate_entity = AgentChatAppGenerateEntity(
diff --git a/api/core/app/apps/workflow/app_generator.py b/api/core/app/apps/workflow/app_generator.py
index b1986dbcee..df40aec154 100644
--- a/api/core/app/apps/workflow/app_generator.py
+++ b/api/core/app/apps/workflow/app_generator.py
@@ -74,7 +74,8 @@ class WorkflowAppGenerator(BaseAppGenerator):
)
# get tracing instance
- trace_manager = TraceQueueManager(app_model.id)
+ user_id = user.id if isinstance(user, Account) else user.session_id
+ trace_manager = TraceQueueManager(app_model.id, user_id)
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(
diff --git a/api/core/app/apps/workflow/app_runner.py b/api/core/app/apps/workflow/app_runner.py
index 618a91a999..9a100532b0 100644
--- a/api/core/app/apps/workflow/app_runner.py
+++ b/api/core/app/apps/workflow/app_runner.py
@@ -11,6 +11,7 @@ from core.app.entities.app_invoke_entities import (
)
from core.workflow.callbacks.base_workflow_callback import WorkflowCallback
from core.workflow.entities.node_entities import SystemVariable, UserFrom
+from core.workflow.entities.variable_pool import VariablePool
from core.workflow.workflow_entry import WorkflowEntry
from extensions.ext_database import db
from models.model import App, EndUser
@@ -24,8 +25,7 @@ class WorkflowAppRunner:
Workflow Application Runner
"""
- def run(self, application_generate_entity: WorkflowAppGenerateEntity,
- queue_manager: AppQueueManager) -> None:
+ def run(self, application_generate_entity: WorkflowAppGenerateEntity, queue_manager: AppQueueManager) -> None:
"""
Run application
:param application_generate_entity: application generate entity
@@ -45,11 +45,11 @@ class WorkflowAppRunner:
app_record = db.session.query(App).filter(App.id == app_config.app_id).first()
if not app_record:
- raise ValueError("App not found")
+ raise ValueError('App not found')
workflow = self.get_workflow(app_model=app_record, workflow_id=app_config.workflow_id)
if not workflow:
- raise ValueError("Workflow not initialized")
+ raise ValueError('Workflow not initialized')
inputs = application_generate_entity.inputs
files = application_generate_entity.files
@@ -58,9 +58,21 @@ class WorkflowAppRunner:
workflow_callbacks: list[WorkflowCallback] = []
- if bool(os.environ.get("DEBUG", 'False').lower() == 'true'):
+ if bool(os.environ.get('DEBUG', 'False').lower() == 'true'):
workflow_callbacks.append(WorkflowLoggingCallback())
+ # Create a variable pool.
+ system_inputs = {
+ SystemVariable.FILES: files,
+ SystemVariable.USER_ID: user_id,
+ }
+ variable_pool = VariablePool(
+ system_variables=system_inputs,
+ user_inputs=inputs,
+ environment_variables=workflow.environment_variables,
+ conversation_variables=[],
+ )
+
# RUN WORKFLOW
workflow_entry = WorkflowEntry()
workflow_entry.run(
@@ -71,26 +83,22 @@ class WorkflowAppRunner:
else UserFrom.END_USER,
invoke_from=application_generate_entity.invoke_from,
callbacks=workflow_callbacks,
- user_inputs=inputs,
- system_inputs={
- SystemVariable.FILES: files,
- SystemVariable.USER_ID: user_id
- },
- call_depth=application_generate_entity.call_depth
+ call_depth=application_generate_entity.call_depth,
+ variable_pool=variable_pool,
)
- def single_iteration_run(self, app_id: str, workflow_id: str,
- queue_manager: AppQueueManager,
- inputs: dict, node_id: str, user_id: str) -> None:
+ def single_iteration_run(
+ self, app_id: str, workflow_id: str, queue_manager: AppQueueManager, inputs: dict, node_id: str, user_id: str
+ ) -> None:
"""
Single iteration run
"""
- app_record: App = db.session.query(App).filter(App.id == app_id).first()
+ app_record = db.session.query(App).filter(App.id == app_id).first()
if not app_record:
- raise ValueError("App not found")
-
+ raise ValueError('App not found')
+
if not app_record.workflow_id:
- raise ValueError("Workflow not initialized")
+ raise ValueError('Workflow not initialized')
workflow = self.get_workflow(app_model=app_record, workflow_id=workflow_id)
if not workflow:
@@ -112,11 +120,13 @@ class WorkflowAppRunner:
Get workflow
"""
# fetch workflow by workflow_id
- workflow = db.session.query(Workflow).filter(
- Workflow.tenant_id == app_model.tenant_id,
- Workflow.app_id == app_model.id,
- Workflow.id == workflow_id
- ).first()
+ workflow = (
+ db.session.query(Workflow)
+ .filter(
+ Workflow.tenant_id == app_model.tenant_id, Workflow.app_id == app_model.id, Workflow.id == workflow_id
+ )
+ .first()
+ )
# return workflow
return workflow
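The `app_runner.py` refactor above builds an explicit `VariablePool` up front and passes it to `WorkflowEntry.run()` instead of handing over raw `user_inputs`/`system_inputs` dicts. A minimal self-contained sketch of that pattern — the `VariablePool` stub below is hypothetical and only mirrors the keyword arguments used in the hunk, and the system-over-user lookup precedence is an assumption of the sketch, not of the real class:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class VariablePool:
    # Hypothetical stand-in for core.workflow.entities.variable_pool.VariablePool,
    # mirroring the constructor call in the hunk above.
    system_variables: dict[str, Any]
    user_inputs: dict[str, Any]
    environment_variables: list = field(default_factory=list)
    conversation_variables: list = field(default_factory=list)

    def get(self, key: str) -> Any:
        # In this sketch, system variables shadow user inputs of the same name.
        return self.system_variables.get(key, self.user_inputs.get(key))

pool = VariablePool(
    system_variables={'files': [], 'user_id': 'u-123'},
    user_inputs={'query': 'hello'},
    environment_variables=[],
    conversation_variables=[],
)
print(pool.get('query'))    # 'hello'
print(pool.get('user_id'))  # 'u-123'
```

The payoff of the refactor is that every consumer downstream of `run()` reads from one pool object rather than juggling two separate input mappings.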
diff --git a/api/core/app/segments/__init__.py b/api/core/app/segments/__init__.py
index d5cd0a589c..174e241261 100644
--- a/api/core/app/segments/__init__.py
+++ b/api/core/app/segments/__init__.py
@@ -1,6 +1,7 @@
from .segment_group import SegmentGroup
from .segments import (
ArrayAnySegment,
+ ArraySegment,
FileSegment,
FloatSegment,
IntegerSegment,
@@ -50,4 +51,5 @@ __all__ = [
'ArrayNumberVariable',
'ArrayObjectVariable',
'ArrayFileVariable',
+ 'ArraySegment',
]
diff --git a/api/core/app/segments/exc.py b/api/core/app/segments/exc.py
new file mode 100644
index 0000000000..d15d6d500f
--- /dev/null
+++ b/api/core/app/segments/exc.py
@@ -0,0 +1,2 @@
+class VariableError(Exception):
+ pass
diff --git a/api/core/app/segments/factory.py b/api/core/app/segments/factory.py
index f62e44bf07..91ff1fdb3d 100644
--- a/api/core/app/segments/factory.py
+++ b/api/core/app/segments/factory.py
@@ -1,8 +1,10 @@
from collections.abc import Mapping
from typing import Any
+from configs import dify_config
from core.file.file_obj import FileVar
+from .exc import VariableError
from .segments import (
ArrayAnySegment,
FileSegment,
@@ -29,39 +31,43 @@ from .variables import (
)
-def build_variable_from_mapping(m: Mapping[str, Any], /) -> Variable:
- if (value_type := m.get('value_type')) is None:
- raise ValueError('missing value type')
- if not m.get('name'):
- raise ValueError('missing name')
- if (value := m.get('value')) is None:
- raise ValueError('missing value')
+def build_variable_from_mapping(mapping: Mapping[str, Any], /) -> Variable:
+ if (value_type := mapping.get('value_type')) is None:
+ raise VariableError('missing value type')
+ if not mapping.get('name'):
+ raise VariableError('missing name')
+ if (value := mapping.get('value')) is None:
+ raise VariableError('missing value')
match value_type:
case SegmentType.STRING:
- return StringVariable.model_validate(m)
+ result = StringVariable.model_validate(mapping)
case SegmentType.SECRET:
- return SecretVariable.model_validate(m)
+ result = SecretVariable.model_validate(mapping)
case SegmentType.NUMBER if isinstance(value, int):
- return IntegerVariable.model_validate(m)
+ result = IntegerVariable.model_validate(mapping)
case SegmentType.NUMBER if isinstance(value, float):
- return FloatVariable.model_validate(m)
+ result = FloatVariable.model_validate(mapping)
case SegmentType.NUMBER if not isinstance(value, float | int):
- raise ValueError(f'invalid number value {value}')
+ raise VariableError(f'invalid number value {value}')
case SegmentType.FILE:
- return FileVariable.model_validate(m)
+ result = FileVariable.model_validate(mapping)
case SegmentType.OBJECT if isinstance(value, dict):
- return ObjectVariable.model_validate(
- {**m, 'value': {k: build_variable_from_mapping(v) for k, v in value.items()}}
- )
+ result = ObjectVariable.model_validate(mapping)
case SegmentType.ARRAY_STRING if isinstance(value, list):
- return ArrayStringVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
+ result = ArrayStringVariable.model_validate(mapping)
case SegmentType.ARRAY_NUMBER if isinstance(value, list):
- return ArrayNumberVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
+ result = ArrayNumberVariable.model_validate(mapping)
case SegmentType.ARRAY_OBJECT if isinstance(value, list):
- return ArrayObjectVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
+ result = ArrayObjectVariable.model_validate(mapping)
case SegmentType.ARRAY_FILE if isinstance(value, list):
- return ArrayFileVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
- raise ValueError(f'not supported value type {value_type}')
+ mapping = dict(mapping)
+ mapping['value'] = [{'value': v} for v in value]
+ result = ArrayFileVariable.model_validate(mapping)
+ case _:
+ raise VariableError(f'not supported value type {value_type}')
+ if result.size > dify_config.MAX_VARIABLE_SIZE:
+ raise VariableError(f'variable size {result.size} exceeds limit {dify_config.MAX_VARIABLE_SIZE}')
+ return result
def build_segment(value: Any, /) -> Segment:
@@ -74,13 +80,9 @@ def build_segment(value: Any, /) -> Segment:
if isinstance(value, float):
return FloatSegment(value=value)
if isinstance(value, dict):
- # TODO: Limit the depth of the object
- obj = {k: build_segment(v) for k, v in value.items()}
- return ObjectSegment(value=obj)
+ return ObjectSegment(value=value)
if isinstance(value, list):
- # TODO: Limit the depth of the array
- elements = [build_segment(v) for v in value]
- return ArrayAnySegment(value=elements)
+ return ArrayAnySegment(value=value)
if isinstance(value, FileVar):
return FileSegment(value=value)
raise ValueError(f'not supported value {value}')
diff --git a/api/core/app/segments/segments.py b/api/core/app/segments/segments.py
index 4227f154e6..7653e1085f 100644
--- a/api/core/app/segments/segments.py
+++ b/api/core/app/segments/segments.py
@@ -1,4 +1,5 @@
import json
+import sys
from collections.abc import Mapping, Sequence
from typing import Any
@@ -37,6 +38,10 @@ class Segment(BaseModel):
def markdown(self) -> str:
return str(self.value)
+ @property
+ def size(self) -> int:
+ return sys.getsizeof(self.value)
+
def to_object(self) -> Any:
return self.value
@@ -85,54 +90,45 @@ class FileSegment(Segment):
class ObjectSegment(Segment):
value_type: SegmentType = SegmentType.OBJECT
- value: Mapping[str, Segment]
+ value: Mapping[str, Any]
@property
def text(self) -> str:
- # TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False)
@property
def log(self) -> str:
- # TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
@property
def markdown(self) -> str:
- # TODO: Use markdown code block
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
- def to_object(self):
- return {k: v.to_object() for k, v in self.value.items()}
-
class ArraySegment(Segment):
@property
def markdown(self) -> str:
return '\n'.join(['- ' + item.markdown for item in self.value])
- def to_object(self):
- return [v.to_object() for v in self.value]
-
class ArrayAnySegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_ANY
- value: Sequence[Segment]
+ value: Sequence[Any]
class ArrayStringSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_STRING
- value: Sequence[StringSegment]
+ value: Sequence[str]
class ArrayNumberSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_NUMBER
- value: Sequence[FloatSegment | IntegerSegment]
+ value: Sequence[float | int]
class ArrayObjectSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_OBJECT
- value: Sequence[ObjectSegment]
+ value: Sequence[Mapping[str, Any]]
class ArrayFileSegment(ArraySegment):
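The new `Segment.size` property above backs the `MAX_VARIABLE_SIZE` check in the factory. One subtlety worth noting: `sys.getsizeof` reports only the shallow size of the container, so the elements of a list or the values of a dict are not counted. A minimal illustration:

```python
import sys

class Segment:
    # Minimal stand-in for the Segment base model in the hunk above.
    def __init__(self, value):
        self.value = value

    @property
    def size(self) -> int:
        # Shallow size only: for containers, element sizes are excluded.
        return sys.getsizeof(self.value)

small = Segment('hello')
nested = Segment([['x' * 1000]])  # the 1000-char payload is invisible to getsizeof
```

So the size limit is a cheap sanity bound on the top-level value, not a deep memory accounting of nested structures.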
diff --git a/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py b/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py
index c9644c7d4c..8d91a507a9 100644
--- a/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py
+++ b/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py
@@ -48,7 +48,8 @@ from core.model_runtime.entities.message_entities import (
)
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.utils.encoders import jsonable_encoder
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.prompt.utils.prompt_message_util import PromptMessageUtil
from core.prompt.utils.prompt_template_parser import PromptTemplateParser
from events.message_event import message_was_created
diff --git a/api/core/app/task_pipeline/workflow_cycle_manage.py b/api/core/app/task_pipeline/workflow_cycle_manage.py
index 6c2d3bab54..ce7653e884 100644
--- a/api/core/app/task_pipeline/workflow_cycle_manage.py
+++ b/api/core/app/task_pipeline/workflow_cycle_manage.py
@@ -24,7 +24,8 @@ from core.app.entities.task_entities import (
)
from core.file.file_obj import FileVar
from core.model_runtime.utils.encoders import jsonable_encoder
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.tools.tool_manager import ToolManager
from core.workflow.entities.node_entities import NodeType, SystemVariable
from core.workflow.nodes.tool.entities import ToolNodeData
@@ -42,6 +43,7 @@ from models.workflow import (
WorkflowRunStatus,
WorkflowRunTriggeredFrom,
)
+from services.workflow_service import WorkflowService
class WorkflowCycleManage:
@@ -50,7 +52,7 @@ class WorkflowCycleManage:
_user: Union[Account, EndUser]
_task_state: WorkflowTaskState
_workflow_system_variables: dict[SystemVariable, Any]
-
+
def _handle_workflow_run_start(self) -> WorkflowRun:
max_sequence = (
db.session.query(db.func.max(WorkflowRun.sequence_number))
@@ -71,7 +73,7 @@ class WorkflowCycleManage:
inputs = WorkflowEntry.handle_special_values(inputs)
triggered_from= (
- WorkflowRunTriggeredFrom.DEBUGGING
+ WorkflowRunTriggeredFrom.DEBUGGING
if self._application_generate_entity.invoke_from == InvokeFrom.DEBUGGER
else WorkflowRunTriggeredFrom.APP_RUN
)
@@ -99,7 +101,7 @@ class WorkflowCycleManage:
db.session.close()
return workflow_run
-
+
def _handle_workflow_run_success(
self,
workflow_run: WorkflowRun,
@@ -121,7 +123,7 @@ class WorkflowCycleManage:
:return:
"""
workflow_run = self._refetch_workflow_run(workflow_run.id)
-
+
workflow_run.status = WorkflowRunStatus.SUCCEEDED.value
workflow_run.outputs = outputs
workflow_run.elapsed_time = time.perf_counter() - start_at
@@ -138,6 +140,7 @@ class WorkflowCycleManage:
TraceTaskName.WORKFLOW_TRACE,
workflow_run=workflow_run,
conversation_id=conversation_id,
+ user_id=trace_manager.user_id,
)
)
@@ -185,11 +188,12 @@ class WorkflowCycleManage:
TraceTaskName.WORKFLOW_TRACE,
workflow_run=workflow_run,
conversation_id=conversation_id,
+ user_id=trace_manager.user_id,
)
)
return workflow_run
-
+
def _handle_node_execution_start(self, workflow_run: WorkflowRun, event: QueueNodeStartedEvent) -> WorkflowNodeExecution:
# init workflow node execution
workflow_node_execution = WorkflowNodeExecution()
@@ -250,7 +254,7 @@ class WorkflowCycleManage:
:return:
"""
workflow_node_execution = self._refetch_workflow_node_execution(event.node_execution_id)
-
+
inputs = WorkflowEntry.handle_special_values(event.inputs)
outputs = WorkflowEntry.handle_special_values(event.outputs)
@@ -267,7 +271,7 @@ class WorkflowCycleManage:
db.session.close()
return workflow_node_execution
-
+
#################################################
# to stream responses #
#################################################
@@ -406,10 +410,10 @@ class WorkflowCycleManage:
files=self._fetch_files_from_node_outputs(workflow_node_execution.outputs_dict or {}),
),
)
-
+
def _workflow_iteration_start_to_stream_response(
self,
- task_id: str,
+ task_id: str,
workflow_run: WorkflowRun,
event: QueueIterationStartEvent
) -> IterationNodeStartStreamResponse:
@@ -434,7 +438,7 @@ class WorkflowCycleManage:
metadata=event.metadata or {}
)
)
-
+
def _workflow_iteration_next_to_stream_response(self, task_id: str, workflow_run: WorkflowRun, event: QueueIterationNextEvent) -> IterationNodeNextStreamResponse:
"""
Workflow iteration next to stream response
@@ -457,7 +461,7 @@ class WorkflowCycleManage:
extras={}
)
)
-
+
def _workflow_iteration_completed_to_stream_response(self, task_id: str, workflow_run: WorkflowRun, event: QueueIterationCompletedEvent) -> IterationNodeCompletedStreamResponse:
"""
Workflow iteration completed to stream response
@@ -552,10 +556,10 @@ class WorkflowCycleManage:
"""
workflow_run = db.session.query(WorkflowRun).filter(
WorkflowRun.id == workflow_run_id).first()
-
+
if not workflow_run:
raise Exception(f'Workflow run not found: {workflow_run_id}')
-
+
return workflow_run
def _refetch_workflow_node_execution(self, node_execution_id: str) -> WorkflowNodeExecution:
@@ -578,5 +582,5 @@ class WorkflowCycleManage:
if not workflow_node_execution:
raise Exception(f'Workflow node execution not found: {node_execution_id}')
-
+
return workflow_node_execution
\ No newline at end of file
diff --git a/api/core/callback_handler/agent_tool_callback_handler.py b/api/core/callback_handler/agent_tool_callback_handler.py
index 03f8244bab..5789965747 100644
--- a/api/core/callback_handler/agent_tool_callback_handler.py
+++ b/api/core/callback_handler/agent_tool_callback_handler.py
@@ -4,7 +4,8 @@ from typing import Any, Optional, TextIO, Union
from pydantic import BaseModel
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.tools.entities.tool_entities import ToolInvokeMessage
_TEXT_COLOR_MAPPING = {
diff --git a/api/core/entities/provider_configuration.py b/api/core/entities/provider_configuration.py
index f3cf54a58e..778ef2e1ac 100644
--- a/api/core/entities/provider_configuration.py
+++ b/api/core/entities/provider_configuration.py
@@ -8,6 +8,7 @@ from typing import Optional
from pydantic import BaseModel, ConfigDict
+from constants import HIDDEN_VALUE
from core.entities.model_entities import ModelStatus, ModelWithProviderEntity, SimpleModelProviderEntity
from core.entities.provider_entities import (
CustomConfiguration,
@@ -202,7 +203,7 @@ class ProviderConfiguration(BaseModel):
for key, value in credentials.items():
if key in provider_credential_secret_variables:
# if send [__HIDDEN__] in secret input, it will be same as original value
- if value == '[__HIDDEN__]' and key in original_credentials:
+ if value == HIDDEN_VALUE and key in original_credentials:
credentials[key] = encrypter.decrypt_token(self.tenant_id, original_credentials[key])
credentials = model_provider_factory.provider_credentials_validate(
@@ -345,7 +346,7 @@ class ProviderConfiguration(BaseModel):
for key, value in credentials.items():
if key in provider_credential_secret_variables:
# if send [__HIDDEN__] in secret input, it will be same as original value
- if value == '[__HIDDEN__]' and key in original_credentials:
+ if value == HIDDEN_VALUE and key in original_credentials:
credentials[key] = encrypter.decrypt_token(self.tenant_id, original_credentials[key])
credentials = model_provider_factory.model_credentials_validate(
diff --git a/api/core/file/file_obj.py b/api/core/file/file_obj.py
index 268ef5df86..3959f4b4a0 100644
--- a/api/core/file/file_obj.py
+++ b/api/core/file/file_obj.py
@@ -1,14 +1,19 @@
import enum
-from typing import Optional
+from typing import Any, Optional
from pydantic import BaseModel
-from core.app.app_config.entities import FileExtraConfig
from core.file.tool_file_parser import ToolFileParser
from core.file.upload_file_parser import UploadFileParser
from core.model_runtime.entities.message_entities import ImagePromptMessageContent
from extensions.ext_database import db
-from models.model import UploadFile
+
+
+class FileExtraConfig(BaseModel):
+ """
+ File Upload Entity.
+ """
+ image_config: Optional[dict[str, Any]] = None
class FileType(enum.Enum):
@@ -114,6 +119,7 @@ class FileVar(BaseModel):
)
def _get_data(self, force_url: bool = False) -> Optional[str]:
+ from models.model import UploadFile
if self.type == FileType.IMAGE:
if self.transfer_method == FileTransferMethod.REMOTE_URL:
return self.url
diff --git a/api/core/file/message_file_parser.py b/api/core/file/message_file_parser.py
index 7b2f8217f9..01b89907db 100644
--- a/api/core/file/message_file_parser.py
+++ b/api/core/file/message_file_parser.py
@@ -1,10 +1,11 @@
+import re
from collections.abc import Mapping, Sequence
from typing import Any, Union
+from urllib.parse import parse_qs, urlparse
import requests
-from core.app.app_config.entities import FileExtraConfig
-from core.file.file_obj import FileBelongsTo, FileTransferMethod, FileType, FileVar
+from core.file.file_obj import FileBelongsTo, FileExtraConfig, FileTransferMethod, FileType, FileVar
from extensions.ext_database import db
from models.account import Account
from models.model import EndUser, MessageFile, UploadFile
@@ -186,6 +187,30 @@ class MessageFileParser:
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
+ def is_s3_presigned_url(url):
+ try:
+ parsed_url = urlparse(url)
+ if 'amazonaws.com' not in parsed_url.netloc:
+ return False
+ query_params = parse_qs(parsed_url.query)
+ required_params = ['Signature', 'Expires']
+ for param in required_params:
+ if param not in query_params:
+ return False
+ if not query_params['Expires'][0].isdigit():
+ return False
+ signature = query_params['Signature'][0]
+ if not re.match(r'^[A-Za-z0-9+/]+={0,2}$', signature):
+ return False
+ return True
+ except Exception:
+ return False
+
+ if is_s3_presigned_url(url):
+ response = requests.get(url, headers=headers, allow_redirects=True)
+ if response.status_code in {200, 304}:
+ return True, ""
+
response = requests.head(url, headers=headers, allow_redirects=True)
if response.status_code in {200, 304}:
return True, ""
diff --git a/api/core/helper/code_executor/code_executor.py b/api/core/helper/code_executor/code_executor.py
index 5b69d3af4b..afb2bbbbf3 100644
--- a/api/core/helper/code_executor/code_executor.py
+++ b/api/core/helper/code_executor/code_executor.py
@@ -107,11 +107,11 @@ class CodeExecutor:
response = response.json()
except:
raise CodeExecutionException('Failed to parse response')
+
+ if (code := response.get('code')) != 0:
+ raise CodeExecutionException(f"Got error code: {code}. Got error msg: {response.get('message')}")
response = CodeExecutionResponse(**response)
-
- if response.code != 0:
- raise CodeExecutionException(response.message)
if response.data.error:
raise CodeExecutionException(response.data.error)
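The reordering in `code_executor.py` above inspects the raw `code` field before pydantic validation, so an error response that does not satisfy the `CodeExecutionResponse` schema still surfaces a readable message instead of a validation traceback. The guard in isolation:

```python
class CodeExecutionException(Exception):
    pass

def check_response(response: dict) -> dict:
    # As in the hunk: read 'code' from the raw JSON first, so a failing
    # response produces a descriptive error even when the payload is malformed.
    if (code := response.get('code')) != 0:
        raise CodeExecutionException(
            f"Got error code: {code}. Got error msg: {response.get('message')}"
        )
    return response

ok = check_response({'code': 0, 'data': {}})
```

A missing `code` key also trips the guard (`None != 0`), which is the desired fail-fast behavior.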
diff --git a/api/core/helper/encrypter.py b/api/core/helper/encrypter.py
index bf87a842c0..5e5deb86b4 100644
--- a/api/core/helper/encrypter.py
+++ b/api/core/helper/encrypter.py
@@ -2,7 +2,6 @@ import base64
from extensions.ext_database import db
from libs import rsa
-from models.account import Tenant
def obfuscated_token(token: str):
@@ -14,6 +13,7 @@ def obfuscated_token(token: str):
def encrypt_token(tenant_id: str, token: str):
+ from models.account import Tenant
if not (tenant := db.session.query(Tenant).filter(Tenant.id == tenant_id).first()):
raise ValueError(f'Tenant with id {tenant_id} not found')
encrypted_token = rsa.encrypt(token, tenant.encrypt_public_key)
diff --git a/api/core/hosting_configuration.py b/api/core/hosting_configuration.py
index 45ad1b51bf..5f7fec5833 100644
--- a/api/core/hosting_configuration.py
+++ b/api/core/hosting_configuration.py
@@ -73,6 +73,8 @@ class HostingConfiguration:
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-4", base_model_name="gpt-4", model_type=ModelType.LLM),
+ RestrictModel(model="gpt-4o", base_model_name="gpt-4o", model_type=ModelType.LLM),
+ RestrictModel(model="gpt-4o-mini", base_model_name="gpt-4o-mini", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-32k", base_model_name="gpt-4-32k", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-1106-preview", base_model_name="gpt-4-1106-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-vision-preview", base_model_name="gpt-4-vision-preview", model_type=ModelType.LLM),
diff --git a/api/core/llm_generator/llm_generator.py b/api/core/llm_generator/llm_generator.py
index 0b5029460a..8c13b4a45c 100644
--- a/api/core/llm_generator/llm_generator.py
+++ b/api/core/llm_generator/llm_generator.py
@@ -14,7 +14,8 @@ from core.model_manager import ModelManager
from core.model_runtime.entities.message_entities import SystemPromptMessage, UserPromptMessage
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.ops.utils import measure_time
from core.prompt.utils.prompt_template_parser import PromptTemplateParser
diff --git a/api/core/model_runtime/model_providers/__base/tts_model.py b/api/core/model_runtime/model_providers/__base/tts_model.py
index 086a189246..64e85d2c11 100644
--- a/api/core/model_runtime/model_providers/__base/tts_model.py
+++ b/api/core/model_runtime/model_providers/__base/tts_model.py
@@ -1,18 +1,16 @@
-import hashlib
import logging
import re
-import subprocess
-import uuid
from abc import abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelPropertyKey, ModelType
-from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.model_providers.__base.ai_model import AIModel
logger = logging.getLogger(__name__)
+
+
class TTSModel(AIModel):
"""
Model class for ttstext model.
@@ -37,8 +35,6 @@ class TTSModel(AIModel):
:return: translated audio file
"""
try:
- logger.info(f"Invoke TTS model: {model} , invoke content : {content_text}")
- self._is_ffmpeg_installed()
return self._invoke(model=model, credentials=credentials, user=user,
content_text=content_text, voice=voice, tenant_id=tenant_id)
except Exception as e:
@@ -75,7 +71,8 @@ class TTSModel(AIModel):
if model_schema and ModelPropertyKey.VOICES in model_schema.model_properties:
voices = model_schema.model_properties[ModelPropertyKey.VOICES]
if language:
- return [{'name': d['name'], 'value': d['mode']} for d in voices if language and language in d.get('language')]
+ return [{'name': d['name'], 'value': d['mode']} for d in voices if
+ language and language in d.get('language')]
else:
return [{'name': d['name'], 'value': d['mode']} for d in voices]
@@ -146,28 +143,3 @@ class TTSModel(AIModel):
if one_sentence != '':
result.append(one_sentence)
return result
-
- @staticmethod
- def _is_ffmpeg_installed():
- try:
- output = subprocess.check_output("ffmpeg -version", shell=True)
- if "ffmpeg version" in output.decode("utf-8"):
- return True
- else:
- raise InvokeBadRequestError("ffmpeg is not installed, "
- "details: https://docs.dify.ai/getting-started/install-self-hosted"
- "/install-faq#id-14.-what-to-do-if-this-error-occurs-in-text-to-speech")
- except Exception:
- raise InvokeBadRequestError("ffmpeg is not installed, "
- "details: https://docs.dify.ai/getting-started/install-self-hosted"
- "/install-faq#id-14.-what-to-do-if-this-error-occurs-in-text-to-speech")
-
- # Todo: To improve the streaming function
- @staticmethod
- def _get_file_name(file_content: str) -> str:
- hash_object = hashlib.sha256(file_content.encode())
- hex_digest = hash_object.hexdigest()
-
- namespace_uuid = uuid.UUID('a5da6ef9-b303-596f-8e88-bf8fa40f4b31')
- unique_uuid = uuid.uuid5(namespace_uuid, hex_digest)
- return str(unique_uuid)
diff --git a/api/core/model_runtime/model_providers/_position.yaml b/api/core/model_runtime/model_providers/_position.yaml
index c2fa0e5a6e..d10314ba03 100644
--- a/api/core/model_runtime/model_providers/_position.yaml
+++ b/api/core/model_runtime/model_providers/_position.yaml
@@ -6,6 +6,7 @@
- nvidia
- nvidia_nim
- cohere
+- upstage
- bedrock
- togetherai
- openrouter
@@ -35,3 +36,4 @@
- hunyuan
- siliconflow
- perfxcloud
+- zhinao
diff --git a/api/core/model_runtime/model_providers/anthropic/llm/llm.py b/api/core/model_runtime/model_providers/anthropic/llm/llm.py
index 107efe4867..19ce401999 100644
--- a/api/core/model_runtime/model_providers/anthropic/llm/llm.py
+++ b/api/core/model_runtime/model_providers/anthropic/llm/llm.py
@@ -116,7 +116,8 @@ class AnthropicLargeLanguageModel(LargeLanguageModel):
# Add the new header for claude-3-5-sonnet-20240620 model
extra_headers = {}
if model == "claude-3-5-sonnet-20240620":
- extra_headers["anthropic-beta"] = "max-tokens-3-5-sonnet-2024-07-15"
+        if model_parameters.get('max_tokens', 0) > 4096:
+ extra_headers["anthropic-beta"] = "max-tokens-3-5-sonnet-2024-07-15"
if tools:
extra_model_kwargs['tools'] = [
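The Anthropic change above requests the `max-tokens-3-5-sonnet-2024-07-15` beta header only when the caller actually asks for more than 4096 output tokens. Note that `model_parameters.get('max_tokens')` returns `None` when the key is absent, so the comparison needs a default; a guarded, self-contained sketch (the helper name is hypothetical):

```python
def build_extra_headers(model: str, model_parameters: dict) -> dict:
    # Request the extended max-tokens beta only for the one model that
    # supports it, and only when the caller exceeds the default 4096 cap.
    extra_headers = {}
    if model == 'claude-3-5-sonnet-20240620':
        # `or 0` avoids `None > 4096` raising a TypeError when max_tokens is unset.
        if (model_parameters.get('max_tokens') or 0) > 4096:
            extra_headers['anthropic-beta'] = 'max-tokens-3-5-sonnet-2024-07-15'
    return extra_headers
```

This keeps requests for other models, and small requests for this model, free of the beta header entirely.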
diff --git a/api/core/model_runtime/model_providers/azure_openai/_constant.py b/api/core/model_runtime/model_providers/azure_openai/_constant.py
index 63a0b5c8be..984cca3744 100644
--- a/api/core/model_runtime/model_providers/azure_openai/_constant.py
+++ b/api/core/model_runtime/model_providers/azure_openai/_constant.py
@@ -496,6 +496,158 @@ LLM_BASE_MODELS = [
)
)
),
+ AzureBaseModel(
+ base_model_name='gpt-4o-mini',
+ entity=AIModelEntity(
+ model='fake-deployment-name',
+ label=I18nObject(
+ en_US='fake-deployment-name-label',
+ ),
+ model_type=ModelType.LLM,
+ features=[
+ ModelFeature.AGENT_THOUGHT,
+ ModelFeature.VISION,
+ ModelFeature.MULTI_TOOL_CALL,
+ ModelFeature.STREAM_TOOL_CALL,
+ ],
+ fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
+ model_properties={
+ ModelPropertyKey.MODE: LLMMode.CHAT.value,
+ ModelPropertyKey.CONTEXT_SIZE: 128000,
+ },
+ parameter_rules=[
+ ParameterRule(
+ name='temperature',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.TEMPERATURE],
+ ),
+ ParameterRule(
+ name='top_p',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.TOP_P],
+ ),
+ ParameterRule(
+ name='presence_penalty',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.PRESENCE_PENALTY],
+ ),
+ ParameterRule(
+ name='frequency_penalty',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.FREQUENCY_PENALTY],
+ ),
+ _get_max_tokens(default=512, min_val=1, max_val=16384),
+ ParameterRule(
+ name='seed',
+ label=I18nObject(
+ zh_Hans='种子',
+ en_US='Seed'
+ ),
+ type='int',
+ help=I18nObject(
+ zh_Hans='如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint 响应参数来监视变化。',
+ en_US='If specified, model will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.'
+ ),
+ required=False,
+ precision=2,
+ min=0,
+ max=1,
+ ),
+ ParameterRule(
+ name='response_format',
+ label=I18nObject(
+ zh_Hans='回复格式',
+ en_US='response_format'
+ ),
+ type='string',
+ help=I18nObject(
+ zh_Hans='指定模型必须输出的格式',
+ en_US='specifying the format that the model must output'
+ ),
+ required=False,
+ options=['text', 'json_object']
+ ),
+ ],
+ pricing=PriceConfig(
+ input=0.150,
+ output=0.600,
+ unit=0.000001,
+ currency='USD',
+ )
+ )
+ ),
+ AzureBaseModel(
+ base_model_name='gpt-4o-mini-2024-07-18',
+ entity=AIModelEntity(
+ model='fake-deployment-name',
+ label=I18nObject(
+ en_US='fake-deployment-name-label',
+ ),
+ model_type=ModelType.LLM,
+ features=[
+ ModelFeature.AGENT_THOUGHT,
+ ModelFeature.VISION,
+ ModelFeature.MULTI_TOOL_CALL,
+ ModelFeature.STREAM_TOOL_CALL,
+ ],
+ fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
+ model_properties={
+ ModelPropertyKey.MODE: LLMMode.CHAT.value,
+ ModelPropertyKey.CONTEXT_SIZE: 128000,
+ },
+ parameter_rules=[
+ ParameterRule(
+ name='temperature',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.TEMPERATURE],
+ ),
+ ParameterRule(
+ name='top_p',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.TOP_P],
+ ),
+ ParameterRule(
+ name='presence_penalty',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.PRESENCE_PENALTY],
+ ),
+ ParameterRule(
+ name='frequency_penalty',
+ **PARAMETER_RULE_TEMPLATE[DefaultParameterName.FREQUENCY_PENALTY],
+ ),
+ _get_max_tokens(default=512, min_val=1, max_val=16384),
+ ParameterRule(
+ name='seed',
+ label=I18nObject(
+ zh_Hans='种子',
+ en_US='Seed'
+ ),
+ type='int',
+ help=I18nObject(
+ zh_Hans='如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint 响应参数来监视变化。',
+ en_US='If specified, model will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.'
+ ),
+ required=False,
+ precision=2,
+ min=0,
+ max=1,
+ ),
+ ParameterRule(
+ name='response_format',
+ label=I18nObject(
+ zh_Hans='回复格式',
+ en_US='response_format'
+ ),
+ type='string',
+ help=I18nObject(
+ zh_Hans='指定模型必须输出的格式',
+ en_US='specifying the format that the model must output'
+ ),
+ required=False,
+ options=['text', 'json_object']
+ ),
+ ],
+ pricing=PriceConfig(
+ input=0.150,
+ output=0.600,
+ unit=0.000001,
+ currency='USD',
+ )
+ )
+ ),
AzureBaseModel(
base_model_name='gpt-4o',
entity=AIModelEntity(
diff --git a/api/core/model_runtime/model_providers/azure_openai/azure_openai.yaml b/api/core/model_runtime/model_providers/azure_openai/azure_openai.yaml
index 875e94167d..be4d4651d7 100644
--- a/api/core/model_runtime/model_providers/azure_openai/azure_openai.yaml
+++ b/api/core/model_runtime/model_providers/azure_openai/azure_openai.yaml
@@ -114,6 +114,18 @@ model_credential_schema:
show_on:
- variable: __model_type
value: llm
+ - label:
+ en_US: gpt-4o-mini
+ value: gpt-4o-mini
+ show_on:
+ - variable: __model_type
+ value: llm
+ - label:
+ en_US: gpt-4o-mini-2024-07-18
+ value: gpt-4o-mini-2024-07-18
+ show_on:
+ - variable: __model_type
+ value: llm
- label:
en_US: gpt-4o
value: gpt-4o
diff --git a/api/core/model_runtime/model_providers/azure_openai/tts/tts.py b/api/core/model_runtime/model_providers/azure_openai/tts/tts.py
index 50c125b873..3d2bac1c31 100644
--- a/api/core/model_runtime/model_providers/azure_openai/tts/tts.py
+++ b/api/core/model_runtime/model_providers/azure_openai/tts/tts.py
@@ -1,12 +1,8 @@
import concurrent.futures
import copy
-from functools import reduce
-from io import BytesIO
from typing import Optional
-from flask import Response
from openai import AzureOpenAI
-from pydub import AudioSegment
from core.model_runtime.entities.model_entities import AIModelEntity
from core.model_runtime.errors.invoke import InvokeBadRequestError
@@ -51,7 +47,7 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
:return: text translated to audio file
"""
try:
- self._tts_invoke(
+ self._tts_invoke_streaming(
model=model,
credentials=credentials,
content_text='Hello Dify!',
@@ -60,45 +56,6 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
- def _tts_invoke(self, model: str, credentials: dict, content_text: str, voice: str) -> Response:
- """
- _tts_invoke text2speech model
-
- :param model: model name
- :param credentials: model credentials
- :param content_text: text content to be translated
- :param voice: model timbre
- :return: text translated to audio file
- """
- audio_type = self._get_model_audio_type(model, credentials)
- word_limit = self._get_model_word_limit(model, credentials)
- max_workers = self._get_model_workers_limit(model, credentials)
- try:
- sentences = list(self._split_text_into_sentences(org_text=content_text, max_length=word_limit))
- audio_bytes_list = []
-
- # Create a thread pool and map the function to the list of sentences
- with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
- futures = [executor.submit(self._process_sentence, sentence=sentence, model=model, voice=voice,
- credentials=credentials) for sentence in sentences]
- for future in futures:
- try:
- if future.result():
- audio_bytes_list.append(future.result())
- except Exception as ex:
- raise InvokeBadRequestError(str(ex))
-
- if len(audio_bytes_list) > 0:
- audio_segments = [AudioSegment.from_file(BytesIO(audio_bytes), format=audio_type) for audio_bytes in
- audio_bytes_list if audio_bytes]
- combined_segment = reduce(lambda x, y: x + y, audio_segments)
- buffer: BytesIO = BytesIO()
- combined_segment.export(buffer, format=audio_type)
- buffer.seek(0)
- return Response(buffer.read(), status=200, mimetype=f"audio/{audio_type}")
- except Exception as ex:
- raise InvokeBadRequestError(str(ex))
-
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str,
voice: str) -> any:
"""
@@ -144,7 +101,6 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
:param sentence: text content to be translated
:return: text translated to audio file
"""
- # transform credentials to kwargs for model instance
credentials_kwargs = self._to_credential_kwargs(credentials)
client = AzureOpenAI(**credentials_kwargs)
response = client.audio.speech.create(model=model, voice=voice, input=sentence.strip())
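The non-streaming `_tts_invoke` deleted above fanned sentences out to a thread pool and reassembled the audio in sentence order. That ordered fan-out pattern is worth keeping in mind on its own; a minimal sketch (function names here are illustrative, not part of the Dify API):

```python
from concurrent.futures import ThreadPoolExecutor

def map_ordered(fn, items, max_workers=4):
    # Submitting in order and reading results in the same order keeps
    # outputs aligned with inputs even when workers finish out of order.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(fn, item) for item in items]
        return [future.result() for future in futures]
```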
diff --git a/api/core/model_runtime/model_providers/bedrock/llm/llm.py b/api/core/model_runtime/model_providers/bedrock/llm/llm.py
index ff34a116c7..335fa493cd 100644
--- a/api/core/model_runtime/model_providers/bedrock/llm/llm.py
+++ b/api/core/model_runtime/model_providers/bedrock/llm/llm.py
@@ -379,8 +379,12 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
if not message_content.data.startswith("data:"):
# fetch image data from url
try:
- image_content = requests.get(message_content.data).content
- mime_type, _ = mimetypes.guess_type(message_content.data)
+ url = message_content.data
+ image_content = requests.get(url).content
+ if '?' in url:
+ url = url.split('?')[0]
+ mime_type, _ = mimetypes.guess_type(url)
+ base64_data = base64.b64encode(image_content).decode('utf-8')
except Exception as ex:
raise ValueError(f"Failed to fetch image data from url {message_content.data}, {ex}")
else:
diff --git a/api/core/model_runtime/model_providers/deepseek/llm/deepseek-chat.yaml b/api/core/model_runtime/model_providers/deepseek/llm/deepseek-chat.yaml
index 6832576524..6588a4b5e0 100644
--- a/api/core/model_runtime/model_providers/deepseek/llm/deepseek-chat.yaml
+++ b/api/core/model_runtime/model_providers/deepseek/llm/deepseek-chat.yaml
@@ -5,6 +5,8 @@ label:
model_type: llm
features:
- agent-thought
+ - multi-tool-call
+ - stream-tool-call
model_properties:
mode: chat
context_size: 128000
diff --git a/api/core/model_runtime/model_providers/deepseek/llm/deepseek-coder.yaml b/api/core/model_runtime/model_providers/deepseek/llm/deepseek-coder.yaml
index 4da75b9aa3..caafeadadd 100644
--- a/api/core/model_runtime/model_providers/deepseek/llm/deepseek-coder.yaml
+++ b/api/core/model_runtime/model_providers/deepseek/llm/deepseek-coder.yaml
@@ -5,6 +5,8 @@ label:
model_type: llm
features:
- agent-thought
+ - multi-tool-call
+ - stream-tool-call
model_properties:
mode: chat
context_size: 128000
diff --git a/api/core/model_runtime/model_providers/groq/llm/llama3-70b-8192.yaml b/api/core/model_runtime/model_providers/groq/llm/llama3-70b-8192.yaml
index 98655a4c9f..91d0e30765 100644
--- a/api/core/model_runtime/model_providers/groq/llm/llama3-70b-8192.yaml
+++ b/api/core/model_runtime/model_providers/groq/llm/llama3-70b-8192.yaml
@@ -19,7 +19,7 @@ parameter_rules:
min: 1
max: 8192
pricing:
- input: '0.05'
- output: '0.1'
+ input: '0.59'
+ output: '0.79'
unit: '0.000001'
currency: USD
diff --git a/api/core/model_runtime/model_providers/groq/llm/llama3-8b-8192.yaml b/api/core/model_runtime/model_providers/groq/llm/llama3-8b-8192.yaml
index d85bb7709b..b6154f761f 100644
--- a/api/core/model_runtime/model_providers/groq/llm/llama3-8b-8192.yaml
+++ b/api/core/model_runtime/model_providers/groq/llm/llama3-8b-8192.yaml
@@ -19,7 +19,7 @@ parameter_rules:
min: 1
max: 8192
pricing:
- input: '0.59'
- output: '0.79'
+ input: '0.05'
+ output: '0.08'
unit: '0.000001'
currency: USD
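In this pricing schema, `input`/`output` are prices per `unit` of tokens, so `'0.05'` with unit `'0.000001'` means $0.05 per million input tokens. The cost arithmetic is then just a product; a sketch using `Decimal` to avoid float rounding:

```python
from decimal import Decimal

def token_cost(tokens: int, price: str, unit: str) -> Decimal:
    # price is quoted per (1 / unit) tokens; Decimal keeps billing exact
    return Decimal(tokens) * Decimal(price) * Decimal(unit)
```

For example, one million llama3-8b-8192 input tokens would be `token_cost(1_000_000, '0.05', '0.000001')`, i.e. $0.05.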
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/__init__.py b/api/core/model_runtime/model_providers/huggingface_tei/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/huggingface_tei.py b/api/core/model_runtime/model_providers/huggingface_tei/huggingface_tei.py
new file mode 100644
index 0000000000..9454466250
--- /dev/null
+++ b/api/core/model_runtime/model_providers/huggingface_tei/huggingface_tei.py
@@ -0,0 +1,11 @@
+import logging
+
+from core.model_runtime.model_providers.__base.model_provider import ModelProvider
+
+logger = logging.getLogger(__name__)
+
+
+class HuggingfaceTeiProvider(ModelProvider):
+
+ def validate_provider_credentials(self, credentials: dict) -> None:
+ pass
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/huggingface_tei.yaml b/api/core/model_runtime/model_providers/huggingface_tei/huggingface_tei.yaml
new file mode 100644
index 0000000000..f3a912d84d
--- /dev/null
+++ b/api/core/model_runtime/model_providers/huggingface_tei/huggingface_tei.yaml
@@ -0,0 +1,36 @@
+provider: huggingface_tei
+label:
+ en_US: Text Embedding Inference
+description:
+ en_US: A blazing fast inference solution for text embeddings models.
+ zh_Hans: 用于文本嵌入模型的超快速推理解决方案。
+background: "#FFF8DC"
+help:
+ title:
+ en_US: How to deploy Text Embedding Inference
+ zh_Hans: 如何部署 Text Embedding Inference
+ url:
+ en_US: https://github.com/huggingface/text-embeddings-inference
+supported_model_types:
+ - text-embedding
+ - rerank
+configurate_methods:
+ - customizable-model
+model_credential_schema:
+ model:
+ label:
+ en_US: Model Name
+ zh_Hans: 模型名称
+ placeholder:
+ en_US: Enter your model name
+ zh_Hans: 输入模型名称
+ credential_form_schemas:
+ - variable: server_url
+ label:
+ zh_Hans: 服务器URL
+ en_US: Server URL
+ type: secret-input
+ required: true
+ placeholder:
+ zh_Hans: 在此输入Text Embedding Inference的服务器地址,如 http://192.168.1.100:8080
+ en_US: Enter the server URL of your Text Embedding Inference, e.g. http://192.168.1.100:8080
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/rerank/__init__.py b/api/core/model_runtime/model_providers/huggingface_tei/rerank/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/rerank/rerank.py b/api/core/model_runtime/model_providers/huggingface_tei/rerank/rerank.py
new file mode 100644
index 0000000000..34013426de
--- /dev/null
+++ b/api/core/model_runtime/model_providers/huggingface_tei/rerank/rerank.py
@@ -0,0 +1,137 @@
+from typing import Optional
+
+import httpx
+
+from core.model_runtime.entities.common_entities import I18nObject
+from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelPropertyKey, ModelType
+from core.model_runtime.entities.rerank_entities import RerankDocument, RerankResult
+from core.model_runtime.errors.invoke import (
+ InvokeAuthorizationError,
+ InvokeBadRequestError,
+ InvokeConnectionError,
+ InvokeError,
+ InvokeRateLimitError,
+ InvokeServerUnavailableError,
+)
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.rerank_model import RerankModel
+from core.model_runtime.model_providers.huggingface_tei.tei_helper import TeiHelper
+
+
+class HuggingfaceTeiRerankModel(RerankModel):
+ """
+ Model class for Text Embedding Inference rerank model.
+ """
+
+ def _invoke(
+ self,
+ model: str,
+ credentials: dict,
+ query: str,
+ docs: list[str],
+ score_threshold: Optional[float] = None,
+ top_n: Optional[int] = None,
+ user: Optional[str] = None,
+ ) -> RerankResult:
+ """
+ Invoke rerank model
+
+ :param model: model name
+ :param credentials: model credentials
+ :param query: search query
+ :param docs: docs for reranking
+ :param score_threshold: score threshold
+ :param top_n: top n
+ :param user: unique user id
+ :return: rerank result
+ """
+ if len(docs) == 0:
+ return RerankResult(model=model, docs=[])
+ server_url = credentials['server_url']
+
+ if server_url.endswith('/'):
+ server_url = server_url[:-1]
+
+ try:
+ results = TeiHelper.invoke_rerank(server_url, query, docs)
+
+ rerank_documents = []
+ for result in results:
+ rerank_document = RerankDocument(
+ index=result['index'],
+ text=result['text'],
+ score=result['score'],
+ )
+ if score_threshold is None or result['score'] >= score_threshold:
+ rerank_documents.append(rerank_document)
+ if top_n is not None and len(rerank_documents) >= top_n:
+ break
+
+ return RerankResult(model=model, docs=rerank_documents)
+ except httpx.HTTPStatusError as e:
+ raise InvokeServerUnavailableError(str(e))
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ """
+ Validate model credentials
+
+ :param model: model name
+ :param credentials: model credentials
+ :return:
+ """
+ try:
+ server_url = credentials['server_url']
+ extra_args = TeiHelper.get_tei_extra_parameter(server_url, model)
+ if extra_args.model_type != 'reranker':
+ raise CredentialsValidateFailedError('Current model is not a rerank model')
+
+ credentials['context_size'] = extra_args.max_input_length
+
+ self.invoke(
+ model=model,
+ credentials=credentials,
+ query='Whose kasumi',
+ docs=[
+ 'Kasumi is a girl\'s name of Japanese origin meaning "mist".',
+ 'Her music is a kawaii bass, a mix of future bass, pop, and kawaii music ',
+ 'and she leads a team named PopiParty.',
+ ],
+ score_threshold=0.8,
+ )
+ except Exception as ex:
+ raise CredentialsValidateFailedError(str(ex))
+
+ @property
+ def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+ """
+ Map model invoke error to unified error
+ The key is the error type thrown to the caller
+ The value is the error type thrown by the model,
+ which needs to be converted into a unified error type for the caller.
+
+ :return: Invoke error mapping
+ """
+ return {
+ InvokeConnectionError: [InvokeConnectionError],
+ InvokeServerUnavailableError: [InvokeServerUnavailableError],
+ InvokeRateLimitError: [InvokeRateLimitError],
+ InvokeAuthorizationError: [InvokeAuthorizationError],
+ InvokeBadRequestError: [InvokeBadRequestError, KeyError, ValueError],
+ }
+
+ def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
+ """
+ used to define customizable model schema
+ """
+ entity = AIModelEntity(
+ model=model,
+ label=I18nObject(en_US=model),
+ fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
+ model_type=ModelType.RERANK,
+ model_properties={
+ ModelPropertyKey.CONTEXT_SIZE: int(credentials.get('context_size', 512)),
+ },
+ parameter_rules=[],
+ )
+
+ return entity
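The result filter in `_invoke` above applies an optional score threshold and caps the output at `top_n`. Isolated from the HTTP call, the logic looks like this sketch; it assumes the server already returns results sorted by score descending, as TEI does:

```python
def filter_results(results, score_threshold=None, top_n=None):
    # Keep results meeting the threshold; stop once top_n have been kept.
    kept = []
    for result in results:
        if score_threshold is None or result['score'] >= score_threshold:
            kept.append(result)
            if top_n is not None and len(kept) >= top_n:
                break
    return kept
```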
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/tei_helper.py b/api/core/model_runtime/model_providers/huggingface_tei/tei_helper.py
new file mode 100644
index 0000000000..2aa785c89d
--- /dev/null
+++ b/api/core/model_runtime/model_providers/huggingface_tei/tei_helper.py
@@ -0,0 +1,183 @@
+from threading import Lock
+from time import time
+from typing import Optional
+
+import httpx
+from requests.adapters import HTTPAdapter
+from requests.exceptions import ConnectionError, MissingSchema, Timeout
+from requests.sessions import Session
+from yarl import URL
+
+
+class TeiModelExtraParameter:
+ model_type: str
+ max_input_length: int
+ max_client_batch_size: int
+
+ def __init__(self, model_type: str, max_input_length: int, max_client_batch_size: Optional[int] = None) -> None:
+ self.model_type = model_type
+ self.max_input_length = max_input_length
+ self.max_client_batch_size = max_client_batch_size
+
+
+cache = {}
+cache_lock = Lock()
+
+
+class TeiHelper:
+ @staticmethod
+ def get_tei_extra_parameter(server_url: str, model_name: str) -> TeiModelExtraParameter:
+ TeiHelper._clean_cache()
+ with cache_lock:
+ if model_name not in cache:
+ cache[model_name] = {
+ 'expires': time() + 300,
+ 'value': TeiHelper._get_tei_extra_parameter(server_url),
+ }
+ return cache[model_name]['value']
+
+ @staticmethod
+ def _clean_cache() -> None:
+ try:
+ with cache_lock:
+ expired_keys = [model_uid for model_uid, model in cache.items() if model['expires'] < time()]
+ for model_uid in expired_keys:
+ del cache[model_uid]
+ except RuntimeError:
+ pass
+
+ @staticmethod
+ def _get_tei_extra_parameter(server_url: str) -> TeiModelExtraParameter:
+ """
+ get tei model extra parameters: model_type, max_input_length, max_client_batch_size
+ """
+
+ url = str(URL(server_url) / 'info')
+
+ # this method is guarded by a lock, and the default requests session may hang forever, so mount an HTTPAdapter with max_retries=3
+ session = Session()
+ session.mount('http://', HTTPAdapter(max_retries=3))
+ session.mount('https://', HTTPAdapter(max_retries=3))
+
+ try:
+ response = session.get(url, timeout=10)
+ except (MissingSchema, ConnectionError, Timeout) as e:
+ raise RuntimeError(f'get tei model extra parameter failed, url: {url}, error: {e}')
+ if response.status_code != 200:
+ raise RuntimeError(
+ f'get tei model extra parameter failed, status code: {response.status_code}, response: {response.text}'
+ )
+
+ response_json = response.json()
+
+ model_type = response_json.get('model_type', {})
+ if len(model_type.keys()) < 1:
+ raise RuntimeError('model_type is empty')
+ model_type = list(model_type.keys())[0]
+ if model_type not in ['embedding', 'reranker']:
+ raise RuntimeError(f'invalid model_type: {model_type}')
+
+ max_input_length = response_json.get('max_input_length', 512)
+ max_client_batch_size = response_json.get('max_client_batch_size', 1)
+
+ return TeiModelExtraParameter(
+ model_type=model_type,
+ max_input_length=max_input_length,
+ max_client_batch_size=max_client_batch_size
+ )
+
+ @staticmethod
+ def invoke_tokenize(server_url: str, texts: list[str]) -> list[list[dict]]:
+ """
+ Invoke tokenize endpoint
+
+ Example response:
+ [
+ [
+ {
+ "id": 0,
+ "text": "",
+ "special": true,
+ "start": null,
+ "stop": null
+ },
+ {
+ "id": 7704,
+ "text": "str",
+ "special": false,
+ "start": 0,
+ "stop": 3
+ },
+ < MORE TOKENS >
+ ]
+ ]
+
+ :param server_url: server url
+ :param texts: texts to tokenize
+ """
+ resp = httpx.post(
+ f'{server_url}/tokenize',
+ json={'inputs': texts},
+ )
+ resp.raise_for_status()
+ return resp.json()
+
+ @staticmethod
+ def invoke_embeddings(server_url: str, texts: list[str]) -> dict:
+ """
+ Invoke embeddings endpoint
+
+ Example response:
+ {
+ "object": "list",
+ "data": [
+ {
+ "object": "embedding",
+ "embedding": [...],
+ "index": 0
+ }
+ ],
+ "model": "MODEL_NAME",
+ "usage": {
+ "prompt_tokens": 3,
+ "total_tokens": 3
+ }
+ }
+
+ :param server_url: server url
+ :param texts: texts to embed
+ """
+ # Use OpenAI compatible API here, which has usage tracking
+ resp = httpx.post(
+ f'{server_url}/v1/embeddings',
+ json={'input': texts},
+ )
+ resp.raise_for_status()
+ return resp.json()
+
+ @staticmethod
+ def invoke_rerank(server_url: str, query: str, docs: list[str]) -> list[dict]:
+ """
+ Invoke rerank endpoint
+
+ Example response:
+ [
+ {
+ "index": 0,
+ "text": "Deep Learning is ...",
+ "score": 0.9950755
+ }
+ ]
+
+ :param server_url: server url
+ :param query: search query
+ :param docs: documents to rerank
+ """
+ params = {'query': query, 'texts': docs, 'return_text': True}
+
+ response = httpx.post(
+ server_url + '/rerank',
+ json=params,
+ )
+ response.raise_for_status()
+ return response.json()
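`get_tei_extra_parameter` above is a lock-guarded TTL cache: sweep expired entries, then fetch or reuse under the lock. Folded into one class, the pattern looks like this sketch (the class name is ours, not Dify's):

```python
from threading import Lock
from time import time

class TTLCache:
    def __init__(self, ttl: float = 300):
        self._ttl = ttl
        self._lock = Lock()
        self._entries = {}  # key -> {'expires': float, 'value': object}

    def get_or_set(self, key, factory):
        with self._lock:
            entry = self._entries.get(key)
            if entry is None or entry['expires'] < time():
                # Miss or expired: call the factory under the lock so
                # concurrent callers don't hit the backend twice per key.
                entry = {'expires': time() + self._ttl, 'value': factory()}
                self._entries[key] = entry
            return entry['value']
```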
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/text_embedding/__init__.py b/api/core/model_runtime/model_providers/huggingface_tei/text_embedding/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/huggingface_tei/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/huggingface_tei/text_embedding/text_embedding.py
new file mode 100644
index 0000000000..6897b87f6d
--- /dev/null
+++ b/api/core/model_runtime/model_providers/huggingface_tei/text_embedding/text_embedding.py
@@ -0,0 +1,204 @@
+import time
+from typing import Optional
+
+from core.model_runtime.entities.common_entities import I18nObject
+from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelPropertyKey, ModelType, PriceType
+from core.model_runtime.entities.text_embedding_entities import EmbeddingUsage, TextEmbeddingResult
+from core.model_runtime.errors.invoke import (
+ InvokeAuthorizationError,
+ InvokeBadRequestError,
+ InvokeConnectionError,
+ InvokeError,
+ InvokeRateLimitError,
+ InvokeServerUnavailableError,
+)
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
+from core.model_runtime.model_providers.huggingface_tei.tei_helper import TeiHelper
+
+
+class HuggingfaceTeiTextEmbeddingModel(TextEmbeddingModel):
+ """
+ Model class for Text Embedding Inference text embedding model.
+ """
+
+ def _invoke(
+ self, model: str, credentials: dict, texts: list[str], user: Optional[str] = None
+ ) -> TextEmbeddingResult:
+ """
+ Invoke text embedding model
+
+ credentials should be like:
+ {
+ 'server_url': 'server url',
+ 'model_uid': 'model uid',
+ }
+
+ :param model: model name
+ :param credentials: model credentials
+ :param texts: texts to embed
+ :param user: unique user id
+ :return: embeddings result
+ """
+ server_url = credentials['server_url']
+
+ if server_url.endswith('/'):
+ server_url = server_url[:-1]
+
+ # get model properties
+ context_size = self._get_context_size(model, credentials)
+ max_chunks = self._get_max_chunks(model, credentials)
+
+ inputs = []
+ indices = []
+ used_tokens = 0
+
+ # get tokenized results from TEI
+ batched_tokenize_result = TeiHelper.invoke_tokenize(server_url, texts)
+
+ for i, (text, tokenize_result) in enumerate(zip(texts, batched_tokenize_result)):
+
+ # Check if the number of tokens is larger than the context size
+ num_tokens = len(tokenize_result)
+
+ if num_tokens >= context_size:
+ # Find the best cutoff point
+ pre_special_token_count = 0
+ for token in tokenize_result:
+ if token['special']:
+ pre_special_token_count += 1
+ else:
+ break
+ rest_special_token_count = len([token for token in tokenize_result if token['special']]) - pre_special_token_count
+
+ # Calculate the cutoff point, leave 20 extra space to avoid exceeding the limit
+ token_cutoff = context_size - rest_special_token_count - 20
+
+ # Find the cutoff index
+ cutpoint_token = tokenize_result[token_cutoff]
+ cutoff = cutpoint_token['start']
+
+ inputs.append(text[0: cutoff])
+ else:
+ inputs.append(text)
+ indices += [i]
+
+ batched_embeddings = []
+ _iter = range(0, len(inputs), max_chunks)
+
+ try:
+ used_tokens = 0
+ for i in _iter:
+ iter_texts = inputs[i : i + max_chunks]
+ results = TeiHelper.invoke_embeddings(server_url, iter_texts)
+ embeddings = results['data']
+ embeddings = [embedding['embedding'] for embedding in embeddings]
+ batched_embeddings.extend(embeddings)
+
+ usage = results['usage']
+ used_tokens += usage['total_tokens']
+ except RuntimeError as e:
+ raise InvokeServerUnavailableError(str(e))
+
+ usage = self._calc_response_usage(model=model, credentials=credentials, tokens=used_tokens)
+
+ result = TextEmbeddingResult(model=model, embeddings=batched_embeddings, usage=usage)
+
+ return result
+
+ def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
+ """
+ Get number of tokens for given prompt messages
+
+ :param model: model name
+ :param credentials: model credentials
+ :param texts: texts to embed
+ :return:
+ """
+ num_tokens = 0
+ server_url = credentials['server_url']
+
+ if server_url.endswith('/'):
+ server_url = server_url[:-1]
+
+ batch_tokens = TeiHelper.invoke_tokenize(server_url, texts)
+ num_tokens = sum(len(tokens) for tokens in batch_tokens)
+ return num_tokens
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ """
+ Validate model credentials
+
+ :param model: model name
+ :param credentials: model credentials
+ :return:
+ """
+ try:
+ server_url = credentials['server_url']
+ extra_args = TeiHelper.get_tei_extra_parameter(server_url, model)
+ if extra_args.model_type != 'embedding':
+ raise CredentialsValidateFailedError('Current model is not an embedding model')
+
+ credentials['context_size'] = extra_args.max_input_length
+ credentials['max_chunks'] = extra_args.max_client_batch_size
+ self._invoke(model=model, credentials=credentials, texts=['ping'])
+ except Exception as ex:
+ raise CredentialsValidateFailedError(str(ex))
+
+ @property
+ def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+ return {
+ InvokeConnectionError: [InvokeConnectionError],
+ InvokeServerUnavailableError: [InvokeServerUnavailableError],
+ InvokeRateLimitError: [InvokeRateLimitError],
+ InvokeAuthorizationError: [InvokeAuthorizationError],
+ InvokeBadRequestError: [KeyError],
+ }
+
+ def _calc_response_usage(self, model: str, credentials: dict, tokens: int) -> EmbeddingUsage:
+ """
+ Calculate response usage
+
+ :param model: model name
+ :param credentials: model credentials
+ :param tokens: input tokens
+ :return: usage
+ """
+ # get input price info
+ input_price_info = self.get_price(
+ model=model, credentials=credentials, price_type=PriceType.INPUT, tokens=tokens
+ )
+
+ # transform usage
+ usage = EmbeddingUsage(
+ tokens=tokens,
+ total_tokens=tokens,
+ unit_price=input_price_info.unit_price,
+ price_unit=input_price_info.unit,
+ total_price=input_price_info.total_amount,
+ currency=input_price_info.currency,
+ latency=time.perf_counter() - self.started_at,
+ )
+
+ return usage
+
+ def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
+ """
+ used to define customizable model schema
+ """
+
+ entity = AIModelEntity(
+ model=model,
+ label=I18nObject(en_US=model),
+ fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
+ model_type=ModelType.TEXT_EMBEDDING,
+ model_properties={
+ ModelPropertyKey.MAX_CHUNKS: int(credentials.get('max_chunks', 1)),
+ ModelPropertyKey.CONTEXT_SIZE: int(credentials.get('context_size', 512)),
+ },
+ parameter_rules=[],
+ )
+
+ return entity
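The context-size handling in `_invoke` above turns TEI's tokenize output (per-token `special` flags and character `start` offsets) into a character cutoff that reserves room for trailing special tokens. A standalone sketch, exercised below with a fabricated token layout rather than a real TEI response:

```python
def truncate_by_tokens(text, tokens, context_size, margin=20):
    # tokens: list of {'special': bool, 'start': int | None} as returned
    # by a tokenize endpoint; 'start' is a char offset, None for specials.
    if len(tokens) < context_size:
        return text
    # Leading specials consume no text; specials elsewhere must still fit
    # after the cut, so reserve room for them plus a safety margin.
    pre_special = 0
    for token in tokens:
        if token['special']:
            pre_special += 1
        else:
            break
    rest_special = sum(1 for t in tokens if t['special']) - pre_special
    cut_token = tokens[context_size - rest_special - margin]
    return text[:cut_token['start']]
```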
diff --git a/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-pro.yaml b/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-pro.yaml
index d3b1b6d8b6..b173ffbe77 100644
--- a/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-pro.yaml
+++ b/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-pro.yaml
@@ -21,6 +21,16 @@ parameter_rules:
default: 1024
min: 1
max: 32000
+ - name: enable_enhance
+ label:
+ zh_Hans: 功能增强
+ en_US: Enable Enhancement
+ type: boolean
+ help:
+ zh_Hans: 功能增强(如搜索)开关,关闭时将直接由主模型生成回复内容,可以降低响应时延(对于流式输出时的首字时延尤为明显)。但在少数场景里,回复效果可能会下降。
+ en_US: Toggle for enhancements such as search. When disabled, the main model generates the reply directly, which reduces response latency (especially time-to-first-token when streaming), though reply quality may drop in a few scenarios.
+ required: false
+ default: true
pricing:
input: '0.03'
output: '0.10'
diff --git a/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard-256k.yaml b/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard-256k.yaml
index 3b28317497..1f94a8623b 100644
--- a/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard-256k.yaml
+++ b/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard-256k.yaml
@@ -21,6 +21,16 @@ parameter_rules:
default: 1024
min: 1
max: 256000
+ - name: enable_enhance
+ label:
+ zh_Hans: 功能增强
+ en_US: Enable Enhancement
+ type: boolean
+ help:
+ zh_Hans: 功能增强(如搜索)开关,关闭时将直接由主模型生成回复内容,可以降低响应时延(对于流式输出时的首字时延尤为明显)。但在少数场景里,回复效果可能会下降。
+ en_US: Toggle for enhancements such as search. When disabled, the main model generates the reply directly, which reduces response latency (especially time-to-first-token when streaming), though reply quality may drop in a few scenarios.
+ required: false
+ default: true
pricing:
input: '0.015'
output: '0.06'
diff --git a/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard.yaml b/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard.yaml
index 88b27f51c4..1db25930fc 100644
--- a/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard.yaml
+++ b/api/core/model_runtime/model_providers/hunyuan/llm/hunyuan-standard.yaml
@@ -21,6 +21,16 @@ parameter_rules:
default: 1024
min: 1
max: 32000
+ - name: enable_enhance
+ label:
+ zh_Hans: 功能增强
+ en_US: Enable Enhancement
+ type: boolean
+ help:
+ zh_Hans: 功能增强(如搜索)开关,关闭时将直接由主模型生成回复内容,可以降低响应时延(对于流式输出时的首字时延尤为明显)。但在少数场景里,回复效果可能会下降。
+ en_US: Toggle for enhancements such as search. When disabled, the main model generates the reply directly, which reduces response latency (especially time-to-first-token when streaming), though reply quality may drop in a few scenarios.
+ required: false
+ default: true
pricing:
input: '0.0045'
output: '0.0005'
diff --git a/api/core/model_runtime/model_providers/hunyuan/llm/llm.py b/api/core/model_runtime/model_providers/hunyuan/llm/llm.py
index 8859dd72bd..0bdf6ec005 100644
--- a/api/core/model_runtime/model_providers/hunyuan/llm/llm.py
+++ b/api/core/model_runtime/model_providers/hunyuan/llm/llm.py
@@ -36,7 +36,8 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
custom_parameters = {
'Temperature': model_parameters.get('temperature', 0.0),
- 'TopP': model_parameters.get('top_p', 1.0)
+ 'TopP': model_parameters.get('top_p', 1.0),
+ 'EnableEnhancement': model_parameters.get('enable_enhance', True)
}
params = {
@@ -213,7 +214,7 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
def _handle_chat_response(self, credentials, model, prompt_messages, response):
usage = self._calc_response_usage(model, credentials, response.Usage.PromptTokens,
response.Usage.CompletionTokens)
- assistant_prompt_message = PromptMessage(role="assistant")
+ assistant_prompt_message = AssistantPromptMessage()
assistant_prompt_message.content = response.Choices[0].Message.Content
result = LLMResult(
model=model,
diff --git a/api/core/model_runtime/model_providers/jina/rerank/jina-reranker-v2-base-multilingual.yaml b/api/core/model_runtime/model_providers/jina/rerank/jina-reranker-v2-base-multilingual.yaml
index acf576719c..e6af62107e 100644
--- a/api/core/model_runtime/model_providers/jina/rerank/jina-reranker-v2-base-multilingual.yaml
+++ b/api/core/model_runtime/model_providers/jina/rerank/jina-reranker-v2-base-multilingual.yaml
@@ -1,4 +1,4 @@
model: jina-reranker-v2-base-multilingual
model_type: rerank
model_properties:
- context_size: 8192
+ context_size: 1024
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/_position.yaml b/api/core/model_runtime/model_providers/nvidia/llm/_position.yaml
index 2401f2a890..ad01d430d6 100644
--- a/api/core/model_runtime/model_providers/nvidia/llm/_position.yaml
+++ b/api/core/model_runtime/model_providers/nvidia/llm/_position.yaml
@@ -2,10 +2,16 @@
- google/codegemma-7b
- google/recurrentgemma-2b
- meta/llama2-70b
+- meta/llama-3.1-8b-instruct
+- meta/llama-3.1-70b-instruct
+- meta/llama-3.1-405b-instruct
- meta/llama3-8b-instruct
- meta/llama3-70b-instruct
- mistralai/mistral-large
- mistralai/mixtral-8x7b-instruct-v0.1
- mistralai/mixtral-8x22b-instruct-v0.1
+- nvidia/nemotron-4-340b-instruct
+- microsoft/phi-3-medium-128k-instruct
+- microsoft/phi-3-mini-128k-instruct
- fuyu-8b
- snowflake/arctic
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-405b.yaml b/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-405b.yaml
new file mode 100644
index 0000000000..5472de9902
--- /dev/null
+++ b/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-405b.yaml
@@ -0,0 +1,36 @@
+model: meta/llama-3.1-405b-instruct
+label:
+ zh_Hans: meta/llama-3.1-405b-instruct
+ en_US: meta/llama-3.1-405b-instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 131072
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 4096
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-70b.yaml b/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-70b.yaml
new file mode 100644
index 0000000000..16af0554a1
--- /dev/null
+++ b/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-70b.yaml
@@ -0,0 +1,36 @@
+model: meta/llama-3.1-70b-instruct
+label:
+ zh_Hans: meta/llama-3.1-70b-instruct
+ en_US: meta/llama-3.1-70b-instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 131072
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 4096
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-8b.yaml b/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-8b.yaml
new file mode 100644
index 0000000000..f2d43dc30e
--- /dev/null
+++ b/api/core/model_runtime/model_providers/nvidia/llm/llama-3.1-8b.yaml
@@ -0,0 +1,36 @@
+model: meta/llama-3.1-8b-instruct
+label:
+ zh_Hans: meta/llama-3.1-8b-instruct
+ en_US: meta/llama-3.1-8b-instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 131072
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 4096
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/llm.py b/api/core/model_runtime/model_providers/nvidia/llm/llm.py
index 11252b9211..bc42eaca65 100644
--- a/api/core/model_runtime/model_providers/nvidia/llm/llm.py
+++ b/api/core/model_runtime/model_providers/nvidia/llm/llm.py
@@ -31,8 +31,13 @@ class NVIDIALargeLanguageModel(OAIAPICompatLargeLanguageModel):
'meta/llama2-70b': '',
'meta/llama3-8b-instruct': '',
'meta/llama3-70b-instruct': '',
- 'google/recurrentgemma-2b': ''
-
+ 'meta/llama-3.1-8b-instruct': '',
+ 'meta/llama-3.1-70b-instruct': '',
+ 'meta/llama-3.1-405b-instruct': '',
+ 'google/recurrentgemma-2b': '',
+ 'nvidia/nemotron-4-340b-instruct': '',
+        'microsoft/phi-3-medium-128k-instruct': '',
+        'microsoft/phi-3-mini-128k-instruct': ''
}
def _invoke(self, model: str, credentials: dict,
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/nemotron-4-340b-instruct.yaml b/api/core/model_runtime/model_providers/nvidia/llm/nemotron-4-340b-instruct.yaml
new file mode 100644
index 0000000000..e5537cd2fd
--- /dev/null
+++ b/api/core/model_runtime/model_providers/nvidia/llm/nemotron-4-340b-instruct.yaml
@@ -0,0 +1,36 @@
+model: nvidia/nemotron-4-340b-instruct
+label:
+ zh_Hans: nvidia/nemotron-4-340b-instruct
+ en_US: nvidia/nemotron-4-340b-instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 131072
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 4096
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/phi-3-medium-128k-instruct.yaml b/api/core/model_runtime/model_providers/nvidia/llm/phi-3-medium-128k-instruct.yaml
new file mode 100644
index 0000000000..0c5538d135
--- /dev/null
+++ b/api/core/model_runtime/model_providers/nvidia/llm/phi-3-medium-128k-instruct.yaml
@@ -0,0 +1,36 @@
+model: microsoft/phi-3-medium-128k-instruct
+label:
+ zh_Hans: microsoft/phi-3-medium-128k-instruct
+ en_US: microsoft/phi-3-medium-128k-instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 131072
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 4096
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/nvidia/llm/phi-3-mini-128k-instruct.yaml b/api/core/model_runtime/model_providers/nvidia/llm/phi-3-mini-128k-instruct.yaml
new file mode 100644
index 0000000000..1eb1c51d01
--- /dev/null
+++ b/api/core/model_runtime/model_providers/nvidia/llm/phi-3-mini-128k-instruct.yaml
@@ -0,0 +1,36 @@
+model: microsoft/phi-3-mini-128k-instruct
+label:
+ zh_Hans: microsoft/phi-3-mini-128k-instruct
+ en_US: microsoft/phi-3-mini-128k-instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 131072
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 4096
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/ollama/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/ollama/text_embedding/text_embedding.py
index fd73728b78..9e26d35afc 100644
--- a/api/core/model_runtime/model_providers/ollama/text_embedding/text_embedding.py
+++ b/api/core/model_runtime/model_providers/ollama/text_embedding/text_embedding.py
@@ -59,7 +59,7 @@ class OllamaEmbeddingModel(TextEmbeddingModel):
if not endpoint_url.endswith('/'):
endpoint_url += '/'
- endpoint_url = urljoin(endpoint_url, 'api/embeddings')
+ endpoint_url = urljoin(endpoint_url, 'api/embed')
# get model properties
context_size = self._get_context_size(model, credentials)
@@ -72,38 +72,34 @@ class OllamaEmbeddingModel(TextEmbeddingModel):
num_tokens = self._get_num_tokens_by_gpt2(text)
if num_tokens >= context_size:
- cutoff = int(len(text) * (np.floor(context_size / num_tokens)))
+ cutoff = int(np.floor(len(text) * (context_size / num_tokens)))
# if num tokens is larger than context length, only use the start
inputs.append(text[0: cutoff])
else:
inputs.append(text)
- batched_embeddings = []
+ # Prepare the payload for the request
+ payload = {
+ 'input': inputs,
+ 'model': model,
+ }
- for text in inputs:
- # Prepare the payload for the request
- payload = {
- 'prompt': text,
- 'model': model,
- }
+        # Make the request to the Ollama embeddings endpoint
+ response = requests.post(
+ endpoint_url,
+ headers=headers,
+ data=json.dumps(payload),
+ timeout=(10, 300)
+ )
- # Make the request to the OpenAI API
- response = requests.post(
- endpoint_url,
- headers=headers,
- data=json.dumps(payload),
- timeout=(10, 300)
- )
+ response.raise_for_status() # Raise an exception for HTTP errors
+ response_data = response.json()
- response.raise_for_status() # Raise an exception for HTTP errors
- response_data = response.json()
+ # Extract embeddings and used tokens from the response
+ embeddings = response_data['embeddings']
+ embedding_used_tokens = self.get_num_tokens(model, credentials, inputs)
- # Extract embeddings and used tokens from the response
- embeddings = response_data['embedding']
- embedding_used_tokens = self.get_num_tokens(model, credentials, [text])
-
- used_tokens += embedding_used_tokens
- batched_embeddings.append(embeddings)
+ used_tokens += embedding_used_tokens
# calc usage
usage = self._calc_response_usage(
@@ -113,7 +109,7 @@ class OllamaEmbeddingModel(TextEmbeddingModel):
)
return TextEmbeddingResult(
- embeddings=batched_embeddings,
+ embeddings=embeddings,
usage=usage,
model=model
)
diff --git a/api/core/model_runtime/model_providers/openai/llm/_position.yaml b/api/core/model_runtime/model_providers/openai/llm/_position.yaml
index 91b9215829..21661b9a2b 100644
--- a/api/core/model_runtime/model_providers/openai/llm/_position.yaml
+++ b/api/core/model_runtime/model_providers/openai/llm/_position.yaml
@@ -1,6 +1,7 @@
- gpt-4
- gpt-4o
- gpt-4o-2024-05-13
+- gpt-4o-2024-08-06
- gpt-4o-mini
- gpt-4o-mini-2024-07-18
- gpt-4-turbo
diff --git a/api/core/model_runtime/model_providers/openai/llm/gpt-3.5-turbo.yaml b/api/core/model_runtime/model_providers/openai/llm/gpt-3.5-turbo.yaml
index d6338c3d19..6eb15e6c0d 100644
--- a/api/core/model_runtime/model_providers/openai/llm/gpt-3.5-turbo.yaml
+++ b/api/core/model_runtime/model_providers/openai/llm/gpt-3.5-turbo.yaml
@@ -37,7 +37,7 @@ parameter_rules:
- text
- json_object
pricing:
- input: '0.001'
- output: '0.002'
+ input: '0.0005'
+ output: '0.0015'
unit: '0.001'
currency: USD
diff --git a/api/core/model_runtime/model_providers/openai/llm/gpt-4o-2024-08-06.yaml b/api/core/model_runtime/model_providers/openai/llm/gpt-4o-2024-08-06.yaml
new file mode 100644
index 0000000000..cf2de0f73a
--- /dev/null
+++ b/api/core/model_runtime/model_providers/openai/llm/gpt-4o-2024-08-06.yaml
@@ -0,0 +1,44 @@
+model: gpt-4o-2024-08-06
+label:
+ zh_Hans: gpt-4o-2024-08-06
+ en_US: gpt-4o-2024-08-06
+model_type: llm
+features:
+ - multi-tool-call
+ - agent-thought
+ - stream-tool-call
+ - vision
+model_properties:
+ mode: chat
+ context_size: 128000
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: top_p
+ use_template: top_p
+ - name: presence_penalty
+ use_template: presence_penalty
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ - name: max_tokens
+ use_template: max_tokens
+ default: 512
+ min: 1
+ max: 16384
+ - name: response_format
+ label:
+ zh_Hans: 回复格式
+ en_US: response_format
+ type: string
+ help:
+ zh_Hans: 指定模型必须输出的格式
+ en_US: specifying the format that the model must output
+ required: false
+ options:
+ - text
+ - json_object
+pricing:
+ input: '2.50'
+ output: '10.00'
+ unit: '0.000001'
+ currency: USD
diff --git a/api/core/model_runtime/model_providers/openai/tts/tts.py b/api/core/model_runtime/model_providers/openai/tts/tts.py
index d3fcf731f1..afa5d4b88a 100644
--- a/api/core/model_runtime/model_providers/openai/tts/tts.py
+++ b/api/core/model_runtime/model_providers/openai/tts/tts.py
@@ -1,11 +1,7 @@
import concurrent.futures
-from functools import reduce
-from io import BytesIO
from typing import Optional
-from flask import Response
from openai import OpenAI
-from pydub import AudioSegment
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.errors.validate import CredentialsValidateFailedError
@@ -32,7 +28,8 @@ class OpenAIText2SpeechModel(_CommonOpenAI, TTSModel):
:return: text translated to audio file
"""
- if not voice or voice not in [d['value'] for d in self.get_tts_model_voices(model=model, credentials=credentials)]:
+ if not voice or voice not in [d['value'] for d in
+ self.get_tts_model_voices(model=model, credentials=credentials)]:
voice = self._get_model_default_voice(model, credentials)
# if streaming:
return self._tts_invoke_streaming(model=model,
@@ -50,7 +47,7 @@ class OpenAIText2SpeechModel(_CommonOpenAI, TTSModel):
:return: text translated to audio file
"""
try:
- self._tts_invoke(
+ self._tts_invoke_streaming(
model=model,
credentials=credentials,
content_text='Hello Dify!',
@@ -59,46 +56,6 @@ class OpenAIText2SpeechModel(_CommonOpenAI, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
- def _tts_invoke(self, model: str, credentials: dict, content_text: str, voice: str) -> Response:
- """
- _tts_invoke text2speech model
-
- :param model: model name
- :param credentials: model credentials
- :param content_text: text content to be translated
- :param voice: model timbre
- :return: text translated to audio file
- """
- audio_type = self._get_model_audio_type(model, credentials)
- word_limit = self._get_model_word_limit(model, credentials)
- max_workers = self._get_model_workers_limit(model, credentials)
- try:
- sentences = list(self._split_text_into_sentences(org_text=content_text, max_length=word_limit))
- audio_bytes_list = []
-
- # Create a thread pool and map the function to the list of sentences
- with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
- futures = [executor.submit(self._process_sentence, sentence=sentence, model=model, voice=voice,
- credentials=credentials) for sentence in sentences]
- for future in futures:
- try:
- if future.result():
- audio_bytes_list.append(future.result())
- except Exception as ex:
- raise InvokeBadRequestError(str(ex))
-
- if len(audio_bytes_list) > 0:
- audio_segments = [AudioSegment.from_file(BytesIO(audio_bytes), format=audio_type) for audio_bytes in
- audio_bytes_list if audio_bytes]
- combined_segment = reduce(lambda x, y: x + y, audio_segments)
- buffer: BytesIO = BytesIO()
- combined_segment.export(buffer, format=audio_type)
- buffer.seek(0)
- return Response(buffer.read(), status=200, mimetype=f"audio/{audio_type}")
- except Exception as ex:
- raise InvokeBadRequestError(str(ex))
-
-
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str,
voice: str) -> any:
"""
@@ -114,7 +71,8 @@ class OpenAIText2SpeechModel(_CommonOpenAI, TTSModel):
# doc: https://platform.openai.com/docs/guides/text-to-speech
credentials_kwargs = self._to_credential_kwargs(credentials)
client = OpenAI(**credentials_kwargs)
- model_support_voice = [x.get("value") for x in self.get_tts_model_voices(model=model, credentials=credentials)]
+ model_support_voice = [x.get("value") for x in
+ self.get_tts_model_voices(model=model, credentials=credentials)]
if not voice or voice not in model_support_voice:
voice = self._get_model_default_voice(model, credentials)
word_limit = self._get_model_word_limit(model, credentials)
diff --git a/api/core/model_runtime/model_providers/openai_api_compatible/openai_api_compatible.yaml b/api/core/model_runtime/model_providers/openai_api_compatible/openai_api_compatible.yaml
index 69bed96039..88c76fe16e 100644
--- a/api/core/model_runtime/model_providers/openai_api_compatible/openai_api_compatible.yaml
+++ b/api/core/model_runtime/model_providers/openai_api_compatible/openai_api_compatible.yaml
@@ -7,6 +7,7 @@ description:
supported_model_types:
- llm
- text-embedding
+ - speech2text
configurate_methods:
- customizable-model
model_credential_schema:
@@ -61,6 +62,22 @@ model_credential_schema:
zh_Hans: 模型上下文长度
en_US: Model context size
required: true
+ show_on:
+ - variable: __model_type
+ value: llm
+ type: text-input
+ default: '4096'
+ placeholder:
+ zh_Hans: 在此输入您的模型上下文长度
+ en_US: Enter your Model context size
+ - variable: context_size
+ label:
+ zh_Hans: 模型上下文长度
+ en_US: Model context size
+ required: true
+ show_on:
+ - variable: __model_type
+ value: text-embedding
type: text-input
default: '4096'
placeholder:
diff --git a/api/core/model_runtime/model_providers/openai_api_compatible/speech2text/__init__.py b/api/core/model_runtime/model_providers/openai_api_compatible/speech2text/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/openai_api_compatible/speech2text/speech2text.py b/api/core/model_runtime/model_providers/openai_api_compatible/speech2text/speech2text.py
new file mode 100644
index 0000000000..00702ba936
--- /dev/null
+++ b/api/core/model_runtime/model_providers/openai_api_compatible/speech2text/speech2text.py
@@ -0,0 +1,63 @@
+from typing import IO, Optional
+from urllib.parse import urljoin
+
+import requests
+
+from core.model_runtime.errors.invoke import InvokeBadRequestError
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.speech2text_model import Speech2TextModel
+from core.model_runtime.model_providers.openai_api_compatible._common import _CommonOAI_API_Compat
+
+
+class OAICompatSpeech2TextModel(_CommonOAI_API_Compat, Speech2TextModel):
+ """
+ Model class for OpenAI Compatible Speech to text model.
+ """
+
+ def _invoke(
+ self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None
+ ) -> str:
+ """
+ Invoke speech2text model
+
+ :param model: model name
+ :param credentials: model credentials
+ :param file: audio file
+ :param user: unique user id
+ :return: text for given audio file
+ """
+ headers = {}
+
+ api_key = credentials.get("api_key")
+ if api_key:
+ headers["Authorization"] = f"Bearer {api_key}"
+
+ endpoint_url = credentials.get("endpoint_url")
+ if not endpoint_url.endswith("/"):
+ endpoint_url += "/"
+ endpoint_url = urljoin(endpoint_url, "audio/transcriptions")
+
+ payload = {"model": model}
+ files = [("file", file)]
+ response = requests.post(endpoint_url, headers=headers, data=payload, files=files)
+
+ if response.status_code != 200:
+ raise InvokeBadRequestError(response.text)
+ response_data = response.json()
+ return response_data["text"]
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ """
+ Validate model credentials
+
+ :param model: model name
+ :param credentials: model credentials
+ :return:
+ """
+ try:
+ audio_file_path = self._get_demo_file_path()
+
+ with open(audio_file_path, "rb") as audio_file:
+ self._invoke(model, credentials, audio_file)
+ except Exception as ex:
+ raise CredentialsValidateFailedError(str(ex))
diff --git a/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/text_embedding.py
index 3467cd6dfd..363054b084 100644
--- a/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/text_embedding.py
+++ b/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/text_embedding.py
@@ -76,7 +76,7 @@ class OAICompatEmbeddingModel(_CommonOAI_API_Compat, TextEmbeddingModel):
num_tokens = self._get_num_tokens_by_gpt2(text)
if num_tokens >= context_size:
- cutoff = int(len(text) * (np.floor(context_size / num_tokens)))
+ cutoff = int(np.floor(len(text) * (context_size / num_tokens)))
# if num tokens is larger than context length, only use the start
inputs.append(text[0: cutoff])
else:
diff --git a/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-405b-instruct.yaml b/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-405b-instruct.yaml
index 7d68e708b7..a489ce1b5a 100644
--- a/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-405b-instruct.yaml
+++ b/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-405b-instruct.yaml
@@ -4,7 +4,7 @@ label:
model_type: llm
model_properties:
mode: chat
- context_size: 128000
+ context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
@@ -15,9 +15,9 @@ parameter_rules:
required: true
default: 512
min: 1
- max: 128000
+ max: 131072
pricing:
- input: "3"
- output: "3"
+ input: "2.7"
+ output: "2.7"
unit: "0.000001"
currency: USD
diff --git a/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-70b-instruct.yaml b/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-70b-instruct.yaml
index 78e3b45435..12037411b1 100644
--- a/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-70b-instruct.yaml
+++ b/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-70b-instruct.yaml
@@ -4,7 +4,7 @@ label:
model_type: llm
model_properties:
mode: chat
- context_size: 128000
+ context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
@@ -15,9 +15,9 @@ parameter_rules:
required: true
default: 512
min: 1
- max: 128000
+ max: 131072
pricing:
- input: "0.9"
- output: "0.9"
+ input: "0.52"
+ output: "0.75"
unit: "0.000001"
currency: USD
diff --git a/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-8b-instruct.yaml b/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-8b-instruct.yaml
index 6e69b7deb7..6f06493f29 100644
--- a/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-8b-instruct.yaml
+++ b/api/core/model_runtime/model_providers/openrouter/llm/llama-3.1-8b-instruct.yaml
@@ -4,7 +4,7 @@ label:
model_type: llm
model_properties:
mode: chat
- context_size: 128000
+ context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
@@ -15,9 +15,9 @@ parameter_rules:
required: true
default: 512
min: 1
- max: 128000
+ max: 131072
pricing:
- input: "0.2"
- output: "0.2"
+ input: "0.06"
+ output: "0.06"
unit: "0.000001"
currency: USD
diff --git a/api/core/model_runtime/model_providers/perfxcloud/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/perfxcloud/text_embedding/text_embedding.py
index 5a99ad301f..11d57e3749 100644
--- a/api/core/model_runtime/model_providers/perfxcloud/text_embedding/text_embedding.py
+++ b/api/core/model_runtime/model_providers/perfxcloud/text_embedding/text_embedding.py
@@ -79,7 +79,7 @@ class OAICompatEmbeddingModel(_CommonOAI_API_Compat, TextEmbeddingModel):
num_tokens = self._get_num_tokens_by_gpt2(text)
if num_tokens >= context_size:
- cutoff = int(len(text) * (np.floor(context_size / num_tokens)))
+ cutoff = int(np.floor(len(text) * (context_size / num_tokens)))
# if num tokens is larger than context length, only use the start
inputs.append(text[0: cutoff])
else:
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/_position.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/_position.yaml
index 20bb0790c2..c2f0eb0536 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/_position.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/_position.yaml
@@ -1,8 +1,20 @@
-- deepseek-v2-chat
-- qwen2-72b-instruct
-- qwen2-57b-a14b-instruct
-- qwen2-7b-instruct
-- yi-1.5-34b-chat
-- yi-1.5-9b-chat
-- yi-1.5-6b-chat
-- glm4-9B-chat
+- Qwen/Qwen2-72B-Instruct
+- Qwen/Qwen2-57B-A14B-Instruct
+- Qwen/Qwen2-7B-Instruct
+- Qwen/Qwen2-1.5B-Instruct
+- 01-ai/Yi-1.5-34B-Chat
+- 01-ai/Yi-1.5-9B-Chat-16K
+- 01-ai/Yi-1.5-6B-Chat
+- THUDM/glm-4-9b-chat
+- deepseek-ai/DeepSeek-V2-Chat
+- deepseek-ai/DeepSeek-Coder-V2-Instruct
+- internlm/internlm2_5-7b-chat
+- google/gemma-2-27b-it
+- google/gemma-2-9b-it
+- meta-llama/Meta-Llama-3-70B-Instruct
+- meta-llama/Meta-Llama-3-8B-Instruct
+- meta-llama/Meta-Llama-3.1-405B-Instruct
+- meta-llama/Meta-Llama-3.1-70B-Instruct
+- meta-llama/Meta-Llama-3.1-8B-Instruct
+- mistralai/Mixtral-8x7B-Instruct-v0.1
+- mistralai/Mistral-7B-Instruct-v0.2
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/deepseek-v2-chat.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/deepseek-v2-chat.yaml
index 3926568db6..caa6508b5e 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/deepseek-v2-chat.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/deepseek-v2-chat.yaml
@@ -1,4 +1,4 @@
-model: deepseek-ai/deepseek-v2-chat
+model: deepseek-ai/DeepSeek-V2-Chat
label:
en_US: deepseek-ai/DeepSeek-V2-Chat
model_type: llm
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/gemma-2-27b-it.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/gemma-2-27b-it.yaml
new file mode 100644
index 0000000000..2840e3dcf4
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/gemma-2-27b-it.yaml
@@ -0,0 +1,30 @@
+model: google/gemma-2-27b-it
+label:
+ en_US: google/gemma-2-27b-it
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+  context_size: 8192
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '1.26'
+ output: '1.26'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/gemma-2-9b-it.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/gemma-2-9b-it.yaml
new file mode 100644
index 0000000000..d7e19b46f6
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/gemma-2-9b-it.yaml
@@ -0,0 +1,30 @@
+model: google/gemma-2-9b-it
+label:
+ en_US: google/gemma-2-9b-it
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+  context_size: 8192
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '0'
+ output: '0'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/glm4-9b-chat.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/glm4-9b-chat.yaml
index d6a4b21b66..9b32a02477 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/glm4-9b-chat.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/glm4-9b-chat.yaml
@@ -1,4 +1,4 @@
-model: zhipuai/glm4-9B-chat
+model: THUDM/glm-4-9b-chat
label:
en_US: THUDM/glm-4-9b-chat
model_type: llm
@@ -24,7 +24,7 @@ parameter_rules:
- name: frequency_penalty
use_template: frequency_penalty
pricing:
- input: '0.6'
- output: '0.6'
+ input: '0'
+ output: '0'
unit: '0.000001'
currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/internlm2_5-7b-chat.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/internlm2_5-7b-chat.yaml
new file mode 100644
index 0000000000..73ad4480aa
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/internlm2_5-7b-chat.yaml
@@ -0,0 +1,30 @@
+model: internlm/internlm2_5-7b-chat
+label:
+ en_US: internlm/internlm2_5-7b-chat
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '0'
+ output: '0'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3-70b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3-70b-instruct.yaml
new file mode 100644
index 0000000000..9993d781ac
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3-70b-instruct.yaml
@@ -0,0 +1,30 @@
+model: meta-llama/Meta-Llama-3-70B-Instruct
+label:
+ en_US: meta-llama/Meta-Llama-3-70B-Instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '4.13'
+ output: '4.13'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3-8b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3-8b-instruct.yaml
new file mode 100644
index 0000000000..60e3764789
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3-8b-instruct.yaml
@@ -0,0 +1,30 @@
+model: meta-llama/Meta-Llama-3-8B-Instruct
+label:
+ en_US: meta-llama/Meta-Llama-3-8B-Instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 8192
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '0'
+ output: '0'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-405b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-405b-instruct.yaml
new file mode 100644
index 0000000000..f992660aa2
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-405b-instruct.yaml
@@ -0,0 +1,30 @@
+model: meta-llama/Meta-Llama-3.1-405B-Instruct
+label:
+ en_US: meta-llama/Meta-Llama-3.1-405B-Instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '21'
+ output: '21'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-70b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-70b-instruct.yaml
new file mode 100644
index 0000000000..1c69d63a40
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-70b-instruct.yaml
@@ -0,0 +1,30 @@
+model: meta-llama/Meta-Llama-3.1-70B-Instruct
+label:
+ en_US: meta-llama/Meta-Llama-3.1-70B-Instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '4.13'
+ output: '4.13'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-8b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-8b-instruct.yaml
new file mode 100644
index 0000000000..a97002a5ca
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/meta-mlama-3.1-8b-instruct.yaml
@@ -0,0 +1,30 @@
+model: meta-llama/Meta-Llama-3.1-8B-Instruct
+label:
+ en_US: meta-llama/Meta-Llama-3.1-8B-Instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 8192
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '0'
+ output: '0'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/mistral-7b-instruct-v0.2.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/mistral-7b-instruct-v0.2.yaml
new file mode 100644
index 0000000000..27664eab6c
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/mistral-7b-instruct-v0.2.yaml
@@ -0,0 +1,30 @@
+model: mistralai/Mistral-7B-Instruct-v0.2
+label:
+ en_US: mistralai/Mistral-7B-Instruct-v0.2
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '0'
+ output: '0'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/mistral-8x7b-instruct-v0.1.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/mistral-8x7b-instruct-v0.1.yaml
new file mode 100644
index 0000000000..fd7aada428
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/mistral-8x7b-instruct-v0.1.yaml
@@ -0,0 +1,30 @@
+model: mistralai/Mixtral-8x7B-Instruct-v0.1
+label:
+ en_US: mistralai/Mixtral-8x7B-Instruct-v0.1
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '1.26'
+ output: '1.26'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-1.5b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-1.5b-instruct.yaml
new file mode 100644
index 0000000000..f6c976af8e
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-1.5b-instruct.yaml
@@ -0,0 +1,30 @@
+model: Qwen/Qwen2-1.5B-Instruct
+label:
+ en_US: Qwen/Qwen2-1.5B-Instruct
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: max_tokens
+ use_template: max_tokens
+ type: int
+ default: 512
+ min: 1
+ max: 4096
+ help:
+ zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
+ en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
+ - name: top_p
+ use_template: top_p
+ - name: frequency_penalty
+ use_template: frequency_penalty
+pricing:
+ input: '0'
+ output: '0'
+ unit: '0.000001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-57b-a14b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-57b-a14b-instruct.yaml
index 39624dc5b9..a996e919ea 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-57b-a14b-instruct.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-57b-a14b-instruct.yaml
@@ -1,4 +1,4 @@
-model: alibaba/Qwen2-57B-A14B-Instruct
+model: Qwen/Qwen2-57B-A14B-Instruct
label:
en_US: Qwen/Qwen2-57B-A14B-Instruct
model_type: llm
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-72b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-72b-instruct.yaml
index fb7ff6cb14..a6e2c22dac 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-72b-instruct.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-72b-instruct.yaml
@@ -1,4 +1,4 @@
-model: alibaba/Qwen2-72B-Instruct
+model: Qwen/Qwen2-72B-Instruct
label:
en_US: Qwen/Qwen2-72B-Instruct
model_type: llm
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-7b-instruct.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-7b-instruct.yaml
index efda4abbd9..d8bea5e129 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-7b-instruct.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/qwen2-7b-instruct.yaml
@@ -1,4 +1,4 @@
-model: alibaba/Qwen2-7B-Instruct
+model: Qwen/Qwen2-7B-Instruct
label:
en_US: Qwen/Qwen2-7B-Instruct
model_type: llm
@@ -24,7 +24,7 @@ parameter_rules:
- name: frequency_penalty
use_template: frequency_penalty
pricing:
- input: '0.35'
- output: '0.35'
+ input: '0'
+ output: '0'
unit: '0.000001'
currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-6b-chat.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-6b-chat.yaml
index 38cd4197d4..fe4c8b4b3e 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-6b-chat.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-6b-chat.yaml
@@ -24,7 +24,7 @@ parameter_rules:
- name: frequency_penalty
use_template: frequency_penalty
pricing:
- input: '0.35'
- output: '0.35'
+ input: '0'
+ output: '0'
unit: '0.000001'
currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-9b-chat.yaml b/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-9b-chat.yaml
index 042eeea81a..c61f0dc53f 100644
--- a/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-9b-chat.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/llm/yi-1.5-9b-chat.yaml
@@ -1,4 +1,4 @@
-model: 01-ai/Yi-1.5-9B-Chat
+model: 01-ai/Yi-1.5-9B-Chat-16K
label:
en_US: 01-ai/Yi-1.5-9B-Chat-16K
model_type: llm
@@ -24,7 +24,7 @@ parameter_rules:
- name: frequency_penalty
use_template: frequency_penalty
pricing:
- input: '0.42'
- output: '0.42'
+ input: '0'
+ output: '0'
unit: '0.000001'
currency: RMB
diff --git a/api/core/model_runtime/model_providers/siliconflow/siliconflow.py b/api/core/model_runtime/model_providers/siliconflow/siliconflow.py
index a53f16c929..dd0eea362a 100644
--- a/api/core/model_runtime/model_providers/siliconflow/siliconflow.py
+++ b/api/core/model_runtime/model_providers/siliconflow/siliconflow.py
@@ -6,6 +6,7 @@ from core.model_runtime.model_providers.__base.model_provider import ModelProvid
logger = logging.getLogger(__name__)
+
class SiliconflowProvider(ModelProvider):
def validate_provider_credentials(self, credentials: dict) -> None:
diff --git a/api/core/model_runtime/model_providers/siliconflow/siliconflow.yaml b/api/core/model_runtime/model_providers/siliconflow/siliconflow.yaml
index cf44c185d5..1ebb1e6d8b 100644
--- a/api/core/model_runtime/model_providers/siliconflow/siliconflow.yaml
+++ b/api/core/model_runtime/model_providers/siliconflow/siliconflow.yaml
@@ -15,6 +15,8 @@ help:
en_US: https://cloud.siliconflow.cn/keys
supported_model_types:
- llm
+ - text-embedding
+ - speech2text
configurate_methods:
- predefined-model
provider_credential_schema:
diff --git a/api/core/model_runtime/model_providers/siliconflow/speech2text/__init__.py b/api/core/model_runtime/model_providers/siliconflow/speech2text/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/siliconflow/speech2text/sense-voice-small.yaml b/api/core/model_runtime/model_providers/siliconflow/speech2text/sense-voice-small.yaml
new file mode 100644
index 0000000000..deceaf60f4
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/speech2text/sense-voice-small.yaml
@@ -0,0 +1,5 @@
+model: iic/SenseVoiceSmall
+model_type: speech2text
+model_properties:
+ file_upload_limit: 1
+ supported_file_extensions: mp3,wav
diff --git a/api/core/model_runtime/model_providers/siliconflow/speech2text/speech2text.py b/api/core/model_runtime/model_providers/siliconflow/speech2text/speech2text.py
new file mode 100644
index 0000000000..6ad3cab587
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/speech2text/speech2text.py
@@ -0,0 +1,32 @@
+from typing import IO, Optional
+
+from core.model_runtime.model_providers.openai_api_compatible.speech2text.speech2text import OAICompatSpeech2TextModel
+
+
+class SiliconflowSpeech2TextModel(OAICompatSpeech2TextModel):
+ """
+ Model class for Siliconflow Speech to text model.
+ """
+
+ def _invoke(
+ self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None
+ ) -> str:
+ """
+ Invoke speech2text model
+
+ :param model: model name
+ :param credentials: model credentials
+ :param file: audio file
+ :param user: unique user id
+ :return: text for given audio file
+ """
+ self._add_custom_parameters(credentials)
+ return super()._invoke(model, credentials, file)
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ self._add_custom_parameters(credentials)
+ return super().validate_credentials(model, credentials)
+
+ @classmethod
+ def _add_custom_parameters(cls, credentials: dict) -> None:
+ credentials["endpoint_url"] = "https://api.siliconflow.cn/v1"
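The Siliconflow speech-to-text class above is a thin subclass: it reuses the OpenAI-compatible base implementation and only pins the endpoint URL before delegating. Below is a minimal, self-contained sketch of that pattern, with a stand-in base class (the real `OAICompatSpeech2TextModel` lives in Dify's model runtime and actually performs the HTTP upload):

```python
from typing import IO, Optional


class OAICompatSpeech2TextModel:
    """Stand-in for Dify's shared OpenAI-compatible base class."""

    def _invoke(self, model: str, credentials: dict, file, user: Optional[str] = None) -> str:
        # The real base class POSTs the audio file to credentials["endpoint_url"].
        return f"transcribed via {credentials['endpoint_url']}"


class SiliconflowSpeech2TextModel(OAICompatSpeech2TextModel):
    """Thin provider subclass: inject the fixed endpoint, then delegate."""

    def _invoke(self, model: str, credentials: dict, file, user: Optional[str] = None) -> str:
        self._add_custom_parameters(credentials)
        return super()._invoke(model, credentials, file, user)

    @classmethod
    def _add_custom_parameters(cls, credentials: dict) -> None:
        credentials["endpoint_url"] = "https://api.siliconflow.cn/v1"


creds = {"api_key": "sk-test"}
out = SiliconflowSpeech2TextModel()._invoke("iic/SenseVoiceSmall", creds, file=None)
print(out)  # transcribed via https://api.siliconflow.cn/v1
```

This keeps provider-specific configuration in one classmethod, so `validate_credentials` and `_invoke` cannot drift apart in how they build the endpoint.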
diff --git a/api/core/model_runtime/model_providers/siliconflow/text_embedding/bce-embedding-base-v1.yaml b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bce-embedding-base-v1.yaml
new file mode 100644
index 0000000000..710fbc04f6
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bce-embedding-base-v1.yaml
@@ -0,0 +1,5 @@
+model: netease-youdao/bce-embedding-base_v1
+model_type: text-embedding
+model_properties:
+ context_size: 512
+ max_chunks: 1
diff --git a/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-large-en-v1.5.yaml b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-large-en-v1.5.yaml
new file mode 100644
index 0000000000..84f69b41a0
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-large-en-v1.5.yaml
@@ -0,0 +1,5 @@
+model: BAAI/bge-large-en-v1.5
+model_type: text-embedding
+model_properties:
+ context_size: 512
+ max_chunks: 1
diff --git a/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-large-zh-v1.5.yaml b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-large-zh-v1.5.yaml
new file mode 100644
index 0000000000..5248375d0b
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-large-zh-v1.5.yaml
@@ -0,0 +1,5 @@
+model: BAAI/bge-large-zh-v1.5
+model_type: text-embedding
+model_properties:
+ context_size: 512
+ max_chunks: 1
diff --git a/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-m3.yaml b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-m3.yaml
new file mode 100644
index 0000000000..f0b12dd420
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/text_embedding/bge-m3.yaml
@@ -0,0 +1,5 @@
+model: BAAI/bge-m3
+model_type: text-embedding
+model_properties:
+ context_size: 8192
+ max_chunks: 1
diff --git a/api/core/model_runtime/model_providers/siliconflow/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/siliconflow/text_embedding/text_embedding.py
new file mode 100644
index 0000000000..c58765cecb
--- /dev/null
+++ b/api/core/model_runtime/model_providers/siliconflow/text_embedding/text_embedding.py
@@ -0,0 +1,29 @@
+from typing import Optional
+
+from core.model_runtime.entities.text_embedding_entities import TextEmbeddingResult
+from core.model_runtime.model_providers.openai_api_compatible.text_embedding.text_embedding import (
+ OAICompatEmbeddingModel,
+)
+
+
+class SiliconflowTextEmbeddingModel(OAICompatEmbeddingModel):
+ """
+ Model class for Siliconflow text embedding model.
+ """
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ self._add_custom_parameters(credentials)
+ super().validate_credentials(model, credentials)
+
+ def _invoke(self, model: str, credentials: dict,
+ texts: list[str], user: Optional[str] = None) \
+ -> TextEmbeddingResult:
+ self._add_custom_parameters(credentials)
+ return super()._invoke(model, credentials, texts, user)
+
+ def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
+ self._add_custom_parameters(credentials)
+ return super().get_num_tokens(model, credentials, texts)
+
+ @classmethod
+ def _add_custom_parameters(cls, credentials: dict) -> None:
+ credentials['endpoint_url'] = 'https://api.siliconflow.cn/v1'
\ No newline at end of file
diff --git a/api/core/model_runtime/model_providers/stepfun/llm/_position.yaml b/api/core/model_runtime/model_providers/stepfun/llm/_position.yaml
index b34433e1d4..2bb0c703f4 100644
--- a/api/core/model_runtime/model_providers/stepfun/llm/_position.yaml
+++ b/api/core/model_runtime/model_providers/stepfun/llm/_position.yaml
@@ -2,5 +2,7 @@
- step-1-32k
- step-1-128k
- step-1-256k
+- step-1-flash
+- step-2-16k
- step-1v-8k
- step-1v-32k
diff --git a/api/core/model_runtime/model_providers/stepfun/llm/step-1-flash.yaml b/api/core/model_runtime/model_providers/stepfun/llm/step-1-flash.yaml
new file mode 100644
index 0000000000..afb880f2a4
--- /dev/null
+++ b/api/core/model_runtime/model_providers/stepfun/llm/step-1-flash.yaml
@@ -0,0 +1,25 @@
+model: step-1-flash
+label:
+ zh_Hans: step-1-flash
+ en_US: step-1-flash
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 8000
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: top_p
+ use_template: top_p
+ - name: max_tokens
+ use_template: max_tokens
+ default: 512
+ min: 1
+ max: 8000
+pricing:
+ input: '0.001'
+ output: '0.004'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/stepfun/llm/step-1v-32k.yaml b/api/core/model_runtime/model_providers/stepfun/llm/step-1v-32k.yaml
index f878ee3e56..08d6ad245d 100644
--- a/api/core/model_runtime/model_providers/stepfun/llm/step-1v-32k.yaml
+++ b/api/core/model_runtime/model_providers/stepfun/llm/step-1v-32k.yaml
@@ -5,6 +5,9 @@ label:
model_type: llm
features:
- vision
+ - tool-call
+ - multi-tool-call
+ - stream-tool-call
model_properties:
mode: chat
context_size: 32000
diff --git a/api/core/model_runtime/model_providers/stepfun/llm/step-1v-8k.yaml b/api/core/model_runtime/model_providers/stepfun/llm/step-1v-8k.yaml
index 6c3cb61d2c..843d14d9c6 100644
--- a/api/core/model_runtime/model_providers/stepfun/llm/step-1v-8k.yaml
+++ b/api/core/model_runtime/model_providers/stepfun/llm/step-1v-8k.yaml
@@ -5,6 +5,9 @@ label:
model_type: llm
features:
- vision
+ - tool-call
+ - multi-tool-call
+ - stream-tool-call
model_properties:
mode: chat
context_size: 8192
diff --git a/api/core/model_runtime/model_providers/stepfun/llm/step-2-16k.yaml b/api/core/model_runtime/model_providers/stepfun/llm/step-2-16k.yaml
new file mode 100644
index 0000000000..6f2dabbfb0
--- /dev/null
+++ b/api/core/model_runtime/model_providers/stepfun/llm/step-2-16k.yaml
@@ -0,0 +1,28 @@
+model: step-2-16k
+label:
+ zh_Hans: step-2-16k
+ en_US: step-2-16k
+model_type: llm
+features:
+ - agent-thought
+ - tool-call
+ - multi-tool-call
+ - stream-tool-call
+model_properties:
+ mode: chat
+ context_size: 16000
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: top_p
+ use_template: top_p
+ - name: max_tokens
+ use_template: max_tokens
+ default: 1024
+ min: 1
+ max: 16000
+pricing:
+ input: '0.038'
+ output: '0.120'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/tongyi/llm/llm.py b/api/core/model_runtime/model_providers/tongyi/llm/llm.py
index 6f768131fb..a75db78d8c 100644
--- a/api/core/model_runtime/model_providers/tongyi/llm/llm.py
+++ b/api/core/model_runtime/model_providers/tongyi/llm/llm.py
@@ -497,12 +497,13 @@ You should also complete the text started with ``` but not tell ``` directly.
content = prompt_message.content
if not content:
content = ' '
- tongyi_messages.append({
+ message = {
'role': 'assistant',
- 'content': content if not rich_content else [{"text": content}],
- 'tool_calls': [tool_call.model_dump() for tool_call in
- prompt_message.tool_calls] if prompt_message.tool_calls else None
- })
+ 'content': content if not rich_content else [{"text": content}]
+ }
+ if prompt_message.tool_calls:
+ message['tool_calls'] = [tool_call.model_dump() for tool_call in prompt_message.tool_calls]
+ tongyi_messages.append(message)
elif isinstance(prompt_message, ToolPromptMessage):
tongyi_messages.append({
"role": "tool",
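The Tongyi change above builds the assistant message incrementally so that the `tool_calls` key is only present when the message actually carries tool calls, instead of always sending `tool_calls: None` (which some chat APIs reject). A standalone sketch of that conditional-key pattern, with a simplified message shape:

```python
def to_assistant_message(content, tool_calls=None, rich_content=False):
    """Build an assistant message dict, adding 'tool_calls' only when present."""
    message = {
        "role": "assistant",
        "content": content if not rich_content else [{"text": content}],
    }
    if tool_calls:  # omit the key entirely rather than sending None
        message["tool_calls"] = [dict(tc) for tc in tool_calls]
    return message


print(to_assistant_message("hi"))
# {'role': 'assistant', 'content': 'hi'}
print(to_assistant_message("hi", tool_calls=[{"id": "call_1"}]))
# {'role': 'assistant', 'content': 'hi', 'tool_calls': [{'id': 'call_1'}]}
```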
diff --git a/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v1.yaml b/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v1.yaml
index eed09f95de..f4303c53d3 100644
--- a/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v1.yaml
+++ b/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v1.yaml
@@ -2,3 +2,8 @@ model: text-embedding-v1
model_type: text-embedding
model_properties:
context_size: 2048
+ max_chunks: 25
+pricing:
+ input: "0.0007"
+ unit: "0.001"
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v2.yaml b/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v2.yaml
index db2fa861e6..f6be3544ed 100644
--- a/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v2.yaml
+++ b/api/core/model_runtime/model_providers/tongyi/text_embedding/text-embedding-v2.yaml
@@ -2,3 +2,8 @@ model: text-embedding-v2
model_type: text-embedding
model_properties:
context_size: 2048
+ max_chunks: 25
+pricing:
+ input: "0.0007"
+ unit: "0.001"
+ currency: RMB
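The `pricing` fields added above compose as cost = tokens × `unit` × `input` price (so `input: "0.0007"` with `unit: "0.001"` means 0.0007 RMB per 1,000 tokens). A small illustrative helper, assuming that interpretation of the YAML fields:

```python
from decimal import Decimal


def embedding_cost(tokens: int, input_price: str, unit: str) -> Decimal:
    """cost = tokens * unit * price_per_unit, matching the YAML pricing fields."""
    return Decimal(tokens) * Decimal(unit) * Decimal(input_price)


# 10,000 tokens at 0.0007 RMB per 1,000 tokens -> 0.007 RMB
print(embedding_cost(10_000, "0.0007", "0.001"))
```

Using `Decimal` on the string values mirrors how billing code typically avoids binary floating-point rounding on prices.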
diff --git a/api/core/model_runtime/model_providers/tongyi/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/tongyi/text_embedding/text_embedding.py
index c207ffc1e3..e7e1b5c764 100644
--- a/api/core/model_runtime/model_providers/tongyi/text_embedding/text_embedding.py
+++ b/api/core/model_runtime/model_providers/tongyi/text_embedding/text_embedding.py
@@ -2,6 +2,7 @@ import time
from typing import Optional
import dashscope
+import numpy as np
from core.model_runtime.entities.model_entities import PriceType
from core.model_runtime.entities.text_embedding_entities import (
@@ -21,11 +22,11 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
"""
def _invoke(
- self,
- model: str,
- credentials: dict,
- texts: list[str],
- user: Optional[str] = None,
+ self,
+ model: str,
+ credentials: dict,
+ texts: list[str],
+ user: Optional[str] = None,
) -> TextEmbeddingResult:
"""
Invoke text embedding model
@@ -37,16 +38,44 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
:return: embeddings result
"""
credentials_kwargs = self._to_credential_kwargs(credentials)
- embeddings, embedding_used_tokens = self.embed_documents(
- credentials_kwargs=credentials_kwargs,
- model=model,
- texts=texts
- )
+ context_size = self._get_context_size(model, credentials)
+ max_chunks = self._get_max_chunks(model, credentials)
+ inputs = []
+ indices = []
+ used_tokens = 0
+
+ for i, text in enumerate(texts):
+
+ # Here token count is only an approximation based on the GPT2 tokenizer
+ num_tokens = self._get_num_tokens_by_gpt2(text)
+
+ if num_tokens >= context_size:
+ cutoff = int(np.floor(len(text) * (context_size / num_tokens)))
+ # if num tokens is larger than context length, only use the start
+ inputs.append(text[0:cutoff])
+ else:
+ inputs.append(text)
+ indices += [i]
+
+ batched_embeddings = []
+ _iter = range(0, len(inputs), max_chunks)
+
+ for i in _iter:
+ embeddings_batch, embedding_used_tokens = self.embed_documents(
+ credentials_kwargs=credentials_kwargs,
+ model=model,
+ texts=inputs[i : i + max_chunks],
+ )
+ used_tokens += embedding_used_tokens
+ batched_embeddings += embeddings_batch
+
+ # calc usage
+ usage = self._calc_response_usage(
+ model=model, credentials=credentials, tokens=used_tokens
+ )
return TextEmbeddingResult(
- embeddings=embeddings,
- usage=self._calc_response_usage(model, credentials_kwargs, embedding_used_tokens),
- model=model
+ embeddings=batched_embeddings, usage=usage, model=model
)
def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
@@ -79,12 +108,16 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
credentials_kwargs = self._to_credential_kwargs(credentials)
# call embedding model
- self.embed_documents(credentials_kwargs=credentials_kwargs, model=model, texts=["ping"])
+ self.embed_documents(
+ credentials_kwargs=credentials_kwargs, model=model, texts=["ping"]
+ )
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
@staticmethod
- def embed_documents(credentials_kwargs: dict, model: str, texts: list[str]) -> tuple[list[list[float]], int]:
+ def embed_documents(
+ credentials_kwargs: dict, model: str, texts: list[str]
+ ) -> tuple[list[list[float]], int]:
"""Call out to Tongyi's embedding endpoint.
Args:
@@ -102,7 +135,7 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
api_key=credentials_kwargs["dashscope_api_key"],
model=model,
input=text,
- text_type="document"
+ text_type="document",
)
data = response.output["embeddings"][0]
embeddings.append(data["embedding"])
@@ -111,7 +144,7 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
return [list(map(float, e)) for e in embeddings], embedding_used_tokens
def _calc_response_usage(
- self, model: str, credentials: dict, tokens: int
+ self, model: str, credentials: dict, tokens: int
) -> EmbeddingUsage:
"""
Calculate response usage
@@ -125,7 +158,7 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
model=model,
credentials=credentials,
price_type=PriceType.INPUT,
- tokens=tokens
+ tokens=tokens,
)
# transform usage
@@ -136,7 +169,7 @@ class TongyiTextEmbeddingModel(_CommonTongyi, TextEmbeddingModel):
price_unit=input_price_info.unit,
total_price=input_price_info.total_amount,
currency=input_price_info.currency,
- latency=time.perf_counter() - self.started_at
+ latency=time.perf_counter() - self.started_at,
)
return usage
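The rewritten Tongyi `_invoke` above does two things before calling the embedding endpoint: it proportionally truncates any text whose approximate token count exceeds the model's context size (keeping the start of the text), and it splits the inputs into batches of at most `max_chunks`. A self-contained sketch of that preprocessing, with the token counter passed in (the real code approximates with a GPT-2 tokenizer):

```python
import math


def batch_for_embedding(texts, context_size, max_chunks, count_tokens):
    """Truncate over-long texts proportionally, then split into batches."""
    inputs = []
    for text in texts:
        num_tokens = count_tokens(text)
        if num_tokens >= context_size:
            # keep roughly the first context_size tokens' worth of characters
            cutoff = int(math.floor(len(text) * (context_size / num_tokens)))
            inputs.append(text[:cutoff])
        else:
            inputs.append(text)
    return [inputs[i:i + max_chunks] for i in range(0, len(inputs), max_chunks)]


# Assume ~1 token per character purely for illustration.
batches = batch_for_embedding(
    ["a" * 3000, "short", "also short"],
    context_size=2048, max_chunks=2, count_tokens=len,
)
print([len(b) for b in batches])  # [2, 1]
print(len(batches[0][0]))         # 2048
```

The character-ratio cutoff is only an approximation: token boundaries do not map linearly onto characters, which is acceptable here because the goal is merely to stay under the provider's hard context limit.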
diff --git a/api/core/model_runtime/model_providers/tongyi/tts/tts.py b/api/core/model_runtime/model_providers/tongyi/tts/tts.py
index 655ed2d1d0..664b02cd92 100644
--- a/api/core/model_runtime/model_providers/tongyi/tts/tts.py
+++ b/api/core/model_runtime/model_providers/tongyi/tts/tts.py
@@ -1,7 +1,4 @@
-import concurrent.futures
import threading
-from functools import reduce
-from io import BytesIO
from queue import Queue
from typing import Optional
@@ -9,8 +6,6 @@ import dashscope
from dashscope import SpeechSynthesizer
from dashscope.api_entities.dashscope_response import SpeechSynthesisResponse
from dashscope.audio.tts import ResultCallback, SpeechSynthesisResult
-from flask import Response
-from pydub import AudioSegment
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.errors.validate import CredentialsValidateFailedError
@@ -55,7 +50,7 @@ class TongyiText2SpeechModel(_CommonTongyi, TTSModel):
:return: text translated to audio file
"""
try:
- self._tts_invoke(
+ self._tts_invoke_streaming(
model=model,
credentials=credentials,
content_text='Hello Dify!',
@@ -64,46 +59,6 @@ class TongyiText2SpeechModel(_CommonTongyi, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
- def _tts_invoke(self, model: str, credentials: dict, content_text: str, voice: str) -> Response:
- """
- _tts_invoke text2speech model
-
- :param model: model name
- :param credentials: model credentials
- :param voice: model timbre
- :param content_text: text content to be translated
- :return: text translated to audio file
- """
- audio_type = self._get_model_audio_type(model, credentials)
- word_limit = self._get_model_word_limit(model, credentials)
- max_workers = self._get_model_workers_limit(model, credentials)
- try:
- sentences = list(self._split_text_into_sentences(org_text=content_text, max_length=word_limit))
- audio_bytes_list = []
-
- # Create a thread pool and map the function to the list of sentences
- with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
- futures = [executor.submit(self._process_sentence, sentence=sentence,
- credentials=credentials, voice=voice, audio_type=audio_type) for sentence in
- sentences]
- for future in futures:
- try:
- if future.result():
- audio_bytes_list.append(future.result())
- except Exception as ex:
- raise InvokeBadRequestError(str(ex))
-
- if len(audio_bytes_list) > 0:
- audio_segments = [AudioSegment.from_file(BytesIO(audio_bytes), format=audio_type) for audio_bytes in
- audio_bytes_list if audio_bytes]
- combined_segment = reduce(lambda x, y: x + y, audio_segments)
- buffer: BytesIO = BytesIO()
- combined_segment.export(buffer, format=audio_type)
- buffer.seek(0)
- return Response(buffer.read(), status=200, mimetype=f"audio/{audio_type}")
- except Exception as ex:
- raise InvokeBadRequestError(str(ex))
-
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str,
voice: str) -> any:
"""
diff --git a/api/core/model_runtime/model_providers/upstage/__init__.py b/api/core/model_runtime/model_providers/upstage/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/upstage/_assets/icon_l_en.svg b/api/core/model_runtime/model_providers/upstage/_assets/icon_l_en.svg
new file mode 100644
index 0000000000..0761f85ba6
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/_assets/icon_l_en.svg
@@ -0,0 +1,14 @@
+
diff --git a/api/core/model_runtime/model_providers/upstage/_assets/icon_s_en.svg b/api/core/model_runtime/model_providers/upstage/_assets/icon_s_en.svg
new file mode 100644
index 0000000000..44ef12b730
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/_assets/icon_s_en.svg
@@ -0,0 +1,3 @@
+
diff --git a/api/core/model_runtime/model_providers/upstage/_common.py b/api/core/model_runtime/model_providers/upstage/_common.py
new file mode 100644
index 0000000000..13b73181e9
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/_common.py
@@ -0,0 +1,57 @@
+
+from collections.abc import Mapping
+
+import openai
+from httpx import Timeout
+
+from core.model_runtime.errors.invoke import (
+ InvokeAuthorizationError,
+ InvokeBadRequestError,
+ InvokeConnectionError,
+ InvokeError,
+ InvokeRateLimitError,
+ InvokeServerUnavailableError,
+)
+
+
+class _CommonUpstage:
+ def _to_credential_kwargs(self, credentials: Mapping) -> dict:
+ """
+ Transform credentials to kwargs for model instance
+
+ :param credentials:
+ :return:
+ """
+ credentials_kwargs = {
+ "api_key": credentials['upstage_api_key'],
+ "base_url": "https://api.upstage.ai/v1/solar",
+ "timeout": Timeout(315.0, read=300.0, write=20.0, connect=10.0),
+ "max_retries": 1
+ }
+
+ return credentials_kwargs
+
+ @property
+ def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+ """
+ Map model invoke error to unified error
+ The key is the error type thrown to the caller
+ The value is the error type thrown by the model,
+ which needs to be converted into a unified error type for the caller.
+
+ :return: Invoke error mapping
+ """
+ return {
+ InvokeConnectionError: [openai.APIConnectionError, openai.APITimeoutError],
+ InvokeServerUnavailableError: [openai.InternalServerError],
+ InvokeRateLimitError: [openai.RateLimitError],
+ InvokeAuthorizationError: [openai.AuthenticationError, openai.PermissionDeniedError],
+ InvokeBadRequestError: [
+ openai.BadRequestError,
+ openai.NotFoundError,
+ openai.UnprocessableEntityError,
+ openai.APIError,
+ ],
+ }
+
+
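The `_invoke_error_mapping` property above declares how raw OpenAI SDK exceptions are folded into Dify's unified `InvokeError` hierarchy. A simplified sketch of how such a mapping can be consumed, using hypothetical stand-in exception classes rather than the real SDK types:

```python
class InvokeError(Exception): ...
class InvokeConnectionError(InvokeError): ...
class InvokeRateLimitError(InvokeError): ...

# Stand-ins for provider SDK exceptions (e.g. openai.APITimeoutError).
class SDKTimeoutError(Exception): ...
class SDKRateLimitError(Exception): ...

ERROR_MAPPING = {
    InvokeConnectionError: [SDKTimeoutError],
    InvokeRateLimitError: [SDKRateLimitError],
}


def translate_error(exc: Exception) -> InvokeError:
    """Map a raw SDK exception onto the unified error hierarchy."""
    for unified, raw_types in ERROR_MAPPING.items():
        if isinstance(exc, tuple(raw_types)):
            return unified(str(exc))
    return InvokeError(str(exc))  # fallback for unmapped exception types


err = translate_error(SDKRateLimitError("429 Too Many Requests"))
print(type(err).__name__)  # InvokeRateLimitError
```

Callers can then catch one unified hierarchy regardless of which provider SDK raised the original exception.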
diff --git a/api/core/model_runtime/model_providers/upstage/llm/__init__.py b/api/core/model_runtime/model_providers/upstage/llm/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/upstage/llm/_position.yaml b/api/core/model_runtime/model_providers/upstage/llm/_position.yaml
new file mode 100644
index 0000000000..d4f03e1988
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/llm/_position.yaml
@@ -0,0 +1 @@
+- solar-1-mini-chat
diff --git a/api/core/model_runtime/model_providers/upstage/llm/llm.py b/api/core/model_runtime/model_providers/upstage/llm/llm.py
new file mode 100644
index 0000000000..d1ed4619d6
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/llm/llm.py
@@ -0,0 +1,575 @@
+import logging
+from collections.abc import Generator
+from typing import Optional, Union, cast
+
+from openai import OpenAI, Stream
+from openai.types.chat import ChatCompletion, ChatCompletionChunk, ChatCompletionMessageToolCall
+from openai.types.chat.chat_completion_chunk import ChoiceDeltaFunctionCall, ChoiceDeltaToolCall
+from openai.types.chat.chat_completion_message import FunctionCall
+from tokenizers import Tokenizer
+
+from core.model_runtime.callbacks.base_callback import Callback
+from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta
+from core.model_runtime.entities.message_entities import (
+ AssistantPromptMessage,
+ ImagePromptMessageContent,
+ PromptMessage,
+ PromptMessageContentType,
+ PromptMessageTool,
+ SystemPromptMessage,
+ TextPromptMessageContent,
+ ToolPromptMessage,
+ UserPromptMessage,
+)
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
+from core.model_runtime.model_providers.upstage._common import _CommonUpstage
+
+logger = logging.getLogger(__name__)
+
+UPSTAGE_BLOCK_MODE_PROMPT = """You should always follow the instructions and output a valid {{block}} object.
+You can find the structure of the {{block}} object in the instructions; use {"answer": "$your_answer"} as the default structure
+if you are not sure about the structure.
+
+
+{{instructions}}
+
+"""
+
+class UpstageLargeLanguageModel(_CommonUpstage, LargeLanguageModel):
+ """
+ Model class for Upstage large language model.
+ """
+
+ def _invoke(self, model: str, credentials: dict,
+ prompt_messages: list[PromptMessage], model_parameters: dict,
+ tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
+ stream: bool = True, user: Optional[str] = None) -> Union[LLMResult, Generator]:
+ """
+ Invoke large language model
+
+ :param model: model name
+ :param credentials: model credentials
+ :param prompt_messages: prompt messages
+ :param model_parameters: model parameters
+ :param tools: tools for tool calling
+ :param stop: stop words
+ :param stream: is stream response
+ :param user: unique user id
+ :return: full response or stream response chunk generator result
+ """
+
+ return self._chat_generate(
+ model=model,
+ credentials=credentials,
+ prompt_messages=prompt_messages,
+ model_parameters=model_parameters,
+ tools=tools,
+ stop=stop,
+ stream=stream,
+ user=user
+ )
+
+ def _code_block_mode_wrapper(self,
+ model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None, callbacks: Optional[list[Callback]] = None) -> Union[LLMResult, Generator]:
+ """
+ Code block mode wrapper for invoking large language model
+ """
+ if 'response_format' in model_parameters and model_parameters['response_format'] in ['JSON', 'XML']:
+ stop = stop or []
+ self._transform_chat_json_prompts(
+ model=model,
+ credentials=credentials,
+ prompt_messages=prompt_messages,
+ model_parameters=model_parameters,
+ tools=tools,
+ stop=stop,
+ stream=stream,
+ user=user,
+ response_format=model_parameters['response_format']
+ )
+ model_parameters.pop('response_format')
+
+ return self._invoke(
+ model=model,
+ credentials=credentials,
+ prompt_messages=prompt_messages,
+ model_parameters=model_parameters,
+ tools=tools,
+ stop=stop,
+ stream=stream,
+ user=user
+ )
+
+ def _transform_chat_json_prompts(self, model: str, credentials: dict,
+ prompt_messages: list[PromptMessage], model_parameters: dict,
+ tools: list[PromptMessageTool] | None = None, stop: list[str] | None = None,
+ stream: bool = True, user: str | None = None, response_format: str = 'JSON') -> None:
+ """
+ Transform json prompts
+ """
+ if stop is None:
+ stop = []
+ if "```\n" not in stop:
+ stop.append("```\n")
+ if "\n```" not in stop:
+ stop.append("\n```")
+
+ if len(prompt_messages) > 0 and isinstance(prompt_messages[0], SystemPromptMessage):
+ prompt_messages[0] = SystemPromptMessage(
+ content=UPSTAGE_BLOCK_MODE_PROMPT
+ .replace("{{instructions}}", prompt_messages[0].content)
+ .replace("{{block}}", response_format)
+ )
+ prompt_messages.append(AssistantPromptMessage(content=f"\n```{response_format}\n"))
+ else:
+ prompt_messages.insert(0, SystemPromptMessage(
+ content=UPSTAGE_BLOCK_MODE_PROMPT
+ .replace("{{instructions}}", f"Please output a valid {response_format} object.")
+ .replace("{{block}}", response_format)
+ ))
+ prompt_messages.append(AssistantPromptMessage(content=f"\n```{response_format}"))
+
+ def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], tools: Optional[list[PromptMessageTool]] = None) -> int:
+ """
+ Get number of tokens for given prompt messages
+
+ :param model: model name
+ :param credentials: model credentials
+ :param prompt_messages: prompt messages
+ :param tools: tools for tool calling
+ :return:
+ """
+ return self._num_tokens_from_messages(model, prompt_messages, tools)
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ """
+ Validate model credentials
+
+ :param model: model name
+ :param credentials: model credentials
+ :return:
+ """
+ try:
+ credentials_kwargs = self._to_credential_kwargs(credentials)
+ client = OpenAI(**credentials_kwargs)
+
+ client.chat.completions.create(
+ messages=[{"role": "user", "content": "ping"}],
+ model=model,
+ temperature=0,
+ max_tokens=10,
+ stream=False
+ )
+ except Exception as e:
+ raise CredentialsValidateFailedError(str(e))
+
+ def _chat_generate(self, model: str, credentials: dict,
+ prompt_messages: list[PromptMessage], model_parameters: dict,
+ tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
+ stream: bool = True, user: Optional[str] = None) -> Union[LLMResult, Generator]:
+ credentials_kwargs = self._to_credential_kwargs(credentials)
+ client = OpenAI(**credentials_kwargs)
+
+ extra_model_kwargs = {}
+
+ if tools:
+ extra_model_kwargs["functions"] = [{
+ "name": tool.name,
+ "description": tool.description,
+ "parameters": tool.parameters
+ } for tool in tools]
+
+ if stop:
+ extra_model_kwargs["stop"] = stop
+
+ if user:
+ extra_model_kwargs["user"] = user
+
+ # chat model
+ response = client.chat.completions.create(
+ messages=[self._convert_prompt_message_to_dict(m) for m in prompt_messages],
+ model=model,
+ stream=stream,
+ **model_parameters,
+ **extra_model_kwargs,
+ )
+
+ if stream:
+ return self._handle_chat_generate_stream_response(model, credentials, response, prompt_messages, tools)
+ return self._handle_chat_generate_response(model, credentials, response, prompt_messages, tools)
+
+ def _handle_chat_generate_response(self, model: str, credentials: dict, response: ChatCompletion,
+ prompt_messages: list[PromptMessage],
+ tools: Optional[list[PromptMessageTool]] = None) -> LLMResult:
+ """
+ Handle llm chat response
+
+ :param model: model name
+ :param credentials: credentials
+ :param response: response
+ :param prompt_messages: prompt messages
+ :param tools: tools for tool calling
+ :return: llm response
+ """
+ assistant_message = response.choices[0].message
+ # assistant_message_tool_calls = assistant_message.tool_calls
+ assistant_message_function_call = assistant_message.function_call
+
+ # extract tool calls from response
+ # tool_calls = self._extract_response_tool_calls(assistant_message_tool_calls)
+ function_call = self._extract_response_function_call(assistant_message_function_call)
+ tool_calls = [function_call] if function_call else []
+
+ # transform assistant message to prompt message
+ assistant_prompt_message = AssistantPromptMessage(
+ content=assistant_message.content,
+ tool_calls=tool_calls
+ )
+
+ # calculate num tokens
+ if response.usage:
+ # transform usage
+ prompt_tokens = response.usage.prompt_tokens
+ completion_tokens = response.usage.completion_tokens
+ else:
+ # calculate num tokens
+ prompt_tokens = self._num_tokens_from_messages(model, prompt_messages, tools)
+ completion_tokens = self._num_tokens_from_messages(model, [assistant_prompt_message])
+
+ # transform usage
+ usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)
+
+ # transform response
+ response = LLMResult(
+ model=response.model,
+ prompt_messages=prompt_messages,
+ message=assistant_prompt_message,
+ usage=usage,
+ system_fingerprint=response.system_fingerprint,
+ )
+
+ return response
+
+ def _handle_chat_generate_stream_response(self, model: str, credentials: dict, response: Stream[ChatCompletionChunk],
+ prompt_messages: list[PromptMessage],
+ tools: Optional[list[PromptMessageTool]] = None) -> Generator:
+ """
+ Handle llm chat stream response
+
+        :param model: model name
+        :param credentials: model credentials
+        :param response: response
+ :param prompt_messages: prompt messages
+ :param tools: tools for tool calling
+ :return: llm response chunk generator
+ """
+ full_assistant_content = ''
+ delta_assistant_message_function_call_storage: Optional[ChoiceDeltaFunctionCall] = None
+ prompt_tokens = 0
+ completion_tokens = 0
+ final_tool_calls = []
+ final_chunk = LLMResultChunk(
+ model=model,
+ prompt_messages=prompt_messages,
+ delta=LLMResultChunkDelta(
+ index=0,
+ message=AssistantPromptMessage(content=''),
+ )
+ )
+
+ for chunk in response:
+ if len(chunk.choices) == 0:
+ if chunk.usage:
+ # calculate num tokens
+ prompt_tokens = chunk.usage.prompt_tokens
+ completion_tokens = chunk.usage.completion_tokens
+ continue
+
+ delta = chunk.choices[0]
+ has_finish_reason = delta.finish_reason is not None
+
+ if not has_finish_reason and (delta.delta.content is None or delta.delta.content == '') and \
+ delta.delta.function_call is None:
+ continue
+
+ # assistant_message_tool_calls = delta.delta.tool_calls
+ assistant_message_function_call = delta.delta.function_call
+
+ # extract tool calls from response
+ if delta_assistant_message_function_call_storage is not None:
+ # handle process of stream function call
+ if assistant_message_function_call:
+                    # message has not ended yet
+ delta_assistant_message_function_call_storage.arguments += assistant_message_function_call.arguments
+ continue
+ else:
+ # message has ended
+ assistant_message_function_call = delta_assistant_message_function_call_storage
+ delta_assistant_message_function_call_storage = None
+ else:
+ if assistant_message_function_call:
+ # start of stream function call
+ delta_assistant_message_function_call_storage = assistant_message_function_call
+ if delta_assistant_message_function_call_storage.arguments is None:
+ delta_assistant_message_function_call_storage.arguments = ''
+ if not has_finish_reason:
+ continue
+
+ # tool_calls = self._extract_response_tool_calls(assistant_message_tool_calls)
+ function_call = self._extract_response_function_call(assistant_message_function_call)
+ tool_calls = [function_call] if function_call else []
+ if tool_calls:
+ final_tool_calls.extend(tool_calls)
+
+ # transform assistant message to prompt message
+ assistant_prompt_message = AssistantPromptMessage(
+ content=delta.delta.content if delta.delta.content else '',
+ tool_calls=tool_calls
+ )
+
+ full_assistant_content += delta.delta.content if delta.delta.content else ''
+
+ if has_finish_reason:
+ final_chunk = LLMResultChunk(
+ model=chunk.model,
+ prompt_messages=prompt_messages,
+ system_fingerprint=chunk.system_fingerprint,
+ delta=LLMResultChunkDelta(
+ index=delta.index,
+ message=assistant_prompt_message,
+ finish_reason=delta.finish_reason,
+ )
+ )
+ else:
+ yield LLMResultChunk(
+ model=chunk.model,
+ prompt_messages=prompt_messages,
+ system_fingerprint=chunk.system_fingerprint,
+ delta=LLMResultChunkDelta(
+ index=delta.index,
+ message=assistant_prompt_message,
+ )
+ )
+
+ if not prompt_tokens:
+ prompt_tokens = self._num_tokens_from_messages(model, prompt_messages, tools)
+
+ if not completion_tokens:
+ full_assistant_prompt_message = AssistantPromptMessage(
+ content=full_assistant_content,
+ tool_calls=final_tool_calls
+ )
+ completion_tokens = self._num_tokens_from_messages(model, [full_assistant_prompt_message])
+
+ # transform usage
+ usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)
+ final_chunk.delta.usage = usage
+
+ yield final_chunk
+
+ def _extract_response_tool_calls(self,
+ response_tool_calls: list[ChatCompletionMessageToolCall | ChoiceDeltaToolCall]) \
+ -> list[AssistantPromptMessage.ToolCall]:
+ """
+ Extract tool calls from response
+
+ :param response_tool_calls: response tool calls
+ :return: list of tool calls
+ """
+ tool_calls = []
+ if response_tool_calls:
+ for response_tool_call in response_tool_calls:
+ function = AssistantPromptMessage.ToolCall.ToolCallFunction(
+ name=response_tool_call.function.name,
+ arguments=response_tool_call.function.arguments
+ )
+
+ tool_call = AssistantPromptMessage.ToolCall(
+ id=response_tool_call.id,
+ type=response_tool_call.type,
+ function=function
+ )
+ tool_calls.append(tool_call)
+
+ return tool_calls
+
+ def _extract_response_function_call(self, response_function_call: FunctionCall | ChoiceDeltaFunctionCall) \
+ -> AssistantPromptMessage.ToolCall:
+ """
+ Extract function call from response
+
+ :param response_function_call: response function call
+ :return: tool call
+ """
+ tool_call = None
+ if response_function_call:
+ function = AssistantPromptMessage.ToolCall.ToolCallFunction(
+ name=response_function_call.name,
+ arguments=response_function_call.arguments
+ )
+
+ tool_call = AssistantPromptMessage.ToolCall(
+ id=response_function_call.name,
+ type="function",
+ function=function
+ )
+
+ return tool_call
+
+ def _convert_prompt_message_to_dict(self, message: PromptMessage) -> dict:
+ """
+ Convert PromptMessage to dict for Upstage API
+ """
+ if isinstance(message, UserPromptMessage):
+ message = cast(UserPromptMessage, message)
+ if isinstance(message.content, str):
+ message_dict = {"role": "user", "content": message.content}
+ else:
+ sub_messages = []
+ for message_content in message.content:
+ if message_content.type == PromptMessageContentType.TEXT:
+ message_content = cast(TextPromptMessageContent, message_content)
+ sub_message_dict = {
+ "type": "text",
+ "text": message_content.data
+ }
+ sub_messages.append(sub_message_dict)
+ elif message_content.type == PromptMessageContentType.IMAGE:
+ message_content = cast(ImagePromptMessageContent, message_content)
+ sub_message_dict = {
+ "type": "image_url",
+ "image_url": {
+ "url": message_content.data,
+ "detail": message_content.detail.value
+ }
+ }
+ sub_messages.append(sub_message_dict)
+
+ message_dict = {"role": "user", "content": sub_messages}
+ elif isinstance(message, AssistantPromptMessage):
+ message = cast(AssistantPromptMessage, message)
+ message_dict = {"role": "assistant", "content": message.content}
+ if message.tool_calls:
+ # message_dict["tool_calls"] = [tool_call.dict() for tool_call in
+ # message.tool_calls]
+ function_call = message.tool_calls[0]
+ message_dict["function_call"] = {
+ "name": function_call.function.name,
+ "arguments": function_call.function.arguments,
+ }
+ elif isinstance(message, SystemPromptMessage):
+ message = cast(SystemPromptMessage, message)
+ message_dict = {"role": "system", "content": message.content}
+ elif isinstance(message, ToolPromptMessage):
+ message = cast(ToolPromptMessage, message)
+ # message_dict = {
+ # "role": "tool",
+ # "content": message.content,
+ # "tool_call_id": message.tool_call_id
+ # }
+ message_dict = {
+ "role": "function",
+ "content": message.content,
+ "name": message.tool_call_id
+ }
+ else:
+ raise ValueError(f"Got unknown type {message}")
+
+ if message.name:
+ message_dict["name"] = message.name
+
+ return message_dict
+
+ def _get_tokenizer(self) -> Tokenizer:
+ return Tokenizer.from_pretrained("upstage/solar-1-mini-tokenizer")
+
+ def _num_tokens_from_messages(self, model: str, messages: list[PromptMessage],
+ tools: Optional[list[PromptMessageTool]] = None) -> int:
+        """
+        Calculate num tokens for Solar with the Hugging Face Solar tokenizer.
+        The tokenizer is openly available at https://huggingface.co/upstage/solar-1-mini-tokenizer
+        """
+ tokenizer = self._get_tokenizer()
+ tokens_per_message = 5 # <|im_start|>{role}\n{message}<|im_end|>
+ tokens_prefix = 1 # <|startoftext|>
+ tokens_suffix = 3 # <|im_start|>assistant\n
+
+ num_tokens = 0
+ num_tokens += tokens_prefix
+
+ messages_dict = [self._convert_prompt_message_to_dict(message) for message in messages]
+ for message in messages_dict:
+ num_tokens += tokens_per_message
+ for key, value in message.items():
+ if isinstance(value, list):
+ text = ''
+ for item in value:
+ if isinstance(item, dict) and item['type'] == 'text':
+ text += item['text']
+ value = text
+
+ if key == "tool_calls":
+ for tool_call in value:
+ for t_key, t_value in tool_call.items():
+ num_tokens += len(tokenizer.encode(t_key, add_special_tokens=False))
+ if t_key == "function":
+ for f_key, f_value in t_value.items():
+ num_tokens += len(tokenizer.encode(f_key, add_special_tokens=False))
+ num_tokens += len(tokenizer.encode(f_value, add_special_tokens=False))
+ else:
+ num_tokens += len(tokenizer.encode(t_key, add_special_tokens=False))
+ num_tokens += len(tokenizer.encode(t_value, add_special_tokens=False))
+ else:
+ num_tokens += len(tokenizer.encode(str(value), add_special_tokens=False))
+ num_tokens += tokens_suffix
+
+ if tools:
+ num_tokens += self._num_tokens_for_tools(tokenizer, tools)
+
+ return num_tokens
+
+ def _num_tokens_for_tools(self, tokenizer: Tokenizer, tools: list[PromptMessageTool]) -> int:
+ """
+ Calculate num tokens for tool calling with upstage tokenizer.
+
+ :param tokenizer: huggingface tokenizer
+ :param tools: tools for tool calling
+ :return: number of tokens
+ """
+ num_tokens = 0
+ for tool in tools:
+ num_tokens += len(tokenizer.encode('type'))
+ num_tokens += len(tokenizer.encode('function'))
+
+ # calculate num tokens for function object
+ num_tokens += len(tokenizer.encode('name'))
+ num_tokens += len(tokenizer.encode(tool.name))
+ num_tokens += len(tokenizer.encode('description'))
+ num_tokens += len(tokenizer.encode(tool.description))
+ parameters = tool.parameters
+ num_tokens += len(tokenizer.encode('parameters'))
+ if 'title' in parameters:
+ num_tokens += len(tokenizer.encode('title'))
+ num_tokens += len(tokenizer.encode(parameters.get("title")))
+ num_tokens += len(tokenizer.encode('type'))
+ num_tokens += len(tokenizer.encode(parameters.get("type")))
+ if 'properties' in parameters:
+ num_tokens += len(tokenizer.encode('properties'))
+ for key, value in parameters.get('properties').items():
+ num_tokens += len(tokenizer.encode(key))
+ for field_key, field_value in value.items():
+ num_tokens += len(tokenizer.encode(field_key))
+ if field_key == 'enum':
+ for enum_field in field_value:
+ num_tokens += 3
+ num_tokens += len(tokenizer.encode(enum_field))
+ else:
+ num_tokens += len(tokenizer.encode(field_key))
+ num_tokens += len(tokenizer.encode(str(field_value)))
+ if 'required' in parameters:
+ num_tokens += len(tokenizer.encode('required'))
+ for required_field in parameters['required']:
+ num_tokens += 3
+ num_tokens += len(tokenizer.encode(required_field))
+
+ return num_tokens
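
A standalone sketch of the block-mode transformation implemented by `_transform_chat_json_prompts` above, using plain role/content dicts instead of Dify's `PromptMessage` classes. The template text is illustrative; the real `UPSTAGE_BLOCK_MODE_PROMPT` is defined elsewhere in the provider.

```python
# Illustrative stand-in for UPSTAGE_BLOCK_MODE_PROMPT (an assumption).
BLOCK_MODE_PROMPT = (
    "You should always follow the instructions and output a valid {block} object.\n"
    "<instructions>\n{instructions}\n</instructions>"
)

def transform_json_prompts(messages, stop, response_format="JSON"):
    """Mutate messages/stop in place so the model replies inside a fenced block."""
    fence = "`" * 3  # the literal triple-backtick code fence
    # Closing fences become stop sequences, so generation ends with the block.
    for seq in (fence + "\n", "\n" + fence):
        if seq not in stop:
            stop.append(seq)

    if messages and messages[0]["role"] == "system":
        # Fold the original system prompt into the block-mode template.
        messages[0] = {
            "role": "system",
            "content": BLOCK_MODE_PROMPT
                .replace("{instructions}", messages[0]["content"])
                .replace("{block}", response_format),
        }
    else:
        messages.insert(0, {
            "role": "system",
            "content": BLOCK_MODE_PROMPT
                .replace("{instructions}", f"Please output a valid {response_format} object.")
                .replace("{block}", response_format),
        })
    # Prime the assistant turn with an opening fence so the reply starts inside it.
    messages.append({"role": "assistant", "content": "\n" + fence + response_format + "\n"})
```

The opening-fence trick plus closing-fence stop sequences is what lets `response_format` be honored without native JSON-mode support on the API side.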
diff --git a/api/core/model_runtime/model_providers/upstage/llm/solar-1-mini-chat.yaml b/api/core/model_runtime/model_providers/upstage/llm/solar-1-mini-chat.yaml
new file mode 100644
index 0000000000..787ac83f8a
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/llm/solar-1-mini-chat.yaml
@@ -0,0 +1,43 @@
+model: solar-1-mini-chat
+label:
+ zh_Hans: solar-1-mini-chat
+ en_US: solar-1-mini-chat
+ ko_KR: solar-1-mini-chat
+model_type: llm
+features:
+ - multi-tool-call
+ - agent-thought
+ - stream-tool-call
+model_properties:
+ mode: chat
+ context_size: 32768
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ - name: top_p
+ use_template: top_p
+ - name: max_tokens
+ use_template: max_tokens
+ default: 512
+ min: 1
+ max: 32768
+ - name: seed
+ label:
+ zh_Hans: 种子
+ en_US: Seed
+ type: int
+ help:
+ zh_Hans:
+ 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
+ 响应参数来监视变化。
+ en_US:
+ If specified, model will make a best effort to sample deterministically,
+ such that repeated requests with the same seed and parameters should return
+ the same result. Determinism is not guaranteed, and you should refer to the
+ system_fingerprint response parameter to monitor changes in the backend.
+ required: false
+pricing:
+ input: "0.5"
+ output: "0.5"
+ unit: "0.000001"
+ currency: USD
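
The pricing fields read as: `input`/`output` are quoted per `1/unit` tokens, i.e. USD 0.5 per million tokens here. A quick sanity check of that arithmetic (the formula is an assumption mirroring how a `unit`-scaled price is applied):

```python
from decimal import Decimal

def token_cost(tokens, price="0.5", unit="0.000001"):
    # cost = tokens * price-per-token, where price-per-token = price * unit
    return Decimal(tokens) * Decimal(price) * Decimal(unit)

print(token_cost(1_000_000))  # one million input tokens -> USD 0.5
```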
diff --git a/api/core/model_runtime/model_providers/upstage/text_embedding/__init__.py b/api/core/model_runtime/model_providers/upstage/text_embedding/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/upstage/text_embedding/solar-embedding-1-large-passage.yaml b/api/core/model_runtime/model_providers/upstage/text_embedding/solar-embedding-1-large-passage.yaml
new file mode 100644
index 0000000000..d838a5bbb1
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/text_embedding/solar-embedding-1-large-passage.yaml
@@ -0,0 +1,9 @@
+model: solar-embedding-1-large-passage
+model_type: text-embedding
+model_properties:
+ context_size: 4000
+ max_chunks: 32
+pricing:
+ input: '0.1'
+ unit: '0.000001'
+ currency: 'USD'
diff --git a/api/core/model_runtime/model_providers/upstage/text_embedding/solar-embedding-1-large-query.yaml b/api/core/model_runtime/model_providers/upstage/text_embedding/solar-embedding-1-large-query.yaml
new file mode 100644
index 0000000000..c77645cffd
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/text_embedding/solar-embedding-1-large-query.yaml
@@ -0,0 +1,9 @@
+model: solar-embedding-1-large-query
+model_type: text-embedding
+model_properties:
+ context_size: 4000
+ max_chunks: 32
+pricing:
+ input: '0.1'
+ unit: '0.000001'
+ currency: 'USD'
diff --git a/api/core/model_runtime/model_providers/upstage/text_embedding/text_embedding.py b/api/core/model_runtime/model_providers/upstage/text_embedding/text_embedding.py
new file mode 100644
index 0000000000..05ae8665d6
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/text_embedding/text_embedding.py
@@ -0,0 +1,195 @@
+import base64
+import time
+from collections.abc import Mapping
+from typing import Union
+
+import numpy as np
+from openai import OpenAI
+from tokenizers import Tokenizer
+
+from core.model_runtime.entities.model_entities import PriceType
+from core.model_runtime.entities.text_embedding_entities import EmbeddingUsage, TextEmbeddingResult
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
+from core.model_runtime.model_providers.upstage._common import _CommonUpstage
+
+
+class UpstageTextEmbeddingModel(_CommonUpstage, TextEmbeddingModel):
+ """
+ Model class for Upstage text embedding model.
+ """
+ def _get_tokenizer(self) -> Tokenizer:
+ return Tokenizer.from_pretrained("upstage/solar-1-mini-tokenizer")
+
+ def _invoke(self, model: str, credentials: dict, texts: list[str], user: str | None = None) -> TextEmbeddingResult:
+ """
+ Invoke text embedding model
+
+ :param model: model name
+ :param credentials: model credentials
+ :param texts: texts to embed
+ :param user: unique user id
+ :return: embeddings result
+ """
+
+ credentials_kwargs = self._to_credential_kwargs(credentials)
+ client = OpenAI(**credentials_kwargs)
+
+ extra_model_kwargs = {}
+ if user:
+ extra_model_kwargs["user"] = user
+ extra_model_kwargs["encoding_format"] = "base64"
+
+ context_size = self._get_context_size(model, credentials)
+ max_chunks = self._get_max_chunks(model, credentials)
+
+ embeddings: list[list[float]] = [[] for _ in range(len(texts))]
+ tokens = []
+ indices = []
+ used_tokens = 0
+
+ tokenizer = self._get_tokenizer()
+
+ for i, text in enumerate(texts):
+ token = tokenizer.encode(text, add_special_tokens=False).tokens
+ for j in range(0, len(token), context_size):
+ tokens += [token[j:j+context_size]]
+ indices += [i]
+
+ batched_embeddings = []
+ _iter = range(0, len(tokens), max_chunks)
+
+ for i in _iter:
+ embeddings_batch, embedding_used_tokens = self._embedding_invoke(
+ model=model,
+ client=client,
+ texts=tokens[i:i+max_chunks],
+ extra_model_kwargs=extra_model_kwargs,
+ )
+
+ used_tokens += embedding_used_tokens
+ batched_embeddings += embeddings_batch
+
+ results: list[list[list[float]]] = [[] for _ in range(len(texts))]
+ num_tokens_in_batch: list[list[int]] = [[] for _ in range(len(texts))]
+
+ for i in range(len(indices)):
+ results[indices[i]].append(batched_embeddings[i])
+ num_tokens_in_batch[indices[i]].append(len(tokens[i]))
+
+ for i in range(len(texts)):
+ _result = results[i]
+ if len(_result) == 0:
+ embeddings_batch, embedding_used_tokens = self._embedding_invoke(
+ model=model,
+ client=client,
+ texts=[texts[i]],
+ extra_model_kwargs=extra_model_kwargs,
+ )
+ used_tokens += embedding_used_tokens
+ average = embeddings_batch[0]
+ else:
+ average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
+ embeddings[i] = (average / np.linalg.norm(average)).tolist()
+
+ usage = self._calc_response_usage(
+ model=model,
+ credentials=credentials,
+ tokens=used_tokens
+ )
+
+ return TextEmbeddingResult(embeddings=embeddings, usage=usage, model=model)
+
+    def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
+        """
+        Get number of tokens for given prompt messages
+
+        :param model: model name
+        :param credentials: model credentials
+        :param texts: texts to embed
+        :return:
+        """
+        if len(texts) == 0:
+            return 0
+
+        tokenizer = self._get_tokenizer()
+
+ total_num_tokens = 0
+ for text in texts:
+ # calculate the number of tokens in the encoded text
+ tokenized_text = tokenizer.encode(text)
+ total_num_tokens += len(tokenized_text)
+
+ return total_num_tokens
+
+ def validate_credentials(self, model: str, credentials: Mapping) -> None:
+ """
+ Validate model credentials
+
+ :param model: model name
+ :param credentials: model credentials
+ :return:
+ """
+ try:
+ # transform credentials to kwargs for model instance
+ credentials_kwargs = self._to_credential_kwargs(credentials)
+ client = OpenAI(**credentials_kwargs)
+
+ # call embedding model
+ self._embedding_invoke(
+ model=model,
+ client=client,
+ texts=['ping'],
+ extra_model_kwargs={}
+ )
+ except Exception as ex:
+ raise CredentialsValidateFailedError(str(ex))
+
+ def _embedding_invoke(self, model: str, client: OpenAI, texts: Union[list[str], str], extra_model_kwargs: dict) -> tuple[list[list[float]], int]:
+ """
+ Invoke embedding model
+ :param model: model name
+ :param client: model client
+ :param texts: texts to embed
+ :param extra_model_kwargs: extra model kwargs
+ :return: embeddings and used tokens
+ """
+ response = client.embeddings.create(
+ model=model,
+ input=texts,
+ **extra_model_kwargs
+ )
+
+ if 'encoding_format' in extra_model_kwargs and extra_model_kwargs['encoding_format'] == 'base64':
+ return ([list(np.frombuffer(base64.b64decode(embedding.embedding), dtype=np.float32)) for embedding in response.data], response.usage.total_tokens)
+
+ return [data.embedding for data in response.data], response.usage.total_tokens
+
+ def _calc_response_usage(self, model: str, credentials: dict, tokens: int) -> EmbeddingUsage:
+ """
+ Calculate response usage
+
+ :param model: model name
+ :param credentials: model credentials
+ :param tokens: input tokens
+ :return: usage
+ """
+ input_price_info = self.get_price(
+ model=model,
+ credentials=credentials,
+ tokens=tokens,
+ price_type=PriceType.INPUT
+ )
+
+ usage = EmbeddingUsage(
+ tokens=tokens,
+ total_tokens=tokens,
+ unit_price=input_price_info.unit_price,
+ price_unit=input_price_info.unit,
+ total_price=input_price_info.total_amount,
+ currency=input_price_info.currency,
+ latency=time.perf_counter() - self.started_at
+ )
+
+ return usage
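
`_invoke` above splits long texts into context-size chunks, embeds each chunk, then recombines them as a token-count-weighted average re-normalized to unit length. A minimal sketch of that recombination step:

```python
import numpy as np

def combine_chunk_embeddings(chunk_embeddings, token_counts):
    """Token-weighted average of per-chunk embeddings, L2-normalized."""
    average = np.average(chunk_embeddings, axis=0, weights=token_counts)
    return (average / np.linalg.norm(average)).tolist()

# A 3-token chunk and a 1-token chunk: the longer chunk dominates the direction.
vec = combine_chunk_embeddings([[1.0, 0.0], [0.0, 1.0]], [3, 1])
```

Re-normalizing keeps the combined vector usable for cosine-similarity retrieval even though the weighted average alone is not unit length.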
diff --git a/api/core/model_runtime/model_providers/upstage/upstage.py b/api/core/model_runtime/model_providers/upstage/upstage.py
new file mode 100644
index 0000000000..56c91c0061
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/upstage.py
@@ -0,0 +1,32 @@
+import logging
+
+from core.model_runtime.entities.model_entities import ModelType
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.model_provider import ModelProvider
+
+logger = logging.getLogger(__name__)
+
+
+class UpstageProvider(ModelProvider):
+
+ def validate_provider_credentials(self, credentials: dict) -> None:
+ """
+ Validate provider credentials
+ if validate failed, raise exception
+
+        :param credentials: provider credentials, as defined in `provider_credential_schema`.
+ """
+ try:
+ model_instance = self.get_model_instance(ModelType.LLM)
+
+ model_instance.validate_credentials(
+ model="solar-1-mini-chat",
+ credentials=credentials
+ )
+        except CredentialsValidateFailedError as e:
+            raise e
+        except Exception as e:
+            logger.exception(f'{self.get_provider_schema().provider} credentials validate failed')
+            raise e
+
diff --git a/api/core/model_runtime/model_providers/upstage/upstage.yaml b/api/core/model_runtime/model_providers/upstage/upstage.yaml
new file mode 100644
index 0000000000..837667cfa9
--- /dev/null
+++ b/api/core/model_runtime/model_providers/upstage/upstage.yaml
@@ -0,0 +1,49 @@
+provider: upstage
+label:
+ en_US: Upstage
+description:
+ en_US: Models provided by Upstage, such as Solar-1-mini-chat.
+ zh_Hans: Upstage 提供的模型,例如 Solar-1-mini-chat.
+icon_small:
+ en_US: icon_s_en.svg
+icon_large:
+ en_US: icon_l_en.svg
+background: "#FFFFFF"
+help:
+ title:
+ en_US: Get your API Key from Upstage
+ zh_Hans: 从 Upstage 获取 API Key
+ url:
+ en_US: https://console.upstage.ai/api-keys
+supported_model_types:
+ - llm
+ - text-embedding
+configurate_methods:
+ - predefined-model
+model_credential_schema:
+ model:
+ label:
+ en_US: Model Name
+ zh_Hans: 模型名称
+ placeholder:
+ en_US: Enter your model name
+ zh_Hans: 输入模型名称
+ credential_form_schemas:
+ - variable: upstage_api_key
+ label:
+ en_US: API Key
+ type: secret-input
+ required: true
+ placeholder:
+ zh_Hans: 在此输入您的 API Key
+ en_US: Enter your API Key
+provider_credential_schema:
+ credential_form_schemas:
+ - variable: upstage_api_key
+ label:
+ en_US: API Key
+ type: secret-input
+ required: true
+ placeholder:
+ zh_Hans: 在此输入您的 API Key
+ en_US: Enter your API Key
diff --git a/api/core/model_runtime/model_providers/wenxin/llm/ernie_bot.py b/api/core/model_runtime/model_providers/wenxin/llm/ernie_bot.py
index bc7f29cf6e..e345663d36 100644
--- a/api/core/model_runtime/model_providers/wenxin/llm/ernie_bot.py
+++ b/api/core/model_runtime/model_providers/wenxin/llm/ernie_bot.py
@@ -140,8 +140,9 @@ class ErnieBotModel:
'ernie-lite-8k-0308': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-lite-8k',
'ernie-character-8k': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-char-8k',
'ernie-character-8k-0321': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-char-8k',
- 'ernie-4.0-tutbo-8k': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-4.0-turbo-8k',
- 'ernie-4.0-tutbo-8k-preview': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-4.0-turbo-8k-preview',
+ 'ernie-4.0-turbo-8k': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-4.0-turbo-8k',
+ 'ernie-4.0-turbo-8k-preview': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-4.0-turbo-8k-preview',
+ 'yi_34b_chat': 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/yi_34b_chat',
}
function_calling_supports = [
@@ -154,7 +155,8 @@ class ErnieBotModel:
'ernie-3.5-128k',
'ernie-4.0-8k',
'ernie-4.0-turbo-8k',
- 'ernie-4.0-turbo-8k-preview'
+ 'ernie-4.0-turbo-8k-preview',
+ 'yi_34b_chat'
]
api_key: str = ''
diff --git a/api/core/model_runtime/model_providers/wenxin/llm/yi_34b_chat.yaml b/api/core/model_runtime/model_providers/wenxin/llm/yi_34b_chat.yaml
new file mode 100644
index 0000000000..0b247fbd22
--- /dev/null
+++ b/api/core/model_runtime/model_providers/wenxin/llm/yi_34b_chat.yaml
@@ -0,0 +1,30 @@
+model: yi_34b_chat
+label:
+ en_US: yi_34b_chat
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 32000
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0.1
+ max: 1.0
+ default: 0.95
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1.0
+ default: 0.7
+ - name: max_tokens
+ use_template: max_tokens
+ default: 4096
+ min: 2
+ max: 4096
+ - name: presence_penalty
+ use_template: presence_penalty
+ default: 1.0
+ min: 1.0
+ max: 2.0
diff --git a/api/core/model_runtime/model_providers/xinference/rerank/rerank.py b/api/core/model_runtime/model_providers/xinference/rerank/rerank.py
index 649898f47a..4e7543fd99 100644
--- a/api/core/model_runtime/model_providers/xinference/rerank/rerank.py
+++ b/api/core/model_runtime/model_providers/xinference/rerank/rerank.py
@@ -51,22 +51,28 @@ class XinferenceRerankModel(RerankModel):
server_url = server_url[:-1]
auth_headers = {'Authorization': f'Bearer {api_key}'} if api_key else {}
+ params = {
+ 'documents': docs,
+ 'query': query,
+ 'top_n': top_n,
+ 'return_documents': True
+ }
try:
handle = RESTfulRerankModelHandle(model_uid, server_url, auth_headers)
- response = handle.rerank(
- documents=docs,
- query=query,
- top_n=top_n,
- )
+ response = handle.rerank(**params)
except RuntimeError as e:
- raise InvokeServerUnavailableError(str(e))
+ if "rerank hasn't support extra parameter" not in str(e):
+ raise InvokeServerUnavailableError(str(e))
+            # compatibility with Xinference servers v0.10.1 - v0.12.1, which do not support extra parameters such as 'return_len'
+ handle = RESTfulRerankModelHandleWithoutExtraParameter(model_uid, server_url, auth_headers)
+ response = handle.rerank(**params)
rerank_documents = []
for idx, result in enumerate(response['results']):
# format document
index = result['index']
- page_content = result['document']
+ page_content = result['document'] if isinstance(result['document'], str) else result['document']['text']
rerank_document = RerankDocument(
index=index,
text=page_content,
@@ -166,8 +172,40 @@ class XinferenceRerankModel(RerankModel):
),
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_type=ModelType.RERANK,
- model_properties={ },
+ model_properties={},
parameter_rules=[]
)
return entity
+
+
+class RESTfulRerankModelHandleWithoutExtraParameter(RESTfulRerankModelHandle):
+
+ def rerank(
+ self,
+ documents: list[str],
+ query: str,
+ top_n: Optional[int] = None,
+ max_chunks_per_doc: Optional[int] = None,
+ return_documents: Optional[bool] = None,
+ **kwargs
+ ):
+ url = f"{self._base_url}/v1/rerank"
+ request_body = {
+ "model": self._model_uid,
+ "documents": documents,
+ "query": query,
+ "top_n": top_n,
+ "max_chunks_per_doc": max_chunks_per_doc,
+ "return_documents": return_documents,
+ }
+
+ import requests
+
+ response = requests.post(url, json=request_body, headers=self.auth_headers)
+ if response.status_code != 200:
+ raise InvokeServerUnavailableError(
+ f"Failed to rerank documents, detail: {response.json()['detail']}"
+ )
+ response_data = response.json()
+ return response_data
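
Depending on server version, `result['document']` comes back either as a plain string or, with `return_documents=True` on newer Xinference servers, as a dict carrying a `text` field; the inline conditional above normalizes both shapes. As a standalone helper:

```python
def extract_document_text(result):
    """Return the reranked document text for both response shapes."""
    document = result["document"]
    # Older servers return the raw string; newer ones wrap it as {"text": ...}.
    return document if isinstance(document, str) else document["text"]

extract_document_text({"index": 0, "document": "plain string"})
extract_document_text({"index": 0, "document": {"text": "wrapped"}})
```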
diff --git a/api/core/model_runtime/model_providers/xinference/tts/__init__.py b/api/core/model_runtime/model_providers/xinference/tts/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/xinference/tts/tts.py b/api/core/model_runtime/model_providers/xinference/tts/tts.py
new file mode 100644
index 0000000000..a564a021b1
--- /dev/null
+++ b/api/core/model_runtime/model_providers/xinference/tts/tts.py
@@ -0,0 +1,242 @@
+import concurrent.futures
+from typing import Optional
+
+from xinference_client.client.restful.restful_client import RESTfulAudioModelHandle
+
+from core.model_runtime.entities.common_entities import I18nObject
+from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType
+from core.model_runtime.errors.invoke import (
+ InvokeAuthorizationError,
+ InvokeBadRequestError,
+ InvokeConnectionError,
+ InvokeError,
+ InvokeRateLimitError,
+ InvokeServerUnavailableError,
+)
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.tts_model import TTSModel
+from core.model_runtime.model_providers.xinference.xinference_helper import XinferenceHelper
+
+
+class XinferenceText2SpeechModel(TTSModel):
+
+ def __init__(self):
+ # preset voices; custom voices are not supported yet
+ self.model_voices = {
+ '__default': {
+ 'all': [
+ {'name': 'Default', 'value': 'default'},
+ ]
+ },
+ 'ChatTTS': {
+ 'all': [
+ {'name': 'Alloy', 'value': 'alloy'},
+ {'name': 'Echo', 'value': 'echo'},
+ {'name': 'Fable', 'value': 'fable'},
+ {'name': 'Onyx', 'value': 'onyx'},
+ {'name': 'Nova', 'value': 'nova'},
+ {'name': 'Shimmer', 'value': 'shimmer'},
+ ]
+ },
+ 'CosyVoice': {
+ 'zh-Hans': [
+ {'name': '中文男', 'value': '中文男'},
+ {'name': '中文女', 'value': '中文女'},
+ {'name': '粤语女', 'value': '粤语女'},
+ ],
+ 'zh-Hant': [
+ {'name': '中文男', 'value': '中文男'},
+ {'name': '中文女', 'value': '中文女'},
+ {'name': '粤语女', 'value': '粤语女'},
+ ],
+ 'en-US': [
+ {'name': '英文男', 'value': '英文男'},
+ {'name': '英文女', 'value': '英文女'},
+ ],
+ 'ja-JP': [
+ {'name': '日语男', 'value': '日语男'},
+ ],
+ 'ko-KR': [
+ {'name': '韩语女', 'value': '韩语女'},
+ ]
+ }
+ }
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ """
+ Validate model credentials
+
+ :param model: model name
+ :param credentials: model credentials
+ :return:
+ """
+ try:
+ if ("/" in credentials['model_uid'] or
+ "?" in credentials['model_uid'] or
+ "#" in credentials['model_uid']):
+ raise CredentialsValidateFailedError("model_uid should not contain /, ?, or #")
+
+ if credentials['server_url'].endswith('/'):
+ credentials['server_url'] = credentials['server_url'][:-1]
+
+ extra_param = XinferenceHelper.get_xinference_extra_parameter(
+ server_url=credentials['server_url'],
+ model_uid=credentials['model_uid']
+ )
+
+ if 'text-to-audio' not in extra_param.model_ability:
+ raise InvokeBadRequestError(
+ 'Please check the model type; the model you want to invoke is not a text-to-audio model')
+
+ if extra_param.model_family and extra_param.model_family in self.model_voices:
+ credentials['audio_model_name'] = extra_param.model_family
+ else:
+ credentials['audio_model_name'] = '__default'
+
+ # _tts_invoke_streaming is a generator; pull one chunk so the test request is actually sent
+ next(self._tts_invoke_streaming(
+ model=model,
+ credentials=credentials,
+ content_text='Hello Dify!',
+ voice=self._get_model_default_voice(model, credentials),
+ ))
+ except Exception as ex:
+ raise CredentialsValidateFailedError(str(ex))
+
+ def _invoke(self, model: str, tenant_id: str, credentials: dict, content_text: str, voice: str,
+ user: Optional[str] = None):
+ """
+ _invoke text2speech model
+
+ :param model: model name
+ :param tenant_id: user tenant id
+ :param credentials: model credentials
+ :param voice: model timbre
+ :param content_text: text content to be converted to speech
+ :param user: unique user id
+ :return: audio generated from the text
+ """
+ return self._tts_invoke_streaming(model, credentials, content_text, voice)
+
+ def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
+ """
+ used to define customizable model schema
+ """
+
+ entity = AIModelEntity(
+ model=model,
+ label=I18nObject(
+ en_US=model
+ ),
+ fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
+ model_type=ModelType.TTS,
+ model_properties={},
+ parameter_rules=[]
+ )
+
+ return entity
+
+ @property
+ def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
+ """
+ Map model invoke error to unified error
+ The key is the error type thrown to the caller
+ The value is the error type thrown by the model,
+ which needs to be converted into a unified error type for the caller.
+
+ :return: Invoke error mapping
+ """
+ return {
+ InvokeConnectionError: [
+ InvokeConnectionError
+ ],
+ InvokeServerUnavailableError: [
+ InvokeServerUnavailableError
+ ],
+ InvokeRateLimitError: [
+ InvokeRateLimitError
+ ],
+ InvokeAuthorizationError: [
+ InvokeAuthorizationError
+ ],
+ InvokeBadRequestError: [
+ InvokeBadRequestError,
+ KeyError,
+ ValueError
+ ]
+ }
+
+ def get_tts_model_voices(self, model: str, credentials: dict, language: Optional[str] = None) -> list:
+ audio_model_name = credentials.get('audio_model_name', '__default')
+ for key, voices in self.model_voices.items():
+ if key in audio_model_name:
+ if language and language in voices:
+ return voices[language]
+ elif 'all' in voices:
+ return voices['all']
+
+ return self.model_voices['__default']['all']
+
+ def _get_model_default_voice(self, model: str, credentials: dict) -> any:
+ return ""
+
+ def _get_model_word_limit(self, model: str, credentials: dict) -> int:
+ return 3500
+
+ def _get_model_audio_type(self, model: str, credentials: dict) -> str:
+ return "mp3"
+
+ def _get_model_workers_limit(self, model: str, credentials: dict) -> int:
+ return 5
+
+ def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str,
+ voice: str) -> any:
+ """
+ _tts_invoke_streaming text2speech model
+
+ :param model: model name
+ :param credentials: model credentials
+ :param content_text: text content to be converted to speech
+ :param voice: model timbre
+ :return: audio generated from the text
+ """
+ if credentials['server_url'].endswith('/'):
+ credentials['server_url'] = credentials['server_url'][:-1]
+
+ try:
+ handle = RESTfulAudioModelHandle(credentials['model_uid'], credentials['server_url'], auth_headers={})
+
+ model_support_voice = [x.get("value") for x in
+ self.get_tts_model_voices(model=model, credentials=credentials)]
+ if not voice or voice not in model_support_voice:
+ voice = self._get_model_default_voice(model, credentials)
+ word_limit = self._get_model_word_limit(model, credentials)
+ if len(content_text) > word_limit:
+ sentences = self._split_text_into_sentences(content_text, max_length=word_limit)
+ executor = concurrent.futures.ThreadPoolExecutor(max_workers=min(3, len(sentences)))
+ futures = [executor.submit(
+ handle.speech,
+ input=sentences[i],
+ voice=voice,
+ response_format="mp3",
+ speed=1.0,
+ stream=False
+ )
+ for i in range(len(sentences))]
+
+ for future in futures:
+ response = future.result()
+ for i in range(0, len(response), 1024):
+ yield response[i:i + 1024]
+ else:
+ response = handle.speech(
+ input=content_text.strip(),
+ voice=voice,
+ response_format="mp3",
+ speed=1.0,
+ stream=False
+ )
+
+ for i in range(0, len(response), 1024):
+ yield response[i:i + 1024]
+ except Exception as ex:
+ raise InvokeBadRequestError(str(ex))
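For reference, the fan-out and ordered-yield pattern used by `_tts_invoke_streaming` above can be sketched on its own. `fake_speech`, `stream_tts`, and `CHUNK_SIZE` below are illustrative stand-ins, not part of the Dify or Xinference APIs:

```python
from collections.abc import Iterator
import concurrent.futures

CHUNK_SIZE = 1024

def fake_speech(text: str) -> bytes:
    # stand-in for RESTfulAudioModelHandle.speech(); a real call returns audio bytes
    return text.encode("utf-8") * 3

def stream_tts(sentences: list[str]) -> Iterator[bytes]:
    # fan synthesis out to worker threads, then yield the results in
    # submission order as fixed-size chunks, the same shape as the diff above
    with concurrent.futures.ThreadPoolExecutor(max_workers=min(3, len(sentences))) as pool:
        futures = [pool.submit(fake_speech, s) for s in sentences]
        for future in futures:
            audio = future.result()
            for i in range(0, len(audio), CHUNK_SIZE):
                yield audio[i:i + CHUNK_SIZE]

audio = b"".join(stream_tts(["hello", "world"]))
```

Iterating the futures list (rather than `as_completed`) is what keeps the audio chunks in sentence order even when later sentences finish first.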
diff --git a/api/core/model_runtime/model_providers/xinference/xinference.yaml b/api/core/model_runtime/model_providers/xinference/xinference.yaml
index 9496c66fdd..be9073c1ca 100644
--- a/api/core/model_runtime/model_providers/xinference/xinference.yaml
+++ b/api/core/model_runtime/model_providers/xinference/xinference.yaml
@@ -17,6 +17,7 @@ supported_model_types:
- text-embedding
- rerank
- speech2text
+ - tts
configurate_methods:
- customizable-model
model_credential_schema:
@@ -32,7 +33,7 @@ model_credential_schema:
label:
zh_Hans: 服务器URL
en_US: Server url
- type: text-input
+ type: secret-input
required: true
placeholder:
zh_Hans: 在此输入Xinference的服务器地址,如 http://192.168.1.100:9997
@@ -50,7 +51,7 @@ model_credential_schema:
label:
zh_Hans: API密钥
en_US: API key
- type: text-input
+ type: secret-input
required: false
placeholder:
zh_Hans: 在此输入您的API密钥
diff --git a/api/core/model_runtime/model_providers/xinference/xinference_helper.py b/api/core/model_runtime/model_providers/xinference/xinference_helper.py
index 9a3fc9b193..7db483a485 100644
--- a/api/core/model_runtime/model_providers/xinference/xinference_helper.py
+++ b/api/core/model_runtime/model_providers/xinference/xinference_helper.py
@@ -1,5 +1,6 @@
from threading import Lock
from time import time
+from typing import Optional
from requests.adapters import HTTPAdapter
from requests.exceptions import ConnectionError, MissingSchema, Timeout
@@ -15,9 +16,11 @@ class XinferenceModelExtraParameter:
context_length: int = 2048
support_function_call: bool = False
support_vision: bool = False
+ model_family: Optional[str]
def __init__(self, model_format: str, model_handle_type: str, model_ability: list[str],
- support_function_call: bool, support_vision: bool, max_tokens: int, context_length: int) -> None:
+ support_function_call: bool, support_vision: bool, max_tokens: int, context_length: int,
+ model_family: Optional[str]) -> None:
self.model_format = model_format
self.model_handle_type = model_handle_type
self.model_ability = model_ability
@@ -25,6 +28,7 @@ class XinferenceModelExtraParameter:
self.support_vision = support_vision
self.max_tokens = max_tokens
self.context_length = context_length
+ self.model_family = model_family
cache = {}
cache_lock = Lock()
@@ -78,9 +82,16 @@ class XinferenceHelper:
model_format = response_json.get('model_format', 'ggmlv3')
model_ability = response_json.get('model_ability', [])
+ model_family = response_json.get('model_family', None)
if response_json.get('model_type') == 'embedding':
model_handle_type = 'embedding'
+ elif response_json.get('model_type') == 'audio':
+ model_handle_type = 'audio'
+ if model_family and model_family in ['ChatTTS', 'CosyVoice']:
+ model_ability.append('text-to-audio')
+ else:
+ model_ability.append('audio-to-text')
elif model_format == 'ggmlv3' and 'chatglm' in response_json['model_name']:
model_handle_type = 'chatglm'
elif 'generate' in model_ability:
@@ -88,7 +99,7 @@ class XinferenceHelper:
elif 'chat' in model_ability:
model_handle_type = 'chat'
else:
- raise NotImplementedError(f'xinference model handle type {model_handle_type} is not supported')
+ raise NotImplementedError('xinference model handle type is not supported')
support_function_call = 'tools' in model_ability
support_vision = 'vision' in model_ability
@@ -103,5 +114,6 @@ class XinferenceHelper:
support_function_call=support_function_call,
support_vision=support_vision,
max_tokens=max_tokens,
- context_length=context_length
- )
\ No newline at end of file
+ context_length=context_length,
+ model_family=model_family
+ )
diff --git a/api/core/model_runtime/model_providers/zhinao/__init__.py b/api/core/model_runtime/model_providers/zhinao/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/zhinao/_assets/icon_l_en.svg b/api/core/model_runtime/model_providers/zhinao/_assets/icon_l_en.svg
new file mode 100644
index 0000000000..b22b869441
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/_assets/icon_l_en.svg
@@ -0,0 +1,8 @@
+
+
+
diff --git a/api/core/model_runtime/model_providers/zhinao/_assets/icon_s_en.svg b/api/core/model_runtime/model_providers/zhinao/_assets/icon_s_en.svg
new file mode 100644
index 0000000000..8fe72b7d09
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/_assets/icon_s_en.svg
@@ -0,0 +1,8 @@
+
+
+
diff --git a/api/core/model_runtime/model_providers/zhinao/llm/360gpt-turbo-responsibility-8k.yaml b/api/core/model_runtime/model_providers/zhinao/llm/360gpt-turbo-responsibility-8k.yaml
new file mode 100644
index 0000000000..f420df0001
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/llm/360gpt-turbo-responsibility-8k.yaml
@@ -0,0 +1,36 @@
+model: 360gpt-turbo-responsibility-8k
+label:
+ zh_Hans: 360gpt-turbo-responsibility-8k
+ en_US: 360gpt-turbo-responsibility-8k
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 8192
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 8192
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/zhinao/llm/360gpt-turbo.yaml b/api/core/model_runtime/model_providers/zhinao/llm/360gpt-turbo.yaml
new file mode 100644
index 0000000000..a2658fbe4f
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/llm/360gpt-turbo.yaml
@@ -0,0 +1,36 @@
+model: 360gpt-turbo
+label:
+ zh_Hans: 360gpt-turbo
+ en_US: 360gpt-turbo
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 2048
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 2048
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/zhinao/llm/360gpt2-pro.yaml b/api/core/model_runtime/model_providers/zhinao/llm/360gpt2-pro.yaml
new file mode 100644
index 0000000000..00c81eb1da
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/llm/360gpt2-pro.yaml
@@ -0,0 +1,36 @@
+model: 360gpt2-pro
+label:
+ zh_Hans: 360gpt2-pro
+ en_US: 360gpt2-pro
+model_type: llm
+features:
+ - agent-thought
+model_properties:
+ mode: chat
+ context_size: 2048
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ min: 0
+ max: 1
+ default: 0.5
+ - name: top_p
+ use_template: top_p
+ min: 0
+ max: 1
+ default: 1
+ - name: max_tokens
+ use_template: max_tokens
+ min: 1
+ max: 2048
+ default: 1024
+ - name: frequency_penalty
+ use_template: frequency_penalty
+ min: -2
+ max: 2
+ default: 0
+ - name: presence_penalty
+ use_template: presence_penalty
+ min: -2
+ max: 2
+ default: 0
diff --git a/api/core/model_runtime/model_providers/zhinao/llm/__init__.py b/api/core/model_runtime/model_providers/zhinao/llm/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/model_runtime/model_providers/zhinao/llm/_position.yaml b/api/core/model_runtime/model_providers/zhinao/llm/_position.yaml
new file mode 100644
index 0000000000..ab8dbf5182
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/llm/_position.yaml
@@ -0,0 +1,3 @@
+- 360gpt2-pro
+- 360gpt-turbo
+- 360gpt-turbo-responsibility-8k
diff --git a/api/core/model_runtime/model_providers/zhinao/llm/llm.py b/api/core/model_runtime/model_providers/zhinao/llm/llm.py
new file mode 100644
index 0000000000..6930a5ed01
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/llm/llm.py
@@ -0,0 +1,25 @@
+from collections.abc import Generator
+from typing import Optional, Union
+
+from core.model_runtime.entities.llm_entities import LLMResult
+from core.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool
+from core.model_runtime.model_providers.openai_api_compatible.llm.llm import OAIAPICompatLargeLanguageModel
+
+
+class ZhinaoLargeLanguageModel(OAIAPICompatLargeLanguageModel):
+ def _invoke(self, model: str, credentials: dict,
+ prompt_messages: list[PromptMessage], model_parameters: dict,
+ tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
+ stream: bool = True, user: Optional[str] = None) \
+ -> Union[LLMResult, Generator]:
+ self._add_custom_parameters(credentials)
+ return super()._invoke(model, credentials, prompt_messages, model_parameters, tools, stop, stream, user)
+
+ def validate_credentials(self, model: str, credentials: dict) -> None:
+ self._add_custom_parameters(credentials)
+ super().validate_credentials(model, credentials)
+
+ @classmethod
+ def _add_custom_parameters(cls, credentials: dict) -> None:
+ credentials['mode'] = 'chat'
+ credentials['endpoint_url'] = 'https://api.360.cn/v1'
diff --git a/api/core/model_runtime/model_providers/zhinao/zhinao.py b/api/core/model_runtime/model_providers/zhinao/zhinao.py
new file mode 100644
index 0000000000..44b36c9f51
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/zhinao.py
@@ -0,0 +1,32 @@
+import logging
+
+from core.model_runtime.entities.model_entities import ModelType
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.__base.model_provider import ModelProvider
+
+logger = logging.getLogger(__name__)
+
+
+class ZhinaoProvider(ModelProvider):
+
+ def validate_provider_credentials(self, credentials: dict) -> None:
+ """
+ Validate provider credentials
+ if validate failed, raise exception
+
+ :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
+ """
+ try:
+ model_instance = self.get_model_instance(ModelType.LLM)
+
+ # Use the `360gpt-turbo` model for validation,
+ # regardless of whether the model passed in is a text completion or chat model
+ model_instance.validate_credentials(
+ model='360gpt-turbo',
+ credentials=credentials
+ )
+ except CredentialsValidateFailedError as ex:
+ raise ex
+ except Exception as ex:
+ logger.exception(f'{self.get_provider_schema().provider} credentials validate failed')
+ raise ex
diff --git a/api/core/model_runtime/model_providers/zhinao/zhinao.yaml b/api/core/model_runtime/model_providers/zhinao/zhinao.yaml
new file mode 100644
index 0000000000..c5cb142c47
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhinao/zhinao.yaml
@@ -0,0 +1,32 @@
+provider: zhinao
+label:
+ en_US: 360 AI
+ zh_Hans: 360 智脑
+description:
+ en_US: Models provided by 360 AI.
+ zh_Hans: 360 智脑提供的模型。
+icon_small:
+ en_US: icon_s_en.svg
+icon_large:
+ en_US: icon_l_en.svg
+background: "#e3f0ff"
+help:
+ title:
+ en_US: Get your API Key from 360 AI.
+ zh_Hans: 从360 智脑获取 API Key
+ url:
+ en_US: https://ai.360.com/platform/keys
+supported_model_types:
+ - llm
+configurate_methods:
+ - predefined-model
+provider_credential_schema:
+ credential_form_schemas:
+ - variable: api_key
+ label:
+ en_US: API Key
+ type: secret-input
+ required: true
+ placeholder:
+ zh_Hans: 在此输入您的 API Key
+ en_US: Enter your API Key
diff --git a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-0520.yaml b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-0520.yaml
index 3968e8f268..8391278e4f 100644
--- a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-0520.yaml
+++ b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-0520.yaml
@@ -37,3 +37,8 @@ parameter_rules:
default: 1024
min: 1
max: 8192
+pricing:
+ input: '0.1'
+ output: '0.1'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-air.yaml b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-air.yaml
index ae2d5e5d53..7caebd3e4b 100644
--- a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-air.yaml
+++ b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-air.yaml
@@ -37,3 +37,8 @@ parameter_rules:
default: 1024
min: 1
max: 8192
+pricing:
+ input: '0.001'
+ output: '0.001'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-airx.yaml b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-airx.yaml
index c0038a1ab2..dc123913de 100644
--- a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-airx.yaml
+++ b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-airx.yaml
@@ -37,3 +37,8 @@ parameter_rules:
default: 1024
min: 1
max: 8192
+pricing:
+ input: '0.01'
+ output: '0.01'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-flash.yaml b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-flash.yaml
index 650f9faee6..1b1d499ba7 100644
--- a/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-flash.yaml
+++ b/api/core/model_runtime/model_providers/zhipuai/llm/glm-4-flash.yaml
@@ -37,3 +37,8 @@ parameter_rules:
default: 1024
min: 1
max: 8192
+pricing:
+ input: '0.0001'
+ output: '0.0001'
+ unit: '0.001'
+ currency: RMB
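For context, the model runtime derives a request's cost from these pricing fields roughly as tokens × unit × price; the sketch below assumes that formula and is not the exact billing code:

```python
from decimal import Decimal

def estimate_cost(tokens: int, price: str, unit: str) -> Decimal:
    # e.g. one million input tokens on glm-4-flash:
    # 1_000_000 * 0.001 * 0.0001 = 0.1 RMB
    return Decimal(tokens) * Decimal(unit) * Decimal(price)

cost = estimate_cost(1_000_000, '0.0001', '0.001')
```

Keeping `price` and `unit` as strings until the `Decimal` conversion avoids the rounding drift a float intermediate would introduce.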
diff --git a/api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml b/api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml
new file mode 100644
index 0000000000..9d92e58f6c
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml
@@ -0,0 +1,33 @@
+model: glm-4-long
+label:
+ en_US: glm-4-long
+model_type: llm
+features:
+ - multi-tool-call
+ - agent-thought
+ - stream-tool-call
+model_properties:
+ mode: chat
+ context_size: 10240
+parameter_rules:
+ - name: temperature
+ use_template: temperature
+ default: 0.95
+ min: 0.0
+ max: 1.0
+ help:
+ zh_Hans: 采样温度,控制输出的随机性,必须为正数取值范围是:(0.0,1.0],不能等于 0,默认值为 0.95 值越大,会使输出更随机,更具创造性;值越小,输出会更加稳定或确定建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
+ en_US: Sampling temperature controls the randomness of the output and must be a positive number in the range (0.0, 1.0]; it cannot equal 0. The default value is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. Adjust top_p or temperature for your scenario, but do not adjust both at the same time.
+ - name: top_p
+ use_template: top_p
+ default: 0.7
+ min: 0.0
+ max: 1.0
+ help:
+ zh_Hans: 用温度取样的另一种方法,称为核取样取值范围是:(0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7 模型考虑具有 top_p 概率质量tokens的结果例如:0.1 意味着模型解码器只考虑从前 10% 的概率的候选集中取 tokens 建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
+ en_US: An alternative to temperature sampling, known as nucleus sampling. The value range is the open interval (0.0, 1.0); it cannot equal 0 or 1, and the default is 0.7. The model considers the tokens within the top_p probability mass; for example, 0.1 means the decoder only samples from the candidates in the top 10% of probability mass. Adjust top_p or temperature for your scenario, but do not adjust both at the same time.
+ - name: max_tokens
+ use_template: max_tokens
+ default: 1024
+ min: 1
+ max: 4096
diff --git a/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-2.yaml b/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-2.yaml
index faf0f818c4..f1b8b35602 100644
--- a/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-2.yaml
+++ b/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-2.yaml
@@ -1,4 +1,8 @@
model: embedding-2
model_type: text-embedding
model_properties:
- context_size: 512
+ context_size: 8192
+pricing:
+ input: '0.0005'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-3.yaml b/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-3.yaml
new file mode 100644
index 0000000000..5c55c911c4
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhipuai/text_embedding/embedding-3.yaml
@@ -0,0 +1,8 @@
+model: embedding-3
+model_type: text-embedding
+model_properties:
+ context_size: 8192
+pricing:
+ input: '0.0005'
+ unit: '0.001'
+ currency: RMB
diff --git a/api/core/moderation/input_moderation.py b/api/core/moderation/input_moderation.py
index c5dd88fb24..8157b300b1 100644
--- a/api/core/moderation/input_moderation.py
+++ b/api/core/moderation/input_moderation.py
@@ -4,7 +4,8 @@ from typing import Optional
from core.app.app_config.entities import AppConfig
from core.moderation.base import ModerationAction, ModerationException
from core.moderation.factory import ModerationFactory
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.ops.utils import measure_time
logger = logging.getLogger(__name__)
diff --git a/api/core/ops/entities/trace_entity.py b/api/core/ops/entities/trace_entity.py
index db7e0806ee..a1443f0691 100644
--- a/api/core/ops/entities/trace_entity.py
+++ b/api/core/ops/entities/trace_entity.py
@@ -1,4 +1,5 @@
from datetime import datetime
+from enum import Enum
from typing import Any, Optional, Union
from pydantic import BaseModel, ConfigDict, field_validator
@@ -105,4 +106,15 @@ trace_info_info_map = {
'DatasetRetrievalTraceInfo': DatasetRetrievalTraceInfo,
'ToolTraceInfo': ToolTraceInfo,
'GenerateNameTraceInfo': GenerateNameTraceInfo,
-}
\ No newline at end of file
+}
+
+
+class TraceTaskName(str, Enum):
+ CONVERSATION_TRACE = 'conversation'
+ WORKFLOW_TRACE = 'workflow'
+ MESSAGE_TRACE = 'message'
+ MODERATION_TRACE = 'moderation'
+ SUGGESTED_QUESTION_TRACE = 'suggested_question'
+ DATASET_RETRIEVAL_TRACE = 'dataset_retrieval'
+ TOOL_TRACE = 'tool'
+ GENERATE_NAME_TRACE = 'generate_conversation_name'
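Because `TraceTaskName` subclasses `str`, its members behave as plain strings at call sites, which is why passing either the member or its `.value` works wherever a trace name string is expected. A minimal illustration (reproducing only two members):

```python
from enum import Enum

class TraceTaskName(str, Enum):
    MESSAGE_TRACE = 'message'
    WORKFLOW_TRACE = 'workflow'

# members are real strings, so they compare equal to their plain values,
# and the enum can round-trip a raw string back to the member
name = TraceTaskName.MESSAGE_TRACE
```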
diff --git a/api/core/ops/langfuse_trace/entities/langfuse_trace_entity.py b/api/core/ops/langfuse_trace/entities/langfuse_trace_entity.py
index b90c05f4cb..af7661f0af 100644
--- a/api/core/ops/langfuse_trace/entities/langfuse_trace_entity.py
+++ b/api/core/ops/langfuse_trace/entities/langfuse_trace_entity.py
@@ -50,10 +50,11 @@ class LangfuseTrace(BaseModel):
"""
Langfuse trace model
"""
+
id: Optional[str] = Field(
default=None,
description="The id of the trace can be set, defaults to a random id. Used to link traces to external systems "
- "or when creating a distributed trace. Traces are upserted on id.",
+ "or when creating a distributed trace. Traces are upserted on id.",
)
name: Optional[str] = Field(
default=None,
@@ -68,7 +69,7 @@ class LangfuseTrace(BaseModel):
metadata: Optional[dict[str, Any]] = Field(
default=None,
description="Additional metadata of the trace. Can be any JSON object. Metadata is merged when being updated "
- "via the API.",
+ "via the API.",
)
user_id: Optional[str] = Field(
default=None,
@@ -81,22 +82,22 @@ class LangfuseTrace(BaseModel):
version: Optional[str] = Field(
default=None,
description="The version of the trace type. Used to understand how changes to the trace type affect metrics. "
- "Useful in debugging.",
+ "Useful in debugging.",
)
release: Optional[str] = Field(
default=None,
description="The release identifier of the current deployment. Used to understand how changes of different "
- "deployments affect metrics. Useful in debugging.",
+ "deployments affect metrics. Useful in debugging.",
)
tags: Optional[list[str]] = Field(
default=None,
description="Tags are used to categorize or label traces. Traces can be filtered by tags in the UI and GET "
- "API. Tags can also be changed in the UI. Tags are merged and never deleted via the API.",
+ "API. Tags can also be changed in the UI. Tags are merged and never deleted via the API.",
)
public: Optional[bool] = Field(
default=None,
description="You can make a trace public to share it via a public link. This allows others to view the trace "
- "without needing to log in or be members of your Langfuse project.",
+ "without needing to log in or be members of your Langfuse project.",
)
@field_validator("input", "output")
@@ -109,6 +110,7 @@ class LangfuseSpan(BaseModel):
"""
Langfuse span model
"""
+
id: Optional[str] = Field(
default=None,
description="The id of the span can be set, otherwise a random id is generated. Spans are upserted on id.",
@@ -140,17 +142,17 @@ class LangfuseSpan(BaseModel):
metadata: Optional[dict[str, Any]] = Field(
default=None,
description="Additional metadata of the span. Can be any JSON object. Metadata is merged when being updated "
- "via the API.",
+ "via the API.",
)
level: Optional[str] = Field(
default=None,
description="The level of the span. Can be DEBUG, DEFAULT, WARNING or ERROR. Used for sorting/filtering of "
- "traces with elevated error levels and for highlighting in the UI.",
+ "traces with elevated error levels and for highlighting in the UI.",
)
status_message: Optional[str] = Field(
default=None,
description="The status message of the span. Additional field for context of the event. E.g. the error "
- "message of an error event.",
+ "message of an error event.",
)
input: Optional[Union[str, dict[str, Any], list, None]] = Field(
default=None, description="The input of the span. Can be any JSON object."
@@ -161,7 +163,7 @@ class LangfuseSpan(BaseModel):
version: Optional[str] = Field(
default=None,
description="The version of the span type. Used to understand how changes to the span type affect metrics. "
- "Useful in debugging.",
+ "Useful in debugging.",
)
parent_observation_id: Optional[str] = Field(
default=None,
@@ -185,10 +187,9 @@ class UnitEnum(str, Enum):
class GenerationUsage(BaseModel):
promptTokens: Optional[int] = None
completionTokens: Optional[int] = None
- totalTokens: Optional[int] = None
+ total: Optional[int] = None
input: Optional[int] = None
output: Optional[int] = None
- total: Optional[int] = None
unit: Optional[UnitEnum] = None
inputCost: Optional[float] = None
outputCost: Optional[float] = None
@@ -224,15 +225,13 @@ class LangfuseGeneration(BaseModel):
completion_start_time: Optional[datetime | str] = Field(
default=None,
description="The time at which the completion started (streaming). Set it to get latency analytics broken "
- "down into time until completion started and completion duration.",
+ "down into time until completion started and completion duration.",
)
end_time: Optional[datetime | str] = Field(
default=None,
description="The time at which the generation ended. Automatically set by generation.end().",
)
- model: Optional[str] = Field(
- default=None, description="The name of the model used for the generation."
- )
+ model: Optional[str] = Field(default=None, description="The name of the model used for the generation.")
model_parameters: Optional[dict[str, Any]] = Field(
default=None,
description="The parameters of the model used for the generation; can be any key-value pairs.",
@@ -248,27 +247,27 @@ class LangfuseGeneration(BaseModel):
usage: Optional[GenerationUsage] = Field(
default=None,
description="The usage object supports the OpenAi structure with tokens and a more generic version with "
- "detailed costs and units.",
+ "detailed costs and units.",
)
metadata: Optional[dict[str, Any]] = Field(
default=None,
description="Additional metadata of the generation. Can be any JSON object. Metadata is merged when being "
- "updated via the API.",
+ "updated via the API.",
)
level: Optional[LevelEnum] = Field(
default=None,
description="The level of the generation. Can be DEBUG, DEFAULT, WARNING or ERROR. Used for sorting/filtering "
- "of traces with elevated error levels and for highlighting in the UI.",
+ "of traces with elevated error levels and for highlighting in the UI.",
)
status_message: Optional[str] = Field(
default=None,
description="The status message of the generation. Additional field for context of the event. E.g. the error "
- "message of an error event.",
+ "message of an error event.",
)
version: Optional[str] = Field(
default=None,
description="The version of the generation type. Used to understand how changes to the span type affect "
- "metrics. Useful in debugging.",
+ "metrics. Useful in debugging.",
)
model_config = ConfigDict(protected_namespaces=())
@@ -277,4 +276,3 @@ class LangfuseGeneration(BaseModel):
def ensure_dict(cls, v, info: ValidationInfo):
field_name = info.field_name
return validate_input_output(v, field_name)
-
diff --git a/api/core/ops/langfuse_trace/langfuse_trace.py b/api/core/ops/langfuse_trace/langfuse_trace.py
index cb86396420..698398e0cb 100644
--- a/api/core/ops/langfuse_trace/langfuse_trace.py
+++ b/api/core/ops/langfuse_trace/langfuse_trace.py
@@ -16,6 +16,7 @@ from core.ops.entities.trace_entity import (
ModerationTraceInfo,
SuggestedQuestionTraceInfo,
ToolTraceInfo,
+ TraceTaskName,
WorkflowTraceInfo,
)
from core.ops.langfuse_trace.entities.langfuse_trace_entity import (
@@ -65,23 +66,26 @@ class LangFuseDataTrace(BaseTraceInstance):
def workflow_trace(self, trace_info: WorkflowTraceInfo):
trace_id = trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id
+ user_id = trace_info.metadata.get("user_id")
if trace_info.message_id:
trace_id = trace_info.message_id
- name = f"message_{trace_info.message_id}"
+ name = TraceTaskName.MESSAGE_TRACE.value
trace_data = LangfuseTrace(
- id=trace_info.message_id,
- user_id=trace_info.tenant_id,
+ id=trace_id,
+ user_id=user_id,
name=name,
input=trace_info.workflow_run_inputs,
output=trace_info.workflow_run_outputs,
metadata=trace_info.metadata,
session_id=trace_info.conversation_id,
tags=["message", "workflow"],
+ created_at=trace_info.start_time,
+ updated_at=trace_info.end_time,
)
self.add_trace(langfuse_trace_data=trace_data)
workflow_span_data = LangfuseSpan(
- id=trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id,
- name=f"workflow_{trace_info.workflow_app_log_id}" if trace_info.workflow_app_log_id else f"workflow_{trace_info.workflow_run_id}",
+ id=(trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id),
+ name=TraceTaskName.WORKFLOW_TRACE.value,
input=trace_info.workflow_run_inputs,
output=trace_info.workflow_run_outputs,
trace_id=trace_id,
@@ -95,8 +99,8 @@ class LangFuseDataTrace(BaseTraceInstance):
else:
trace_data = LangfuseTrace(
id=trace_id,
- user_id=trace_info.tenant_id,
- name=f"workflow_{trace_info.workflow_app_log_id}" if trace_info.workflow_app_log_id else f"workflow_{trace_info.workflow_run_id}",
+ user_id=user_id,
+ name=TraceTaskName.WORKFLOW_TRACE.value,
input=trace_info.workflow_run_inputs,
output=trace_info.workflow_run_outputs,
metadata=trace_info.metadata,
@@ -133,14 +137,12 @@ class LangFuseDataTrace(BaseTraceInstance):
node_type = node_execution.node_type
status = node_execution.status
if node_type == "llm":
- inputs = json.loads(node_execution.process_data).get(
- "prompts", {}
- ) if node_execution.process_data else {}
+ inputs = (
+ json.loads(node_execution.process_data).get("prompts", {}) if node_execution.process_data else {}
+ )
else:
inputs = json.loads(node_execution.inputs) if node_execution.inputs else {}
- outputs = (
- json.loads(node_execution.outputs) if node_execution.outputs else {}
- )
+ outputs = json.loads(node_execution.outputs) if node_execution.outputs else {}
created_at = node_execution.created_at if node_execution.created_at else datetime.now()
elapsed_time = node_execution.elapsed_time
finished_at = created_at + timedelta(seconds=elapsed_time)
@@ -162,28 +164,30 @@ class LangFuseDataTrace(BaseTraceInstance):
if trace_info.message_id:
span_data = LangfuseSpan(
id=node_execution_id,
- name=f"{node_name}_{node_execution_id}",
+ name=node_type,
input=inputs,
output=outputs,
trace_id=trace_id,
start_time=created_at,
end_time=finished_at,
metadata=metadata,
- level=LevelEnum.DEFAULT if status == 'succeeded' else LevelEnum.ERROR,
+ level=(LevelEnum.DEFAULT if status == "succeeded" else LevelEnum.ERROR),
status_message=trace_info.error if trace_info.error else "",
- parent_observation_id=trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id,
+ parent_observation_id=(
+ trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id
+ ),
)
else:
span_data = LangfuseSpan(
id=node_execution_id,
- name=f"{node_name}_{node_execution_id}",
+ name=node_type,
input=inputs,
output=outputs,
trace_id=trace_id,
start_time=created_at,
end_time=finished_at,
metadata=metadata,
- level=LevelEnum.DEFAULT if status == 'succeeded' else LevelEnum.ERROR,
+ level=(LevelEnum.DEFAULT if status == "succeeded" else LevelEnum.ERROR),
status_message=trace_info.error if trace_info.error else "",
)
@@ -194,11 +198,11 @@ class LangFuseDataTrace(BaseTraceInstance):
total_token = metadata.get("total_tokens", 0)
# add generation
generation_usage = GenerationUsage(
- totalTokens=total_token,
+ total=total_token,
)
node_generation_data = LangfuseGeneration(
- name=f"generation_{node_execution_id}",
+ name="llm",
trace_id=trace_id,
parent_observation_id=node_execution_id,
start_time=created_at,
@@ -206,16 +210,14 @@ class LangFuseDataTrace(BaseTraceInstance):
input=inputs,
output=outputs,
metadata=metadata,
- level=LevelEnum.DEFAULT if status == 'succeeded' else LevelEnum.ERROR,
+ level=(LevelEnum.DEFAULT if status == "succeeded" else LevelEnum.ERROR),
status_message=trace_info.error if trace_info.error else "",
usage=generation_usage,
)
self.add_generation(langfuse_generation_data=node_generation_data)
- def message_trace(
- self, trace_info: MessageTraceInfo, **kwargs
- ):
+ def message_trace(self, trace_info: MessageTraceInfo, **kwargs):
# get message file data
file_list = trace_info.file_list
metadata = trace_info.metadata
@@ -224,9 +226,9 @@ class LangFuseDataTrace(BaseTraceInstance):
user_id = message_data.from_account_id
if message_data.from_end_user_id:
- end_user_data: EndUser = db.session.query(EndUser).filter(
- EndUser.id == message_data.from_end_user_id
- ).first()
+ end_user_data: EndUser = (
+ db.session.query(EndUser).filter(EndUser.id == message_data.from_end_user_id).first()
+ )
if end_user_data is not None:
user_id = end_user_data.session_id
metadata["user_id"] = user_id
@@ -234,7 +236,7 @@ class LangFuseDataTrace(BaseTraceInstance):
trace_data = LangfuseTrace(
id=message_id,
user_id=user_id,
- name=f"message_{message_id}",
+ name=TraceTaskName.MESSAGE_TRACE.value,
input={
"message": trace_info.inputs,
"files": file_list,
@@ -257,7 +259,6 @@ class LangFuseDataTrace(BaseTraceInstance):
# start add span
generation_usage = GenerationUsage(
- totalTokens=trace_info.total_tokens,
input=trace_info.message_tokens,
output=trace_info.answer_tokens,
total=trace_info.total_tokens,
@@ -266,7 +267,7 @@ class LangFuseDataTrace(BaseTraceInstance):
)
langfuse_generation_data = LangfuseGeneration(
- name=f"generation_{message_id}",
+ name="llm",
trace_id=message_id,
start_time=trace_info.start_time,
end_time=trace_info.end_time,
@@ -274,7 +275,7 @@ class LangFuseDataTrace(BaseTraceInstance):
input=trace_info.inputs,
output=message_data.answer,
metadata=metadata,
- level=LevelEnum.DEFAULT if message_data.status != 'error' else LevelEnum.ERROR,
+ level=(LevelEnum.DEFAULT if message_data.status != "error" else LevelEnum.ERROR),
status_message=message_data.error if message_data.error else "",
usage=generation_usage,
)
@@ -283,7 +284,7 @@ class LangFuseDataTrace(BaseTraceInstance):
def moderation_trace(self, trace_info: ModerationTraceInfo):
span_data = LangfuseSpan(
- name="moderation",
+ name=TraceTaskName.MODERATION_TRACE.value,
input=trace_info.inputs,
output={
"action": trace_info.action,
@@ -302,22 +303,21 @@ class LangFuseDataTrace(BaseTraceInstance):
def suggested_question_trace(self, trace_info: SuggestedQuestionTraceInfo):
message_data = trace_info.message_data
generation_usage = GenerationUsage(
- totalTokens=len(str(trace_info.suggested_question)),
+ total=len(str(trace_info.suggested_question)),
input=len(trace_info.inputs),
output=len(trace_info.suggested_question),
- total=len(trace_info.suggested_question),
unit=UnitEnum.CHARACTERS,
)
generation_data = LangfuseGeneration(
- name="suggested_question",
+ name=TraceTaskName.SUGGESTED_QUESTION_TRACE.value,
input=trace_info.inputs,
output=str(trace_info.suggested_question),
trace_id=trace_info.message_id,
start_time=trace_info.start_time,
end_time=trace_info.end_time,
metadata=trace_info.metadata,
- level=LevelEnum.DEFAULT if message_data.status != 'error' else LevelEnum.ERROR,
+ level=(LevelEnum.DEFAULT if message_data.status != "error" else LevelEnum.ERROR),
status_message=message_data.error if message_data.error else "",
usage=generation_usage,
)
@@ -326,7 +326,7 @@ class LangFuseDataTrace(BaseTraceInstance):
def dataset_retrieval_trace(self, trace_info: DatasetRetrievalTraceInfo):
dataset_retrieval_span_data = LangfuseSpan(
- name="dataset_retrieval",
+ name=TraceTaskName.DATASET_RETRIEVAL_TRACE.value,
input=trace_info.inputs,
output={"documents": trace_info.documents},
trace_id=trace_info.message_id,
@@ -346,7 +346,7 @@ class LangFuseDataTrace(BaseTraceInstance):
start_time=trace_info.start_time,
end_time=trace_info.end_time,
metadata=trace_info.metadata,
- level=LevelEnum.DEFAULT if trace_info.error == "" or trace_info.error is None else LevelEnum.ERROR,
+ level=(LevelEnum.DEFAULT if trace_info.error == "" or trace_info.error is None else LevelEnum.ERROR),
status_message=trace_info.error,
)
@@ -354,7 +354,7 @@ class LangFuseDataTrace(BaseTraceInstance):
def generate_name_trace(self, trace_info: GenerateNameTraceInfo):
name_generation_trace_data = LangfuseTrace(
- name="generate_name",
+ name=TraceTaskName.GENERATE_NAME_TRACE.value,
input=trace_info.inputs,
output=trace_info.outputs,
user_id=trace_info.tenant_id,
@@ -365,7 +365,7 @@ class LangFuseDataTrace(BaseTraceInstance):
self.add_trace(langfuse_trace_data=name_generation_trace_data)
name_generation_span_data = LangfuseSpan(
- name="generate_name",
+ name=TraceTaskName.GENERATE_NAME_TRACE.value,
input=trace_info.inputs,
output=trace_info.outputs,
trace_id=trace_info.conversation_id,
@@ -376,9 +376,7 @@ class LangFuseDataTrace(BaseTraceInstance):
self.add_span(langfuse_span_data=name_generation_span_data)
def add_trace(self, langfuse_trace_data: Optional[LangfuseTrace] = None):
- format_trace_data = (
- filter_none_values(langfuse_trace_data.model_dump()) if langfuse_trace_data else {}
- )
+ format_trace_data = filter_none_values(langfuse_trace_data.model_dump()) if langfuse_trace_data else {}
try:
self.langfuse_client.trace(**format_trace_data)
logger.debug("LangFuse Trace created successfully")
@@ -386,9 +384,7 @@ class LangFuseDataTrace(BaseTraceInstance):
raise ValueError(f"LangFuse Failed to create trace: {str(e)}")
def add_span(self, langfuse_span_data: Optional[LangfuseSpan] = None):
- format_span_data = (
- filter_none_values(langfuse_span_data.model_dump()) if langfuse_span_data else {}
- )
+ format_span_data = filter_none_values(langfuse_span_data.model_dump()) if langfuse_span_data else {}
try:
self.langfuse_client.span(**format_span_data)
logger.debug("LangFuse Span created successfully")
@@ -396,19 +392,13 @@ class LangFuseDataTrace(BaseTraceInstance):
raise ValueError(f"LangFuse Failed to create span: {str(e)}")
def update_span(self, span, langfuse_span_data: Optional[LangfuseSpan] = None):
- format_span_data = (
- filter_none_values(langfuse_span_data.model_dump()) if langfuse_span_data else {}
- )
+ format_span_data = filter_none_values(langfuse_span_data.model_dump()) if langfuse_span_data else {}
span.end(**format_span_data)
- def add_generation(
- self, langfuse_generation_data: Optional[LangfuseGeneration] = None
- ):
+ def add_generation(self, langfuse_generation_data: Optional[LangfuseGeneration] = None):
format_generation_data = (
- filter_none_values(langfuse_generation_data.model_dump())
- if langfuse_generation_data
- else {}
+ filter_none_values(langfuse_generation_data.model_dump()) if langfuse_generation_data else {}
)
try:
self.langfuse_client.generation(**format_generation_data)
@@ -416,13 +406,9 @@ class LangFuseDataTrace(BaseTraceInstance):
except Exception as e:
raise ValueError(f"LangFuse Failed to create generation: {str(e)}")
- def update_generation(
- self, generation, langfuse_generation_data: Optional[LangfuseGeneration] = None
- ):
+ def update_generation(self, generation, langfuse_generation_data: Optional[LangfuseGeneration] = None):
format_generation_data = (
- filter_none_values(langfuse_generation_data.model_dump())
- if langfuse_generation_data
- else {}
+ filter_none_values(langfuse_generation_data.model_dump()) if langfuse_generation_data else {}
)
generation.end(**format_generation_data)
diff --git a/api/core/ops/langsmith_trace/langsmith_trace.py b/api/core/ops/langsmith_trace/langsmith_trace.py
index 0ce91db335..fde8a06c61 100644
--- a/api/core/ops/langsmith_trace/langsmith_trace.py
+++ b/api/core/ops/langsmith_trace/langsmith_trace.py
@@ -15,6 +15,7 @@ from core.ops.entities.trace_entity import (
ModerationTraceInfo,
SuggestedQuestionTraceInfo,
ToolTraceInfo,
+ TraceTaskName,
WorkflowTraceInfo,
)
from core.ops.langsmith_trace.entities.langsmith_trace_entity import (
@@ -39,9 +40,7 @@ class LangSmithDataTrace(BaseTraceInstance):
self.langsmith_key = langsmith_config.api_key
self.project_name = langsmith_config.project
self.project_id = None
- self.langsmith_client = Client(
- api_key=langsmith_config.api_key, api_url=langsmith_config.endpoint
- )
+ self.langsmith_client = Client(api_key=langsmith_config.api_key, api_url=langsmith_config.endpoint)
self.file_base_url = os.getenv("FILES_URL", "http://127.0.0.1:5001")
def trace(self, trace_info: BaseTraceInfo):
@@ -64,7 +63,7 @@ class LangSmithDataTrace(BaseTraceInstance):
if trace_info.message_id:
message_run = LangSmithRunModel(
id=trace_info.message_id,
- name=f"message_{trace_info.message_id}",
+ name=TraceTaskName.MESSAGE_TRACE.value,
inputs=trace_info.workflow_run_inputs,
outputs=trace_info.workflow_run_outputs,
run_type=LangSmithRunType.chain,
@@ -73,8 +72,8 @@ class LangSmithDataTrace(BaseTraceInstance):
extra={
"metadata": trace_info.metadata,
},
- tags=["message"],
- error=trace_info.error
+ tags=["message", "workflow"],
+ error=trace_info.error,
)
self.add_run(message_run)
@@ -82,7 +81,7 @@ class LangSmithDataTrace(BaseTraceInstance):
file_list=trace_info.file_list,
total_tokens=trace_info.total_tokens,
id=trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id,
- name=f"workflow_{trace_info.workflow_app_log_id}" if trace_info.workflow_app_log_id else f"workflow_{trace_info.workflow_run_id}",
+ name=TraceTaskName.WORKFLOW_TRACE.value,
inputs=trace_info.workflow_run_inputs,
run_type=LangSmithRunType.tool,
start_time=trace_info.workflow_data.created_at,
@@ -126,22 +125,18 @@ class LangSmithDataTrace(BaseTraceInstance):
node_type = node_execution.node_type
status = node_execution.status
if node_type == "llm":
- inputs = json.loads(node_execution.process_data).get(
- "prompts", {}
- ) if node_execution.process_data else {}
+ inputs = (
+ json.loads(node_execution.process_data).get("prompts", {}) if node_execution.process_data else {}
+ )
else:
inputs = json.loads(node_execution.inputs) if node_execution.inputs else {}
- outputs = (
- json.loads(node_execution.outputs) if node_execution.outputs else {}
- )
+ outputs = json.loads(node_execution.outputs) if node_execution.outputs else {}
created_at = node_execution.created_at if node_execution.created_at else datetime.now()
elapsed_time = node_execution.elapsed_time
finished_at = created_at + timedelta(seconds=elapsed_time)
execution_metadata = (
- json.loads(node_execution.execution_metadata)
- if node_execution.execution_metadata
- else {}
+ json.loads(node_execution.execution_metadata) if node_execution.execution_metadata else {}
)
node_total_tokens = execution_metadata.get("total_tokens", 0)
@@ -168,7 +163,7 @@ class LangSmithDataTrace(BaseTraceInstance):
langsmith_run = LangSmithRunModel(
total_tokens=node_total_tokens,
- name=f"{node_name}_{node_execution_id}",
+ name=node_type,
inputs=inputs,
run_type=run_type,
start_time=created_at,
@@ -178,7 +173,9 @@ class LangSmithDataTrace(BaseTraceInstance):
extra={
"metadata": metadata,
},
- parent_run_id=trace_info.workflow_app_log_id if trace_info.workflow_app_log_id else trace_info.workflow_run_id,
+ parent_run_id=trace_info.workflow_app_log_id
+ if trace_info.workflow_app_log_id
+ else trace_info.workflow_run_id,
tags=["node_execution"],
)
@@ -198,9 +195,9 @@ class LangSmithDataTrace(BaseTraceInstance):
metadata["user_id"] = user_id
if message_data.from_end_user_id:
- end_user_data: EndUser = db.session.query(EndUser).filter(
- EndUser.id == message_data.from_end_user_id
- ).first()
+ end_user_data: EndUser = (
+ db.session.query(EndUser).filter(EndUser.id == message_data.from_end_user_id).first()
+ )
if end_user_data is not None:
end_user_id = end_user_data.session_id
metadata["end_user_id"] = end_user_id
@@ -210,7 +207,7 @@ class LangSmithDataTrace(BaseTraceInstance):
output_tokens=trace_info.answer_tokens,
total_tokens=trace_info.total_tokens,
id=message_id,
- name=f"message_{message_id}",
+ name=TraceTaskName.MESSAGE_TRACE.value,
inputs=trace_info.inputs,
run_type=LangSmithRunType.chain,
start_time=trace_info.start_time,
@@ -230,7 +227,7 @@ class LangSmithDataTrace(BaseTraceInstance):
input_tokens=trace_info.message_tokens,
output_tokens=trace_info.answer_tokens,
total_tokens=trace_info.total_tokens,
- name=f"llm_{message_id}",
+ name="llm",
inputs=trace_info.inputs,
run_type=LangSmithRunType.llm,
start_time=trace_info.start_time,
@@ -248,7 +245,7 @@ class LangSmithDataTrace(BaseTraceInstance):
def moderation_trace(self, trace_info: ModerationTraceInfo):
langsmith_run = LangSmithRunModel(
- name="moderation",
+ name=TraceTaskName.MODERATION_TRACE.value,
inputs=trace_info.inputs,
outputs={
"action": trace_info.action,
@@ -271,7 +268,7 @@ class LangSmithDataTrace(BaseTraceInstance):
def suggested_question_trace(self, trace_info: SuggestedQuestionTraceInfo):
message_data = trace_info.message_data
suggested_question_run = LangSmithRunModel(
- name="suggested_question",
+ name=TraceTaskName.SUGGESTED_QUESTION_TRACE.value,
inputs=trace_info.inputs,
outputs=trace_info.suggested_question,
run_type=LangSmithRunType.tool,
@@ -288,7 +285,7 @@ class LangSmithDataTrace(BaseTraceInstance):
def dataset_retrieval_trace(self, trace_info: DatasetRetrievalTraceInfo):
dataset_retrieval_run = LangSmithRunModel(
- name="dataset_retrieval",
+ name=TraceTaskName.DATASET_RETRIEVAL_TRACE.value,
inputs=trace_info.inputs,
outputs={"documents": trace_info.documents},
run_type=LangSmithRunType.retriever,
@@ -323,7 +320,7 @@ class LangSmithDataTrace(BaseTraceInstance):
def generate_name_trace(self, trace_info: GenerateNameTraceInfo):
name_run = LangSmithRunModel(
- name="generate_name",
+ name=TraceTaskName.GENERATE_NAME_TRACE.value,
inputs=trace_info.inputs,
outputs=trace_info.outputs,
run_type=LangSmithRunType.tool,
diff --git a/api/core/ops/ops_trace_manager.py b/api/core/ops/ops_trace_manager.py
index 4f6ab2fb94..068b490ec8 100644
--- a/api/core/ops/ops_trace_manager.py
+++ b/api/core/ops/ops_trace_manager.py
@@ -5,7 +5,6 @@ import queue
import threading
import time
from datetime import timedelta
-from enum import Enum
from typing import Any, Optional, Union
from uuid import UUID
@@ -24,6 +23,7 @@ from core.ops.entities.trace_entity import (
ModerationTraceInfo,
SuggestedQuestionTraceInfo,
ToolTraceInfo,
+ TraceTaskName,
WorkflowTraceInfo,
)
from core.ops.langfuse_trace.langfuse_trace import LangFuseDataTrace
@@ -153,27 +153,12 @@ class OpsTraceManager:
def get_ops_trace_instance(
cls,
app_id: Optional[Union[UUID, str]] = None,
- message_id: Optional[str] = None,
- conversation_id: Optional[str] = None
):
"""
Get ops trace through model config
:param app_id: app_id
- :param message_id: message_id
- :param conversation_id: conversation_id
:return:
"""
- if conversation_id is not None:
- conversation_data: Conversation = db.session.query(Conversation).filter(
- Conversation.id == conversation_id
- ).first()
- if conversation_data:
- app_id = conversation_data.app_id
-
- if message_id is not None:
- record: Message = db.session.query(Message).filter(Message.id == message_id).first()
- app_id = record.app_id
-
if isinstance(app_id, UUID):
app_id = str(app_id)
@@ -268,17 +253,6 @@ class OpsTraceManager:
return trace_instance(tracing_config).api_check()
-class TraceTaskName(str, Enum):
- CONVERSATION_TRACE = 'conversation_trace'
- WORKFLOW_TRACE = 'workflow_trace'
- MESSAGE_TRACE = 'message_trace'
- MODERATION_TRACE = 'moderation_trace'
- SUGGESTED_QUESTION_TRACE = 'suggested_question_trace'
- DATASET_RETRIEVAL_TRACE = 'dataset_retrieval_trace'
- TOOL_TRACE = 'tool_trace'
- GENERATE_NAME_TRACE = 'generate_name_trace'
-
-
class TraceTask:
def __init__(
self,
@@ -286,6 +260,7 @@ class TraceTask:
message_id: Optional[str] = None,
workflow_run: Optional[WorkflowRun] = None,
conversation_id: Optional[str] = None,
+ user_id: Optional[str] = None,
timer: Optional[Any] = None,
**kwargs
):
@@ -293,17 +268,22 @@ class TraceTask:
self.message_id = message_id
self.workflow_run = workflow_run
self.conversation_id = conversation_id
+ self.user_id = user_id
self.timer = timer
self.kwargs = kwargs
self.file_base_url = os.getenv("FILES_URL", "http://127.0.0.1:5001")
+ self.app_id = None
+
def execute(self):
return self.preprocess()
def preprocess(self):
preprocess_map = {
TraceTaskName.CONVERSATION_TRACE: lambda: self.conversation_trace(**self.kwargs),
- TraceTaskName.WORKFLOW_TRACE: lambda: self.workflow_trace(self.workflow_run, self.conversation_id),
+ TraceTaskName.WORKFLOW_TRACE: lambda: self.workflow_trace(
+ self.workflow_run, self.conversation_id, self.user_id
+ ),
TraceTaskName.MESSAGE_TRACE: lambda: self.message_trace(self.message_id),
TraceTaskName.MODERATION_TRACE: lambda: self.moderation_trace(
self.message_id, self.timer, **self.kwargs
@@ -326,7 +306,7 @@ class TraceTask:
def conversation_trace(self, **kwargs):
return kwargs
- def workflow_trace(self, workflow_run: WorkflowRun, conversation_id):
+ def workflow_trace(self, workflow_run: WorkflowRun, conversation_id, user_id):
workflow_id = workflow_run.workflow_id
tenant_id = workflow_run.tenant_id
workflow_run_id = workflow_run.id
@@ -371,6 +351,7 @@ class TraceTask:
"total_tokens": total_tokens,
"file_list": file_list,
"triggered_form": workflow_run.triggered_from,
+ "user_id": user_id,
}
workflow_trace_info = WorkflowTraceInfo(
@@ -667,13 +648,12 @@ trace_manager_batch_size = int(os.getenv("TRACE_QUEUE_MANAGER_BATCH_SIZE", 100))
class TraceQueueManager:
- def __init__(self, app_id=None, conversation_id=None, message_id=None):
+ def __init__(self, app_id=None, user_id=None):
global trace_manager_timer
self.app_id = app_id
- self.conversation_id = conversation_id
- self.message_id = message_id
- self.trace_instance = OpsTraceManager.get_ops_trace_instance(app_id, conversation_id, message_id)
+ self.user_id = user_id
+ self.trace_instance = OpsTraceManager.get_ops_trace_instance(app_id)
self.flask_app = current_app._get_current_object()
if trace_manager_timer is None:
self.start_timer()
@@ -683,6 +663,7 @@ class TraceQueueManager:
global trace_manager_queue
try:
if self.trace_instance:
+ trace_task.app_id = self.app_id
trace_manager_queue.put(trace_task)
except Exception as e:
logging.debug(f"Error adding trace task: {e}")
@@ -721,9 +702,7 @@ class TraceQueueManager:
for task in tasks:
trace_info = task.execute()
task_data = {
- "app_id": self.app_id,
- "conversation_id": self.conversation_id,
- "message_id": self.message_id,
+ "app_id": task.app_id,
"trace_info_type": type(trace_info).__name__,
"trace_info": trace_info.model_dump() if trace_info else {},
}
diff --git a/api/core/rag/data_post_processor/data_post_processor.py b/api/core/rag/data_post_processor/data_post_processor.py
index 2ed6d74187..ad9ee4f7cf 100644
--- a/api/core/rag/data_post_processor/data_post_processor.py
+++ b/api/core/rag/data_post_processor/data_post_processor.py
@@ -37,7 +37,6 @@ class DataPostProcessor:
return WeightRerankRunner(
tenant_id,
Weights(
- weight_type=weights['weight_type'],
vector_setting=VectorSetting(
vector_weight=weights['vector_setting']['vector_weight'],
embedding_provider_name=weights['vector_setting']['embedding_provider_name'],
diff --git a/api/core/rag/datasource/retrieval_service.py b/api/core/rag/datasource/retrieval_service.py
index abbf4a35a4..3932e90042 100644
--- a/api/core/rag/datasource/retrieval_service.py
+++ b/api/core/rag/datasource/retrieval_service.py
@@ -28,7 +28,7 @@ class RetrievalService:
@classmethod
def retrieve(cls, retrival_method: str, dataset_id: str, query: str,
top_k: int, score_threshold: Optional[float] = .0,
- reranking_model: Optional[dict] = None, reranking_mode: Optional[str] = None,
+ reranking_model: Optional[dict] = None, reranking_mode: Optional[str] = 'reranking_model',
weights: Optional[dict] = None):
dataset = db.session.query(Dataset).filter(
Dataset.id == dataset_id
@@ -36,10 +36,6 @@ class RetrievalService:
if not dataset or dataset.available_document_count == 0 or dataset.available_segment_count == 0:
return []
all_documents = []
- keyword_search_documents = []
- embedding_search_documents = []
- full_text_search_documents = []
- hybrid_search_documents = []
threads = []
exceptions = []
# retrieval_model source with keyword
diff --git a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py b/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py
index 442d71293f..b78e2a59b1 100644
--- a/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py
+++ b/api/core/rag/datasource/vdb/analyticdb/analyticdb_vector.py
@@ -65,8 +65,15 @@ class AnalyticdbVector(BaseVector):
AnalyticdbVector._init = True
def _initialize(self) -> None:
- self._initialize_vector_database()
- self._create_namespace_if_not_exists()
+ cache_key = f"vector_indexing_{self.config.instance_id}"
+ lock_name = f"{cache_key}_lock"
+ with redis_client.lock(lock_name, timeout=20):
+ collection_exist_cache_key = f"vector_indexing_{self.config.instance_id}"
+ if redis_client.get(collection_exist_cache_key):
+ return
+ self._initialize_vector_database()
+ self._create_namespace_if_not_exists()
+ redis_client.set(collection_exist_cache_key, 1, ex=3600)
def _initialize_vector_database(self) -> None:
from alibabacloud_gpdb20160503 import models as gpdb_20160503_models
@@ -285,9 +292,11 @@ class AnalyticdbVector(BaseVector):
documents = []
for match in response.body.matches.match:
if match.score > score_threshold:
+ metadata = json.loads(match.metadata.get("metadata_"))
doc = Document(
page_content=match.metadata.get("page_content"),
- metadata=json.loads(match.metadata.get("metadata_")),
+ vector=match.metadata.get("vector"),
+ metadata=metadata,
)
documents.append(doc)
return documents
@@ -320,7 +329,23 @@ class AnalyticdbVectorFactory(AbstractVectorFactory):
self.gen_index_struct_dict(VectorType.ANALYTICDB, collection_name)
)
- # TODO handle optional params
+ # handle optional params
+ if dify_config.ANALYTICDB_KEY_ID is None:
+ raise ValueError("ANALYTICDB_KEY_ID should not be None")
+ if dify_config.ANALYTICDB_KEY_SECRET is None:
+ raise ValueError("ANALYTICDB_KEY_SECRET should not be None")
+ if dify_config.ANALYTICDB_REGION_ID is None:
+ raise ValueError("ANALYTICDB_REGION_ID should not be None")
+ if dify_config.ANALYTICDB_INSTANCE_ID is None:
+ raise ValueError("ANALYTICDB_INSTANCE_ID should not be None")
+ if dify_config.ANALYTICDB_ACCOUNT is None:
+ raise ValueError("ANALYTICDB_ACCOUNT should not be None")
+ if dify_config.ANALYTICDB_PASSWORD is None:
+ raise ValueError("ANALYTICDB_PASSWORD should not be None")
+ if dify_config.ANALYTICDB_NAMESPACE is None:
+ raise ValueError("ANALYTICDB_NAMESPACE should not be None")
+ if dify_config.ANALYTICDB_NAMESPACE_PASSWORD is None:
+ raise ValueError("ANALYTICDB_NAMESPACE_PASSWORD should not be None")
return AnalyticdbVector(
collection_name,
AnalyticdbConfig(
diff --git a/api/core/rag/datasource/vdb/myscale/myscale_vector.py b/api/core/rag/datasource/vdb/myscale/myscale_vector.py
index 241b5a8414..cff9293baa 100644
--- a/api/core/rag/datasource/vdb/myscale/myscale_vector.py
+++ b/api/core/rag/datasource/vdb/myscale/myscale_vector.py
@@ -126,13 +126,14 @@ class MyScaleVector(BaseVector):
where_str = f"WHERE dist < {1 - score_threshold}" if \
self._metric.upper() == "COSINE" and order == SortOrder.ASC and score_threshold > 0.0 else ""
sql = f"""
- SELECT text, metadata, {dist} as dist FROM {self._config.database}.{self._collection_name}
+ SELECT text, vector, metadata, {dist} as dist FROM {self._config.database}.{self._collection_name}
{where_str} ORDER BY dist {order.value} LIMIT {top_k}
"""
try:
return [
Document(
page_content=r["text"],
+ vector=r['vector'],
metadata=r["metadata"],
)
for r in self._client.query(sql).named_results()
diff --git a/api/core/rag/datasource/vdb/opensearch/opensearch_vector.py b/api/core/rag/datasource/vdb/opensearch/opensearch_vector.py
index d834e8ce14..c95d202173 100644
--- a/api/core/rag/datasource/vdb/opensearch/opensearch_vector.py
+++ b/api/core/rag/datasource/vdb/opensearch/opensearch_vector.py
@@ -192,7 +192,9 @@ class OpenSearchVector(BaseVector):
docs = []
for hit in response['hits']['hits']:
metadata = hit['_source'].get(Field.METADATA_KEY.value)
- doc = Document(page_content=hit['_source'].get(Field.CONTENT_KEY.value), metadata=metadata)
+ vector = hit['_source'].get(Field.VECTOR.value)
+ page_content = hit['_source'].get(Field.CONTENT_KEY.value)
+ doc = Document(page_content=page_content, vector=vector, metadata=metadata)
docs.append(doc)
return docs
diff --git a/api/core/rag/datasource/vdb/oracle/oraclevector.py b/api/core/rag/datasource/vdb/oracle/oraclevector.py
index 4bd09b331d..aa2c6171c3 100644
--- a/api/core/rag/datasource/vdb/oracle/oraclevector.py
+++ b/api/core/rag/datasource/vdb/oracle/oraclevector.py
@@ -234,16 +234,16 @@ class OracleVector(BaseVector):
entities.append(token)
with self._get_cursor() as cur:
cur.execute(
- f"select meta, text FROM {self.table_name} WHERE CONTAINS(text, :1, 1) > 0 order by score(1) desc fetch first {top_k} rows only",
+ f"select meta, text, embedding FROM {self.table_name} WHERE CONTAINS(text, :1, 1) > 0 order by score(1) desc fetch first {top_k} rows only",
[" ACCUM ".join(entities)]
)
docs = []
for record in cur:
- metadata, text = record
- docs.append(Document(page_content=text, metadata=metadata))
+ metadata, text, embedding = record
+ docs.append(Document(page_content=text, vector=embedding, metadata=metadata))
return docs
else:
- return [Document(page_content="", metadata="")]
+ return [Document(page_content="", metadata={})]
return []
def delete(self) -> None:
diff --git a/api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py b/api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py
index 82bdc5d4b9..a48224070f 100644
--- a/api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py
+++ b/api/core/rag/datasource/vdb/pgvecto_rs/pgvecto_rs.py
@@ -4,7 +4,7 @@ from typing import Any
from uuid import UUID, uuid4
from numpy import ndarray
-from pgvecto_rs.sqlalchemy import Vector
+from pgvecto_rs.sqlalchemy import VECTOR
from pydantic import BaseModel, model_validator
from sqlalchemy import Float, String, create_engine, insert, select, text
from sqlalchemy import text as sql_text
@@ -67,7 +67,7 @@ class PGVectoRS(BaseVector):
)
text: Mapped[str] = mapped_column(String)
meta: Mapped[dict] = mapped_column(postgresql.JSONB)
- vector: Mapped[ndarray] = mapped_column(Vector(dim))
+ vector: Mapped[ndarray] = mapped_column(VECTOR(dim))
self._table = _Table
self._distance_op = "<=>"
diff --git a/api/core/rag/datasource/vdb/qdrant/qdrant_vector.py b/api/core/rag/datasource/vdb/qdrant/qdrant_vector.py
index 77c3f6a271..297bff928e 100644
--- a/api/core/rag/datasource/vdb/qdrant/qdrant_vector.py
+++ b/api/core/rag/datasource/vdb/qdrant/qdrant_vector.py
@@ -399,7 +399,6 @@ class QdrantVector(BaseVector):
document = self._document_from_scored_point(
result, Field.CONTENT_KEY.value, Field.METADATA_KEY.value
)
- document.metadata['vector'] = result.vector
documents.append(document)
return documents
@@ -418,6 +417,7 @@ class QdrantVector(BaseVector):
) -> Document:
return Document(
page_content=scored_point.payload.get(content_payload_key),
+ vector=scored_point.vector,
metadata=scored_point.payload.get(metadata_payload_key) or {},
)
diff --git a/api/core/rag/datasource/vdb/relyt/relyt_vector.py b/api/core/rag/datasource/vdb/relyt/relyt_vector.py
index 2e0bd6f303..63ad0682d7 100644
--- a/api/core/rag/datasource/vdb/relyt/relyt_vector.py
+++ b/api/core/rag/datasource/vdb/relyt/relyt_vector.py
@@ -105,7 +105,7 @@ class RelytVector(BaseVector):
redis_client.set(collection_exist_cache_key, 1, ex=3600)
def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs):
- from pgvecto_rs.sqlalchemy import Vector
+ from pgvecto_rs.sqlalchemy import VECTOR
ids = [str(uuid.uuid1()) for _ in documents]
metadatas = [d.metadata for d in documents]
@@ -118,7 +118,7 @@ class RelytVector(BaseVector):
self._collection_name,
Base.metadata,
Column("id", TEXT, primary_key=True),
- Column("embedding", Vector(len(embeddings[0]))),
+ Column("embedding", VECTOR(len(embeddings[0]))),
Column("document", String, nullable=True),
Column("metadata", JSON, nullable=True),
extend_existing=True,
@@ -169,7 +169,7 @@ class RelytVector(BaseVector):
Args:
ids: List of ids to delete.
"""
- from pgvecto_rs.sqlalchemy import Vector
+ from pgvecto_rs.sqlalchemy import VECTOR
if ids is None:
raise ValueError("No ids provided to delete.")
@@ -179,7 +179,7 @@ class RelytVector(BaseVector):
self._collection_name,
Base.metadata,
Column("id", TEXT, primary_key=True),
- Column("embedding", Vector(self.embedding_dimension)),
+ Column("embedding", VECTOR(self.embedding_dimension)),
Column("document", String, nullable=True),
Column("metadata", JSON, nullable=True),
extend_existing=True,
diff --git a/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py b/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py
index 87fc5ff158..205fe850c3 100644
--- a/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py
+++ b/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py
@@ -239,8 +239,7 @@ class WeaviateVector(BaseVector):
query_obj = self._client.query.get(collection_name, properties)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
- if kwargs.get("additional"):
- query_obj = query_obj.with_additional(kwargs.get("additional"))
+ query_obj = query_obj.with_additional(["vector"])
properties = ['text']
result = query_obj.with_bm25(query=query, properties=properties).with_limit(kwargs.get('top_k', 2)).do()
if "errors" in result:
@@ -248,7 +247,8 @@ class WeaviateVector(BaseVector):
docs = []
for res in result["data"]["Get"][collection_name]:
text = res.pop(Field.TEXT_KEY.value)
- docs.append(Document(page_content=text, metadata=res))
+ additional = res.pop('_additional')
+ docs.append(Document(page_content=text, vector=additional['vector'], metadata=res))
return docs
def _default_schema(self, index_name: str) -> dict:
diff --git a/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py b/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py
index 34a4e85e97..0323b14a4a 100644
--- a/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py
+++ b/api/core/rag/extractor/unstructured/unstructured_doc_extractor.py
@@ -25,7 +25,7 @@ class UnstructuredWordExtractor(BaseExtractor):
from unstructured.file_utils.filetype import FileType, detect_filetype
unstructured_version = tuple(
- [int(x) for x in __unstructured_version__.split(".")]
+ int(x) for x in __unstructured_version__.split(".")
)
# check the file extension
try:
diff --git a/api/core/rag/extractor/word_extractor.py b/api/core/rag/extractor/word_extractor.py
index ac4a56319b..c3f0b75cfb 100644
--- a/api/core/rag/extractor/word_extractor.py
+++ b/api/core/rag/extractor/word_extractor.py
@@ -1,9 +1,12 @@
"""Abstract interface for document loader implementations."""
import datetime
+import logging
import mimetypes
import os
+import re
import tempfile
import uuid
+import xml.etree.ElementTree as ET
from urllib.parse import urlparse
import requests
@@ -16,6 +19,7 @@ from extensions.ext_database import db
from extensions.ext_storage import storage
from models.model import UploadFile
+logger = logging.getLogger(__name__)
class WordExtractor(BaseExtractor):
"""Load docx files.
@@ -117,19 +121,63 @@ class WordExtractor(BaseExtractor):
return image_map
- def _table_to_markdown(self, table):
- markdown = ""
- # deal with table headers
- header_row = table.rows[0]
- headers = [cell.text for cell in header_row.cells]
- markdown += "| " + " | ".join(headers) + " |\n"
- markdown += "| " + " | ".join(["---"] * len(headers)) + " |\n"
- # deal with table rows
- for row in table.rows[1:]:
- row_cells = [cell.text for cell in row.cells]
- markdown += "| " + " | ".join(row_cells) + " |\n"
+ def _table_to_markdown(self, table, image_map):
+ markdown = []
+ # calculate the total number of columns
+ total_cols = max(len(row.cells) for row in table.rows)
- return markdown
+ header_row = table.rows[0]
+ headers = self._parse_row(header_row, image_map, total_cols)
+ markdown.append("| " + " | ".join(headers) + " |")
+ markdown.append("| " + " | ".join(["---"] * total_cols) + " |")
+
+ for row in table.rows[1:]:
+ row_cells = self._parse_row(row, image_map, total_cols)
+ markdown.append("| " + " | ".join(row_cells) + " |")
+ return "\n".join(markdown)
+
+ def _parse_row(self, row, image_map, total_cols):
+ # initialize the row with empty cells by default
+ row_cells = [""] * total_cols
+ col_index = 0
+ for cell in row.cells:
+ # skip positions already filled by a previous cell's span
+ while col_index < total_cols and row_cells[col_index] != "":
+ col_index += 1
+ # if col_index is out of range, stop processing this row
+ if col_index >= total_cols:
+ break
+ cell_content = self._parse_cell(cell, image_map).strip()
+ cell_colspan = cell.grid_span if cell.grid_span else 1
+ for i in range(cell_colspan):
+ if col_index + i < total_cols:
+ row_cells[col_index + i] = cell_content if i == 0 else ""
+ col_index += cell_colspan
+ return row_cells
+
+ def _parse_cell(self, cell, image_map):
+ cell_content = []
+ for paragraph in cell.paragraphs:
+ parsed_paragraph = self._parse_cell_paragraph(paragraph, image_map)
+ if parsed_paragraph:
+ cell_content.append(parsed_paragraph)
+ unique_content = list(dict.fromkeys(cell_content))
+ return " ".join(unique_content)
+
+ def _parse_cell_paragraph(self, paragraph, image_map):
+ paragraph_content = []
+ for run in paragraph.runs:
+ if run.element.xpath('.//a:blip'):
+ for blip in run.element.xpath('.//a:blip'):
+ image_id = blip.get("{http://schemas.openxmlformats.org/officeDocument/2006/relationships}embed")
+ image_part = paragraph.part.rels[image_id].target_part
+
+ if image_part in image_map:
+ image_link = image_map[image_part]
+ paragraph_content.append(image_link)
+ else:
+ paragraph_content.append(run.text)
+ return "".join(paragraph_content).strip()
def _parse_paragraph(self, paragraph, image_map):
paragraph_content = []
@@ -153,10 +201,34 @@ class WordExtractor(BaseExtractor):
image_map = self._extract_images_from_docx(doc, image_folder)
+ hyperlinks_url = None
+ url_pattern = re.compile(r'https?://[^\s]+')
+ for para in doc.paragraphs:
+ for run in para.runs:
+ if run.text and hyperlinks_url:
+ result = f' [{run.text}]({hyperlinks_url}) '
+ run.text = result
+ hyperlinks_url = None
+ if 'HYPERLINK' in run.element.xml:
+ try:
+ xml = ET.XML(run.element.xml)
+ x_child = [c for c in xml.iter() if c is not None]
+ for x in x_child:
+ if x is None:
+ continue
+ if x.tag.endswith('instrText') and x.text:
+ for i in url_pattern.findall(x.text):
+ hyperlinks_url = str(i)
+ except Exception as e:
+ logger.error(e)
+
def parse_paragraph(paragraph):
paragraph_content = []
for run in paragraph.runs:
- if run.element.tag.endswith('r'):
+ if hasattr(run.element, 'tag') and isinstance(run.element.tag, str) and run.element.tag.endswith('r'):
drawing_elements = run.element.findall(
'.//{http://schemas.openxmlformats.org/wordprocessingml/2006/main}drawing')
for drawing in drawing_elements:
@@ -176,13 +248,14 @@ class WordExtractor(BaseExtractor):
paragraphs = doc.paragraphs.copy()
tables = doc.tables.copy()
for element in doc.element.body:
- if element.tag.endswith('p'): # paragraph
- para = paragraphs.pop(0)
- parsed_paragraph = parse_paragraph(para)
- if parsed_paragraph:
- content.append(parsed_paragraph)
- elif element.tag.endswith('tbl'): # table
- table = tables.pop(0)
- content.append(self._table_to_markdown(table))
+ if hasattr(element, 'tag'):
+ if isinstance(element.tag, str) and element.tag.endswith('p'): # paragraph
+ para = paragraphs.pop(0)
+ parsed_paragraph = parse_paragraph(para)
+ if parsed_paragraph:
+ content.append(parsed_paragraph)
+ elif isinstance(element.tag, str) and element.tag.endswith('tbl'): # table
+ table = tables.pop(0)
+ content.append(self._table_to_markdown(table, image_map))
return '\n'.join(content)
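The colspan handling in `_parse_row` above can be sketched standalone; plain `(text, colspan)` tuples stand in for the python-docx cell objects (an illustrative simplification, not the module's API):

```python
def parse_row(cells, total_cols):
    """Place (text, colspan) cells into a fixed-width row, mirroring _parse_row."""
    row_cells = [""] * total_cols
    col_index = 0
    for text, colspan in cells:
        # skip positions already filled by a previous cell
        while col_index < total_cols and row_cells[col_index] != "":
            col_index += 1
        if col_index >= total_cols:
            break
        # the first spanned column holds the content, the rest stay empty
        for i in range(colspan):
            if col_index + i < total_cols:
                row_cells[col_index + i] = text if i == 0 else ""
        col_index += colspan
    return row_cells

def row_to_markdown(cells, total_cols):
    return "| " + " | ".join(parse_row(cells, total_cols)) + " |"
```

A header cell spanning two columns thus renders as one filled column followed by an empty one, keeping the markdown table rectangular.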
diff --git a/api/core/rag/models/document.py b/api/core/rag/models/document.py
index 7bb675b149..6f3c1c5d34 100644
--- a/api/core/rag/models/document.py
+++ b/api/core/rag/models/document.py
@@ -10,6 +10,8 @@ class Document(BaseModel):
page_content: str
+ vector: Optional[list[float]] = None
+
"""Arbitrary metadata about the page content (e.g., source, relationships to other
documents, etc.).
"""
diff --git a/api/core/rag/rerank/entity/weight.py b/api/core/rag/rerank/entity/weight.py
index 36afc89a21..6dbbad2f8d 100644
--- a/api/core/rag/rerank/entity/weight.py
+++ b/api/core/rag/rerank/entity/weight.py
@@ -16,8 +16,6 @@ class KeywordSetting(BaseModel):
class Weights(BaseModel):
"""Model for weighted rerank."""
- weight_type: str
-
vector_setting: VectorSetting
keyword_setting: KeywordSetting
diff --git a/api/core/rag/rerank/weight_rerank.py b/api/core/rag/rerank/weight_rerank.py
index d07f94adb7..d8a7873982 100644
--- a/api/core/rag/rerank/weight_rerank.py
+++ b/api/core/rag/rerank/weight_rerank.py
@@ -159,10 +159,9 @@ class WeightRerankRunner:
if 'score' in document.metadata:
query_vector_scores.append(document.metadata['score'])
else:
- content_vector = document.metadata['vector']
# transform to NumPy
vec1 = np.array(query_vector)
- vec2 = np.array(document.metadata['vector'])
+ vec2 = np.array(document.vector)
# calculate dot product
dot_product = np.dot(vec1, vec2)
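The dot product computed here is the numerator of cosine similarity between the query vector and the document's stored vector; a minimal pure-Python sketch of the full computation (function name is illustrative, not the module's API):

```python
import math

def cosine_similarity(vec1, vec2):
    # dot product of the two vectors
    dot_product = sum(a * b for a, b in zip(vec1, vec2))
    # product of the two Euclidean norms
    norms = math.sqrt(sum(a * a for a in vec1)) * math.sqrt(sum(b * b for b in vec2))
    # guard against zero vectors
    return dot_product / norms if norms else 0.0
```

Identical directions score 1.0, orthogonal vectors 0.0; the runner weights this against the keyword score.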
diff --git a/api/core/rag/retrieval/dataset_retrieval.py b/api/core/rag/retrieval/dataset_retrieval.py
index d51ea2942a..e945364796 100644
--- a/api/core/rag/retrieval/dataset_retrieval.py
+++ b/api/core/rag/retrieval/dataset_retrieval.py
@@ -14,7 +14,8 @@ from core.model_manager import ModelInstance, ModelManager
from core.model_runtime.entities.message_entities import PromptMessageTool
from core.model_runtime.entities.model_entities import ModelFeature, ModelType
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.ops.utils import measure_time
from core.rag.data_post_processor.data_post_processor import DataPostProcessor
from core.rag.datasource.keyword.jieba.jieba_keyword_table_handler import JiebaKeywordTableHandler
@@ -138,6 +139,7 @@ class DatasetRetrieval:
retrieve_config.rerank_mode,
retrieve_config.reranking_model,
retrieve_config.weights,
+ retrieve_config.reranking_enabled,
message_id,
)
@@ -277,6 +279,7 @@ class DatasetRetrieval:
query=query,
top_k=top_k, score_threshold=score_threshold,
reranking_model=reranking_model,
+ reranking_mode=retrieval_model_config.get('reranking_mode', 'reranking_model'),
weights=retrieval_model_config.get('weights', None),
)
self._on_query(query, [dataset_id], app_id, user_from, user_id)
@@ -321,23 +324,26 @@ class DatasetRetrieval:
for thread in threads:
thread.join()
- if reranking_enable:
- # do rerank for searched documents
- data_post_processor = DataPostProcessor(tenant_id, reranking_mode,
- reranking_model, weights, False)
+ with measure_time() as timer:
+ if reranking_enable:
+ # do rerank for searched documents
+ data_post_processor = DataPostProcessor(
+ tenant_id, reranking_mode,
+ reranking_model, weights, False
+ )
- with measure_time() as timer:
all_documents = data_post_processor.invoke(
query=query,
documents=all_documents,
score_threshold=score_threshold,
top_n=top_k
)
- else:
- if index_type == "economy":
- all_documents = self.calculate_keyword_score(query, all_documents, top_k)
- elif index_type == "high_quality":
- all_documents = self.calculate_vector_score(all_documents, top_k, score_threshold)
+ else:
+ if index_type == "economy":
+ all_documents = self.calculate_keyword_score(query, all_documents, top_k)
+ elif index_type == "high_quality":
+ all_documents = self.calculate_vector_score(all_documents, top_k, score_threshold)
+
self._on_query(query, dataset_ids, app_id, user_from, user_id)
if all_documents:
@@ -427,10 +433,12 @@ class DatasetRetrieval:
dataset_id=dataset.id,
query=query,
top_k=top_k,
- score_threshold=retrieval_model['score_threshold']
+ score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
- reranking_model=retrieval_model['reranking_model']
+ reranking_model=retrieval_model.get('reranking_model', None)
if retrieval_model['reranking_enable'] else None,
+ reranking_mode=retrieval_model.get('reranking_mode')
+ if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)
@@ -606,7 +614,7 @@ class DatasetRetrieval:
top_k: int, score_threshold: float) -> list[Document]:
filter_documents = []
for document in all_documents:
- if document.metadata['score'] >= score_threshold:
+ if score_threshold and document.metadata['score'] >= score_threshold:
filter_documents.append(document)
if not filter_documents:
return []
diff --git a/api/core/rag/splitter/fixed_text_splitter.py b/api/core/rag/splitter/fixed_text_splitter.py
index fd714edf5e..6a0804f890 100644
--- a/api/core/rag/splitter/fixed_text_splitter.py
+++ b/api/core/rag/splitter/fixed_text_splitter.py
@@ -63,7 +63,7 @@ class FixedRecursiveCharacterTextSplitter(EnhanceRecursiveCharacterTextSplitter)
if self._fixed_separator:
chunks = text.split(self._fixed_separator)
else:
- chunks = list(text)
+ chunks = [text]
final_chunks = []
for chunk in chunks:
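The one-line splitter fix above is subtle: `list(text)` explodes a string into single-character chunks, while `[text]` keeps the whole text as one chunk for the downstream length-based split. A quick illustration:

```python
text = "no separator here"

# old behavior: one "chunk" per character
old_chunks = list(text)

# fixed behavior: a single chunk, split further downstream by length
new_chunks = [text]
```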
diff --git a/api/core/tools/provider/_position.yaml b/api/core/tools/provider/_position.yaml
index 3a3ff64426..25d9f403a0 100644
--- a/api/core/tools/provider/_position.yaml
+++ b/api/core/tools/provider/_position.yaml
@@ -2,6 +2,7 @@
- bing
- duckduckgo
- searchapi
+- serper
- searxng
- dalle
- azuredalle
diff --git a/api/core/tools/provider/builtin/aws/_assets/icon.svg b/api/core/tools/provider/builtin/aws/_assets/icon.svg
new file mode 100644
index 0000000000..ecfcfc08d4
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/_assets/icon.svg
@@ -0,0 +1,9 @@
+
+
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/aws/aws.py b/api/core/tools/provider/builtin/aws/aws.py
new file mode 100644
index 0000000000..13ede96015
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/aws.py
@@ -0,0 +1,25 @@
+from core.tools.errors import ToolProviderCredentialValidationError
+from core.tools.provider.builtin.aws.tools.sagemaker_text_rerank import SageMakerReRankTool
+from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
+
+
+class SageMakerProvider(BuiltinToolProviderController):
+ def _validate_credentials(self, credentials: dict) -> None:
+ try:
+ SageMakerReRankTool().fork_tool_runtime(
+ runtime={
+ "credentials": credentials,
+ }
+ ).invoke(
+ user_id='',
+ tool_parameters={
+ "sagemaker_endpoint" : "",
+ "query": "misaka mikoto",
+ "candidate_texts" : "hello$$$hello world",
+ "topk" : 5,
+ "aws_region" : ""
+ },
+ )
+ except Exception as e:
+ raise ToolProviderCredentialValidationError(str(e))
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/aws/aws.yaml b/api/core/tools/provider/builtin/aws/aws.yaml
new file mode 100644
index 0000000000..847c6824a5
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/aws.yaml
@@ -0,0 +1,15 @@
+identity:
+ author: AWS
+ name: aws
+ label:
+ en_US: AWS
+ zh_Hans: 亚马逊云科技
+ pt_BR: AWS
+ description:
+ en_US: Services on AWS.
+ zh_Hans: 亚马逊云科技的各类服务
+ pt_BR: Services on AWS.
+ icon: icon.svg
+ tags:
+ - search
+credentials_for_provider:
diff --git a/api/core/tools/provider/builtin/aws/tools/apply_guardrail.py b/api/core/tools/provider/builtin/aws/tools/apply_guardrail.py
new file mode 100644
index 0000000000..9c006733bd
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/tools/apply_guardrail.py
@@ -0,0 +1,83 @@
+import json
+import logging
+from typing import Any, Union
+
+import boto3
+from pydantic import BaseModel, Field
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.tool.builtin_tool import BuiltinTool
+
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+class GuardrailParameters(BaseModel):
+ guardrail_id: str = Field(..., description="The identifier of the guardrail")
+ guardrail_version: str = Field(..., description="The version of the guardrail")
+ source: str = Field(..., description="The source of the content")
+ text: str = Field(..., description="The text to apply the guardrail to")
+ aws_region: str = Field(default="us-east-1", description="AWS region for the Bedrock client")
+
+class ApplyGuardrailTool(BuiltinTool):
+ def _invoke(self,
+ user_id: str,
+ tool_parameters: dict[str, Any]
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+ """
+ Invoke the ApplyGuardrail tool
+ """
+ try:
+ # Validate and parse input parameters
+ params = GuardrailParameters(**tool_parameters)
+
+ # Initialize AWS client
+ bedrock_client = boto3.client('bedrock-runtime', region_name=params.aws_region)
+
+ # Apply guardrail
+ response = bedrock_client.apply_guardrail(
+ guardrailIdentifier=params.guardrail_id,
+ guardrailVersion=params.guardrail_version,
+ source=params.source,
+ content=[{"text": {"text": params.text}}]
+ )
+
+ # Check for empty response
+ if not response:
+ return self.create_text_message(text="Received empty response from AWS Bedrock.")
+
+ # Process the result
+ action = response.get("action", "No action specified")
+ outputs = response.get("outputs", [])
+ output = outputs[0].get("text", "No output received") if outputs else "No output received"
+ assessments = response.get("assessments", [])
+
+ # Format assessments
+ formatted_assessments = []
+ for assessment in assessments:
+ for policy_type, policy_data in assessment.items():
+ if isinstance(policy_data, dict) and 'topics' in policy_data:
+ for topic in policy_data['topics']:
+ formatted_assessments.append(f"Policy: {policy_type}, Topic: {topic['name']}, Type: {topic['type']}, Action: {topic['action']}")
+ else:
+ formatted_assessments.append(f"Policy: {policy_type}, Data: {policy_data}")
+
+ result = f"Action: {action}\n "
+ result += f"Output: {output}\n "
+ if formatted_assessments:
+ result += "Assessments:\n " + "\n ".join(formatted_assessments) + "\n "
+# result += f"Full response: {json.dumps(response, indent=2, ensure_ascii=False)}"
+
+ return self.create_text_message(text=result)
+
+ except boto3.exceptions.BotoCoreError as e:
+ error_message = f'AWS service error: {str(e)}'
+ logger.error(error_message, exc_info=True)
+ return self.create_text_message(text=error_message)
+ except json.JSONDecodeError as e:
+ error_message = f'JSON parsing error: {str(e)}'
+ logger.error(error_message, exc_info=True)
+ return self.create_text_message(text=error_message)
+ except Exception as e:
+ error_message = f'An unexpected error occurred: {str(e)}'
+ logger.error(error_message, exc_info=True)
+ return self.create_text_message(text=error_message)
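The assessment-formatting loop in `_invoke` can be exercised in isolation; the sample response shape below is an assumption modeled on the Bedrock ApplyGuardrail response, not captured output:

```python
def format_assessments(assessments):
    """Flatten guardrail assessment dicts into readable lines, mirroring _invoke."""
    formatted = []
    for assessment in assessments:
        for policy_type, policy_data in assessment.items():
            if isinstance(policy_data, dict) and 'topics' in policy_data:
                for topic in policy_data['topics']:
                    formatted.append(
                        f"Policy: {policy_type}, Topic: {topic['name']}, "
                        f"Type: {topic['type']}, Action: {topic['action']}"
                    )
            else:
                formatted.append(f"Policy: {policy_type}, Data: {policy_data}")
    return formatted
```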
diff --git a/api/core/tools/provider/builtin/aws/tools/apply_guardrail.yaml b/api/core/tools/provider/builtin/aws/tools/apply_guardrail.yaml
new file mode 100644
index 0000000000..2b7c8abb44
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/tools/apply_guardrail.yaml
@@ -0,0 +1,56 @@
+identity:
+ name: apply_guardrail
+ author: AWS
+ label:
+ en_US: Content Moderation Guardrails
+ zh_Hans: 内容审查护栏
+description:
+ human:
+ en_US: Content Moderation Guardrails utilizes the ApplyGuardrail API, a feature of Guardrails for Amazon Bedrock. This API is capable of evaluating input prompts and model responses for all Foundation Models (FMs), including those on Amazon Bedrock, custom FMs, and third-party FMs. By implementing this functionality, organizations can achieve centralized governance across all their generative AI applications, thereby enhancing control and consistency in content moderation.
+ zh_Hans: 内容审查护栏采用 Guardrails for Amazon Bedrock 功能中的 ApplyGuardrail API 。ApplyGuardrail 可以评估所有基础模型(FMs)的输入提示和模型响应,包括 Amazon Bedrock 上的 FMs、自定义 FMs 和第三方 FMs。通过实施这一功能, 组织可以在所有生成式 AI 应用程序中实现集中化的治理,从而增强内容审核的控制力和一致性。
+ llm: Content Moderation Guardrails utilizes the ApplyGuardrail API, a feature of Guardrails for Amazon Bedrock. This API is capable of evaluating input prompts and model responses for all Foundation Models (FMs), including those on Amazon Bedrock, custom FMs, and third-party FMs. By implementing this functionality, organizations can achieve centralized governance across all their generative AI applications, thereby enhancing control and consistency in content moderation.
+parameters:
+ - name: guardrail_id
+ type: string
+ required: true
+ label:
+ en_US: Guardrail ID
+ zh_Hans: Guardrail ID
+ human_description:
+ en_US: Please enter the ID of the Guardrail that has already been created on Amazon Bedrock, for example 'qk5nk0e4b77b'.
+ zh_Hans: 请输入已经在 Amazon Bedrock 上创建好的 Guardrail ID, 例如 'qk5nk0e4b77b'.
+ llm_description: Please enter the ID of the Guardrail that has already been created on Amazon Bedrock, for example 'qk5nk0e4b77b'.
+ form: form
+ - name: guardrail_version
+ type: string
+ required: true
+ label:
+ en_US: Guardrail Version Number
+ zh_Hans: Guardrail 版本号码
+ human_description:
+ en_US: Please enter the published version of the Guardrail ID that has already been created on Amazon Bedrock. This is typically a version number, such as 2.
+ zh_Hans: 请输入已经在Amazon Bedrock 上创建好的Guardrail ID发布的版本, 通常使用版本号, 例如2.
+ llm_description: Please enter the published version of the Guardrail ID that has already been created on Amazon Bedrock. This is typically a version number, such as 2.
+ form: form
+ - name: source
+ type: string
+ required: true
+ label:
+ en_US: Content Source (INPUT or OUTPUT)
+ zh_Hans: 内容来源 (INPUT or OUTPUT)
+ human_description:
+ en_US: The source of data used in the request to apply the guardrail. Valid Values "INPUT | OUTPUT"
+ zh_Hans: 用于应用护栏的请求中所使用的数据来源。有效值为 "INPUT | OUTPUT"
+ llm_description: The source of data used in the request to apply the guardrail. Valid Values "INPUT | OUTPUT"
+ form: form
+ - name: text
+ type: string
+ required: true
+ label:
+ en_US: Content to be reviewed
+ zh_Hans: 待审查内容
+ human_description:
+ en_US: The content used for requesting guardrail review, which can be either user input or LLM output.
+ zh_Hans: 用于请求护栏审查的内容,可以是用户输入或 LLM 输出。
+ llm_description: The content used for requesting guardrail review, which can be either user input or LLM output.
+ form: llm
diff --git a/api/core/tools/provider/builtin/aws/tools/lambda_translate_utils.py b/api/core/tools/provider/builtin/aws/tools/lambda_translate_utils.py
new file mode 100644
index 0000000000..005ba3deb5
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/tools/lambda_translate_utils.py
@@ -0,0 +1,88 @@
+import json
+from typing import Any, Union
+
+import boto3
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.tool.builtin_tool import BuiltinTool
+
+
+class LambdaTranslateUtilsTool(BuiltinTool):
+ lambda_client: Any = None
+
+ def _invoke_lambda(self, text_content, src_lang, dest_lang, model_id, dictionary_name, request_type, lambda_name):
+ msg = {
+ "src_content":text_content,
+ "src_lang": src_lang,
+ "dest_lang":dest_lang,
+ "dictionary_id": dictionary_name,
+ "request_type" : request_type,
+ "model_id" : model_id
+ }
+
+ invoke_response = self.lambda_client.invoke(FunctionName=lambda_name,
+ InvocationType='RequestResponse',
+ Payload=json.dumps(msg))
+ response_body = invoke_response['Payload']
+
+ response_str = response_body.read().decode("unicode_escape")
+
+ return response_str
+
+ def _invoke(self,
+ user_id: str,
+ tool_parameters: dict[str, Any],
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+ """
+ invoke tools
+ """
+ line = 0
+ try:
+ if not self.lambda_client:
+ aws_region = tool_parameters.get('aws_region')
+ if aws_region:
+ self.lambda_client = boto3.client("lambda", region_name=aws_region)
+ else:
+ self.lambda_client = boto3.client("lambda")
+
+ line = 1
+ text_content = tool_parameters.get('text_content', '')
+ if not text_content:
+ return self.create_text_message('Please input text_content')
+
+ line = 2
+ src_lang = tool_parameters.get('src_lang', '')
+ if not src_lang:
+ return self.create_text_message('Please input src_lang')
+
+ line = 3
+ dest_lang = tool_parameters.get('dest_lang', '')
+ if not dest_lang:
+ return self.create_text_message('Please input dest_lang')
+
+ line = 4
+ lambda_name = tool_parameters.get('lambda_name', '')
+ if not lambda_name:
+ return self.create_text_message('Please input lambda_name')
+
+ line = 5
+ request_type = tool_parameters.get('request_type', '')
+ if not request_type:
+ return self.create_text_message('Please input request_type')
+
+ line = 6
+ model_id = tool_parameters.get('model_id', '')
+ if not model_id:
+ return self.create_text_message('Please input model_id')
+
+ line = 7
+ dictionary_name = tool_parameters.get('dictionary_name', '')
+ if not dictionary_name:
+ return self.create_text_message('Please input dictionary_name')
+
+ result = self._invoke_lambda(text_content, src_lang, dest_lang, model_id, dictionary_name, request_type, lambda_name)
+
+ return self.create_text_message(text=result)
+
+ except Exception as e:
+ return self.create_text_message(f'Exception {str(e)}, line : {line}')
diff --git a/api/core/tools/provider/builtin/aws/tools/lambda_translate_utils.yaml b/api/core/tools/provider/builtin/aws/tools/lambda_translate_utils.yaml
new file mode 100644
index 0000000000..a35c9f49fb
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/tools/lambda_translate_utils.yaml
@@ -0,0 +1,134 @@
+identity:
+ name: lambda_translate_utils
+ author: AWS
+ label:
+ en_US: TranslateTool
+ zh_Hans: 翻译工具
+ pt_BR: TranslateTool
+ icon: icon.svg
+description:
+ human:
+ en_US: A utility tool for LLM translation; extra deployment is needed on AWS. Please refer to Github Repo - https://github.com/ybalbert001/dynamodb-rag
+ zh_Hans: 大语言模型翻译工具(专词映射获取),需要在AWS上进行额外部署,可参考Github Repo - https://github.com/ybalbert001/dynamodb-rag
+ pt_BR: A utility tool for LLM translation; a specific Lambda Function deployment is needed on AWS. Please refer to Github Repo - https://github.com/ybalbert001/dynamodb-rag
+ llm: A utility tool for translation.
+parameters:
+ - name: text_content
+ type: string
+ required: true
+ label:
+ en_US: source content for translation
+ zh_Hans: 待翻译原文
+ pt_BR: source content for translation
+ human_description:
+ en_US: source content for translation
+ zh_Hans: 待翻译原文
+ pt_BR: source content for translation
+ llm_description: source content for translation
+ form: llm
+ - name: src_lang
+ type: string
+ required: true
+ label:
+ en_US: source language code
+ zh_Hans: 原文语言代号
+ pt_BR: source language code
+ human_description:
+ en_US: source language code
+ zh_Hans: 原文语言代号
+ pt_BR: source language code
+ llm_description: source language code
+ form: llm
+ - name: dest_lang
+ type: string
+ required: true
+ label:
+ en_US: target language code
+ zh_Hans: 目标语言代号
+ pt_BR: target language code
+ human_description:
+ en_US: target language code
+ zh_Hans: 目标语言代号
+ pt_BR: target language code
+ llm_description: target language code
+ form: llm
+ - name: aws_region
+ type: string
+ required: false
+ label:
+ en_US: region of Lambda
+ zh_Hans: Lambda 所在的region
+ pt_BR: region of Lambda
+ human_description:
+ en_US: region of Lambda
+ zh_Hans: Lambda 所在的region
+ pt_BR: region of Lambda
+ llm_description: region of Lambda
+ form: form
+ - name: model_id
+ type: string
+ required: false
+ default: anthropic.claude-3-sonnet-20240229-v1:0
+ label:
+ en_US: LLM model_id in bedrock
+ zh_Hans: bedrock上的大语言模型model_id
+ pt_BR: LLM model_id in bedrock
+ human_description:
+ en_US: LLM model_id in bedrock
+ zh_Hans: bedrock上的大语言模型model_id
+ pt_BR: LLM model_id in bedrock
+ llm_description: LLM model_id in bedrock
+ form: form
+ - name: dictionary_name
+ type: string
+ required: false
+ label:
+ en_US: dictionary name for term mapping
+ zh_Hans: 专词映射表名称
+ pt_BR: dictionary name for term mapping
+ human_description:
+ en_US: dictionary name for term mapping
+ zh_Hans: 专词映射表名称
+ pt_BR: dictionary name for term mapping
+ llm_description: dictionary name for term mapping
+ form: form
+ - name: request_type
+ type: select
+ required: false
+ label:
+ en_US: request type
+ zh_Hans: 请求类型
+ pt_BR: request type
+ human_description:
+ en_US: request type
+ zh_Hans: 请求类型
+ pt_BR: request type
+ default: term_mapping
+ options:
+ - value: term_mapping
+ label:
+ en_US: term_mapping
+ zh_Hans: 专词映射
+ - value: segment_only
+ label:
+ en_US: segment_only
+ zh_Hans: 仅切词
+ - value: translate
+ label:
+ en_US: translate
+ zh_Hans: 翻译内容
+ form: form
+ - name: lambda_name
+ type: string
+ default: "translate_tool"
+ required: true
+ label:
+ en_US: AWS Lambda for term mapping retrieval
+ zh_Hans: 专词召回映射 - AWS Lambda
+ pt_BR: lambda name for term mapping retrieval
+ human_description:
+ en_US: AWS Lambda for term mapping retrieval
+ zh_Hans: 专词召回映射 - AWS Lambda
+ pt_BR: AWS Lambda for term mapping retrieval
+ llm_description: AWS Lambda for term mapping retrieval
+ form: form
diff --git a/api/core/tools/provider/builtin/aws/tools/sagemaker_text_rerank.py b/api/core/tools/provider/builtin/aws/tools/sagemaker_text_rerank.py
new file mode 100644
index 0000000000..d4bc446e5b
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/tools/sagemaker_text_rerank.py
@@ -0,0 +1,86 @@
+import json
+from typing import Any, Union
+
+import boto3
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.tool.builtin_tool import BuiltinTool
+
+
+class SageMakerReRankTool(BuiltinTool):
+ sagemaker_client: Any = None
+ sagemaker_endpoint: str = None
+ topk: int = None
+
+ def _sagemaker_rerank(self, query_input: str, docs: list[str], rerank_endpoint:str):
+ inputs = [query_input]*len(docs)
+ response_model = self.sagemaker_client.invoke_endpoint(
+ EndpointName=rerank_endpoint,
+ Body=json.dumps(
+ {
+ "inputs": inputs,
+ "docs": docs
+ }
+ ),
+ ContentType="application/json",
+ )
+ json_str = response_model['Body'].read().decode('utf8')
+ json_obj = json.loads(json_str)
+ scores = json_obj['scores']
+ return scores if isinstance(scores, list) else [scores]
+
+ def _invoke(self,
+ user_id: str,
+ tool_parameters: dict[str, Any],
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+ """
+ invoke tools
+ """
+ line = 0
+ try:
+ if not self.sagemaker_client:
+ aws_region = tool_parameters.get('aws_region')
+ if aws_region:
+ self.sagemaker_client = boto3.client("sagemaker-runtime", region_name=aws_region)
+ else:
+ self.sagemaker_client = boto3.client("sagemaker-runtime")
+
+ line = 1
+ if not self.sagemaker_endpoint:
+ self.sagemaker_endpoint = tool_parameters.get('sagemaker_endpoint')
+
+ line = 2
+ if not self.topk:
+ self.topk = tool_parameters.get('topk', 5)
+
+ line = 3
+ query = tool_parameters.get('query', '')
+ if not query:
+ return self.create_text_message('Please input query')
+
+ line = 4
+ candidate_texts = tool_parameters.get('candidate_texts')
+ if not candidate_texts:
+ return self.create_text_message('Please input candidate_texts')
+
+ line = 5
+ candidate_docs = json.loads(candidate_texts)
+ docs = [ item.get('content') for item in candidate_docs ]
+
+ line = 6
+ scores = self._sagemaker_rerank(query_input=query, docs=docs, rerank_endpoint=self.sagemaker_endpoint)
+
+ line = 7
+ for idx in range(len(candidate_docs)):
+ candidate_docs[idx]["score"] = scores[idx]
+
+ line = 8
+ sorted_candidate_docs = sorted(candidate_docs, key=lambda x: x['score'], reverse=True)
+
+ line = 9
+ results_str = json.dumps(sorted_candidate_docs[:self.topk], ensure_ascii=False)
+ return self.create_text_message(text=results_str)
+
+ except Exception as e:
+ return self.create_text_message(f'Exception {str(e)}, line : {line}')
+
\ No newline at end of file
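After the endpoint returns the scores, the tool's ranking logic reduces to attaching each score to its candidate and sorting; a standalone sketch with hypothetical scores:

```python
import json

def rank_candidates(candidate_docs, scores, topk):
    # attach each score to its candidate document
    for doc, score in zip(candidate_docs, scores):
        doc["score"] = score
    # sort by score descending and keep the top-k results
    ranked = sorted(candidate_docs, key=lambda d: d["score"], reverse=True)
    return json.dumps(ranked[:topk], ensure_ascii=False)
```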
diff --git a/api/core/tools/provider/builtin/aws/tools/sagemaker_text_rerank.yaml b/api/core/tools/provider/builtin/aws/tools/sagemaker_text_rerank.yaml
new file mode 100644
index 0000000000..d1dfdb9f84
--- /dev/null
+++ b/api/core/tools/provider/builtin/aws/tools/sagemaker_text_rerank.yaml
@@ -0,0 +1,82 @@
+identity:
+ name: sagemaker_text_rerank
+ author: AWS
+ label:
+ en_US: SagemakerRerank
+ zh_Hans: Sagemaker重排序
+ pt_BR: SagemakerRerank
+ icon: icon.svg
+description:
+ human:
+    en_US: A tool for performing text similarity ranking. You can find the deployment notebook in the GitHub repo - https://github.com/aws-samples/dify-aws-tool
+ zh_Hans: Sagemaker重排序工具, 请参考 Github Repo - https://github.com/aws-samples/dify-aws-tool上的部署脚本
+ pt_BR: A tool for performing text similarity ranking.
+  llm: A tool for performing text similarity ranking. You can find the deployment notebook in the GitHub repo - https://github.com/aws-samples/dify-aws-tool
+parameters:
+ - name: sagemaker_endpoint
+ type: string
+ required: true
+ label:
+ en_US: sagemaker endpoint for reranking
+ zh_Hans: 重排序的SageMaker 端点
+ pt_BR: sagemaker endpoint for reranking
+ human_description:
+ en_US: sagemaker endpoint for reranking
+ zh_Hans: 重排序的SageMaker 端点
+ pt_BR: sagemaker endpoint for reranking
+ llm_description: sagemaker endpoint for reranking
+ form: form
+ - name: query
+ type: string
+ required: true
+ label:
+ en_US: Query string
+ zh_Hans: 查询语句
+ pt_BR: Query string
+ human_description:
+ en_US: key words for searching
+ zh_Hans: 查询关键词
+ pt_BR: key words for searching
+ llm_description: key words for searching
+ form: llm
+ - name: candidate_texts
+ type: string
+ required: true
+ label:
+ en_US: text candidates
+ zh_Hans: 候选文本
+ pt_BR: text candidates
+ human_description:
+ en_US: searched candidates by query
+ zh_Hans: 查询文本搜到候选文本
+ pt_BR: searched candidates by query
+ llm_description: searched candidates by query
+ form: llm
+ - name: topk
+ type: number
+ required: false
+ form: form
+ label:
+ en_US: Limit for results count
+ zh_Hans: 返回个数限制
+ pt_BR: Limit for results count
+ human_description:
+ en_US: Limit for results count
+ zh_Hans: 返回个数限制
+ pt_BR: Limit for results count
+ min: 1
+ max: 10
+ default: 5
+ - name: aws_region
+ type: string
+ required: false
+ label:
+ en_US: region of sagemaker endpoint
+ zh_Hans: SageMaker 端点所在的region
+ pt_BR: region of sagemaker endpoint
+ human_description:
+ en_US: region of sagemaker endpoint
+ zh_Hans: SageMaker 端点所在的region
+ pt_BR: region of sagemaker endpoint
+ llm_description: region of sagemaker endpoint
+ form: form
diff --git a/api/core/tools/provider/builtin/did/_assets/icon.svg b/api/core/tools/provider/builtin/did/_assets/icon.svg
new file mode 100644
index 0000000000..c477d7cb71
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/_assets/icon.svg
@@ -0,0 +1,14 @@
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/did/did.py b/api/core/tools/provider/builtin/did/did.py
new file mode 100644
index 0000000000..b4bf172131
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/did.py
@@ -0,0 +1,21 @@
+from core.tools.errors import ToolProviderCredentialValidationError
+from core.tools.provider.builtin.did.tools.talks import TalksTool
+from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
+
+
+class DIDProvider(BuiltinToolProviderController):
+ def _validate_credentials(self, credentials: dict) -> None:
+ try:
+ # Example validation using the D-ID talks tool
+ TalksTool().fork_tool_runtime(
+ runtime={"credentials": credentials}
+ ).invoke(
+ user_id='',
+ tool_parameters={
+ "source_url": "https://www.d-id.com/wp-content/uploads/2023/11/Hero-image-1.png",
+ "text_input": "Hello, welcome to use D-ID tool in Dify",
+ }
+ )
+ except Exception as e:
+ raise ToolProviderCredentialValidationError(str(e))
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/did/did.yaml b/api/core/tools/provider/builtin/did/did.yaml
new file mode 100644
index 0000000000..a70b71812e
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/did.yaml
@@ -0,0 +1,28 @@
+identity:
+ author: Matri Qi
+ name: did
+ label:
+ en_US: D-ID
+ description:
+ en_US: D-ID is a tool enabling the creation of high-quality, custom videos of Digital Humans from a single image.
+ icon: icon.svg
+ tags:
+ - videos
+credentials_for_provider:
+ did_api_key:
+ type: secret-input
+ required: true
+ label:
+ en_US: D-ID API Key
+ placeholder:
+ en_US: Please input your D-ID API key
+ help:
+ en_US: Get your D-ID API key from your D-ID account settings.
+ url: https://studio.d-id.com/account-settings
+ base_url:
+ type: text-input
+ required: false
+ label:
+ en_US: D-ID server's Base URL
+ placeholder:
+ en_US: https://api.d-id.com
diff --git a/api/core/tools/provider/builtin/did/did_appx.py b/api/core/tools/provider/builtin/did/did_appx.py
new file mode 100644
index 0000000000..964e82b729
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/did_appx.py
@@ -0,0 +1,87 @@
+import logging
+import time
+from collections.abc import Mapping
+from typing import Any
+
+import requests
+from requests.exceptions import HTTPError
+
+logger = logging.getLogger(__name__)
+
+
+class DIDApp:
+ def __init__(self, api_key: str | None = None, base_url: str | None = None):
+ self.api_key = api_key
+ self.base_url = base_url or 'https://api.d-id.com'
+ if not self.api_key:
+ raise ValueError('API key is required')
+
+ def _prepare_headers(self, idempotency_key: str | None = None):
+ headers = {'Content-Type': 'application/json', 'Authorization': f'Basic {self.api_key}'}
+ if idempotency_key:
+ headers['Idempotency-Key'] = idempotency_key
+ return headers
+
+ def _request(
+ self,
+ method: str,
+ url: str,
+ data: Mapping[str, Any] | None = None,
+ headers: Mapping[str, str] | None = None,
+ retries: int = 3,
+ backoff_factor: float = 0.3,
+ ) -> Mapping[str, Any] | None:
+ for i in range(retries):
+ try:
+ response = requests.request(method, url, json=data, headers=headers)
+ response.raise_for_status()
+ return response.json()
+ except requests.exceptions.RequestException as e:
+ if i < retries - 1 and isinstance(e, HTTPError) and e.response.status_code >= 500:
+ time.sleep(backoff_factor * (2**i))
+ else:
+ raise
+ return None
+
+ def talks(self, wait: bool = True, poll_interval: int = 5, idempotency_key: str | None = None, **kwargs):
+ endpoint = f'{self.base_url}/talks'
+ headers = self._prepare_headers(idempotency_key)
+ data = kwargs['params']
+ logger.debug(f'Send request to {endpoint=} body={data}')
+ response = self._request('POST', endpoint, data, headers)
+ if response is None:
+ raise HTTPError('Failed to initiate D-ID talks after multiple retries')
+ id: str = response['id']
+ if wait:
+ return self._monitor_job_status(id=id, target='talks', poll_interval=poll_interval)
+ return id
+
+ def animations(self, wait: bool = True, poll_interval: int = 5, idempotency_key: str | None = None, **kwargs):
+ endpoint = f'{self.base_url}/animations'
+ headers = self._prepare_headers(idempotency_key)
+ data = kwargs['params']
+ logger.debug(f'Send request to {endpoint=} body={data}')
+ response = self._request('POST', endpoint, data, headers)
+ if response is None:
+            raise HTTPError('Failed to initiate D-ID animations after multiple retries')
+ id: str = response['id']
+ if wait:
+ return self._monitor_job_status(target='animations', id=id, poll_interval=poll_interval)
+ return id
+
+ def check_did_status(self, target: str, id: str):
+ endpoint = f'{self.base_url}/{target}/{id}'
+ headers = self._prepare_headers()
+ response = self._request('GET', endpoint, headers=headers)
+ if response is None:
+            raise HTTPError(f'Failed to check status for {target} {id} after multiple retries')
+ return response
+
+ def _monitor_job_status(self, target: str, id: str, poll_interval: int):
+ while True:
+ status = self.check_did_status(target=target, id=id)
+ if status['status'] == 'done':
+ return status
+ elif status['status'] == 'error' or status['status'] == 'rejected':
+                raise HTTPError(f'{target} {id} failed: {status["status"]} {status.get("error", {}).get("description")}')
+ time.sleep(poll_interval)
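`_request` above retries server-side (5xx) failures with exponential backoff; only the failed attempts before the last one sleep. The delay schedule it produces can be sketched as (`backoff_delays` is an illustrative helper, not part of the patch):

```python
def backoff_delays(retries: int = 3, backoff_factor: float = 0.3) -> list[float]:
    """Delays slept between attempts, matching backoff_factor * (2 ** i)."""
    # Only the first retries - 1 failures sleep; the final attempt re-raises.
    return [backoff_factor * (2 ** i) for i in range(retries - 1)]

print(backoff_delays())  # [0.3, 0.6]
```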
diff --git a/api/core/tools/provider/builtin/did/tools/animations.py b/api/core/tools/provider/builtin/did/tools/animations.py
new file mode 100644
index 0000000000..e1d9de603f
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/tools/animations.py
@@ -0,0 +1,49 @@
+import json
+from typing import Any, Union
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.provider.builtin.did.did_appx import DIDApp
+from core.tools.tool.builtin_tool import BuiltinTool
+
+
+class AnimationsTool(BuiltinTool):
+ def _invoke(
+ self, user_id: str, tool_parameters: dict[str, Any]
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+        app = DIDApp(api_key=self.runtime.credentials['did_api_key'], base_url=self.runtime.credentials.get('base_url'))
+
+        config = {
+            'stitch': tool_parameters.get('stitch', True),
+            'mute': tool_parameters.get('mute'),
+            'result_format': tool_parameters.get('result_format') or 'mp4',
+            'logo_url': tool_parameters.get('logo_url'),
+            'logo_x': tool_parameters.get('logo_x'),
+            'logo_y': tool_parameters.get('logo_y'),
+        }
+ config = {k: v for k, v in config.items() if v is not None and v != ''}
+
+ options = {
+ 'source_url': tool_parameters['source_url'],
+ 'driver_url': tool_parameters.get('driver_url'),
+ 'config': config,
+ }
+ options = {k: v for k, v in options.items() if v is not None and v != ''}
+
+ if not options.get('source_url'):
+ raise ValueError('Source URL is required')
+
+ if config.get('logo_url'):
+ if not config.get('logo_x'):
+ raise ValueError('Logo X position is required when logo URL is provided')
+ if not config.get('logo_y'):
+ raise ValueError('Logo Y position is required when logo URL is provided')
+
+ animations_result = app.animations(params=options, wait=True)
+
+ if not isinstance(animations_result, str):
+ animations_result = json.dumps(animations_result, ensure_ascii=False, indent=4)
+
+ if not animations_result:
+ return self.create_text_message('D-ID animations request failed.')
+
+ return self.create_text_message(animations_result)
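The animations tool prunes unset parameters before calling the API; note that the `v is not None and v != ''` filter deliberately keeps `False` (e.g. `mute`) while dropping `None` and empty strings. A small sketch (`prune` is an illustrative helper, not part of the patch):

```python
def prune(params: dict) -> dict:
    """Drop unset values the way the tool does; False and 0 survive."""
    return {k: v for k, v in params.items() if v is not None and v != ''}

print(prune({'stitch': True, 'mute': False,
             'driver_url': None, 'result_format': ''}))
```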
diff --git a/api/core/tools/provider/builtin/did/tools/animations.yaml b/api/core/tools/provider/builtin/did/tools/animations.yaml
new file mode 100644
index 0000000000..2a2036c7b2
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/tools/animations.yaml
@@ -0,0 +1,86 @@
+identity:
+ name: animations
+ author: Matri Qi
+ label:
+ en_US: Animations
+description:
+ human:
+    en_US: Animations enables creating videos that match head movements, expressions, emotions, and voice from a driver video and image.
+  llm: Animations enables creating videos that match head movements, expressions, emotions, and voice from a driver video and image.
+parameters:
+ - name: source_url
+ type: string
+ required: true
+ label:
+ en_US: source url
+ human_description:
+ en_US: The URL of the source image to be animated by the driver video, or a selection from the list of provided studio actors.
+ llm_description: The URL of the source image to be animated by the driver video, or a selection from the list of provided studio actors.
+ form: llm
+ - name: driver_url
+ type: string
+ required: false
+ label:
+ en_US: driver url
+ human_description:
+ en_US: The URL of the driver video to drive the animation, or a provided driver name from D-ID.
+ form: form
+ - name: mute
+ type: boolean
+ required: false
+ label:
+ en_US: mute
+ human_description:
+ en_US: Mutes the driver sound in the animated video result, defaults to true
+ form: form
+ - name: stitch
+ type: boolean
+ required: false
+ label:
+ en_US: stitch
+ human_description:
+      en_US: If enabled, the driver video will be stitched with the animated head video.
+ form: form
+ - name: logo_url
+ type: string
+ required: false
+ label:
+ en_US: logo url
+ human_description:
+ en_US: The URL of the logo image to be added to the animation video.
+ form: form
+ - name: logo_x
+ type: number
+ required: false
+ label:
+ en_US: logo position x
+ human_description:
+ en_US: The x position of the logo image in the animation video. It's required when logo url is provided.
+ form: form
+ - name: logo_y
+ type: number
+ required: false
+ label:
+ en_US: logo position y
+ human_description:
+ en_US: The y position of the logo image in the animation video. It's required when logo url is provided.
+ form: form
+ - name: result_format
+ type: string
+ default: mp4
+ required: false
+ label:
+ en_US: result format
+ human_description:
+ en_US: The format of the result video.
+ form: form
+ options:
+ - value: mp4
+ label:
+ en_US: mp4
+ - value: gif
+ label:
+ en_US: gif
+ - value: mov
+ label:
+ en_US: mov
diff --git a/api/core/tools/provider/builtin/did/tools/talks.py b/api/core/tools/provider/builtin/did/tools/talks.py
new file mode 100644
index 0000000000..06b2c4cb2f
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/tools/talks.py
@@ -0,0 +1,65 @@
+import json
+from typing import Any, Union
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.provider.builtin.did.did_appx import DIDApp
+from core.tools.tool.builtin_tool import BuiltinTool
+
+
+class TalksTool(BuiltinTool):
+ def _invoke(
+ self, user_id: str, tool_parameters: dict[str, Any]
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+        app = DIDApp(api_key=self.runtime.credentials['did_api_key'], base_url=self.runtime.credentials.get('base_url'))
+
+ driver_expressions_str = tool_parameters.get('driver_expressions')
+ driver_expressions = json.loads(driver_expressions_str) if driver_expressions_str else None
+
+ script = {
+ 'type': tool_parameters.get('script_type') or 'text',
+ 'input': tool_parameters.get('text_input'),
+ 'audio_url': tool_parameters.get('audio_url'),
+ 'reduce_noise': tool_parameters.get('audio_reduce_noise', False),
+ }
+ script = {k: v for k, v in script.items() if v is not None and v != ''}
+ config = {
+ 'stitch': tool_parameters.get('stitch', True),
+ 'sharpen': tool_parameters.get('sharpen'),
+ 'fluent': tool_parameters.get('fluent'),
+ 'result_format': tool_parameters.get('result_format') or 'mp4',
+ 'pad_audio': tool_parameters.get('pad_audio'),
+ 'driver_expressions': driver_expressions,
+ }
+ config = {k: v for k, v in config.items() if v is not None and v != ''}
+
+ options = {
+ 'source_url': tool_parameters['source_url'],
+ 'driver_url': tool_parameters.get('driver_url'),
+ 'script': script,
+ 'config': config,
+ }
+ options = {k: v for k, v in options.items() if v is not None and v != ''}
+
+ if not options.get('source_url'):
+ raise ValueError('Source URL is required')
+
+ if script.get('type') == 'audio':
+ script.pop('input', None)
+ if not script.get('audio_url'):
+ raise ValueError('Audio URL is required for audio script type')
+
+ if script.get('type') == 'text':
+ script.pop('audio_url', None)
+ script.pop('reduce_noise', None)
+ if not script.get('input'):
+ raise ValueError('Text input is required for text script type')
+
+ talks_result = app.talks(params=options, wait=True)
+
+ if not isinstance(talks_result, str):
+ talks_result = json.dumps(talks_result, ensure_ascii=False, indent=4)
+
+ if not talks_result:
+ return self.create_text_message('D-ID talks request failed.')
+
+ return self.create_text_message(talks_result)
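The talks tool normalizes the script dict by script type: audio scripts drop `input`, text scripts drop `audio_url` and `reduce_noise`, and each type validates its required field. Sketched as a standalone helper (hypothetical name, not part of the patch):

```python
def normalize_script(script: dict) -> dict:
    """Mirror the talks tool: keep only fields relevant to the script type."""
    script = dict(script)
    if script.get('type') == 'audio':
        script.pop('input', None)
        if not script.get('audio_url'):
            raise ValueError('Audio URL is required for audio script type')
    elif script.get('type') == 'text':
        script.pop('audio_url', None)
        script.pop('reduce_noise', None)
        if not script.get('input'):
            raise ValueError('Text input is required for text script type')
    return script

print(normalize_script({'type': 'text', 'input': 'hi',
                        'audio_url': 'x', 'reduce_noise': False}))
```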
diff --git a/api/core/tools/provider/builtin/did/tools/talks.yaml b/api/core/tools/provider/builtin/did/tools/talks.yaml
new file mode 100644
index 0000000000..88d4305129
--- /dev/null
+++ b/api/core/tools/provider/builtin/did/tools/talks.yaml
@@ -0,0 +1,126 @@
+identity:
+ name: talks
+ author: Matri Qi
+ label:
+ en_US: Talks
+description:
+ human:
+ en_US: Talks enables the creation of realistic talking head videos from text or audio inputs.
+ llm: Talks enables the creation of realistic talking head videos from text or audio inputs.
+parameters:
+ - name: source_url
+ type: string
+ required: true
+ label:
+ en_US: source url
+ human_description:
+ en_US: The URL of the source image to be animated by the driver video, or a selection from the list of provided studio actors.
+ llm_description: The URL of the source image to be animated by the driver video, or a selection from the list of provided studio actors.
+ form: llm
+ - name: driver_url
+ type: string
+ required: false
+ label:
+ en_US: driver url
+ human_description:
+ en_US: The URL of the driver video to drive the talk, or a provided driver name from D-ID.
+ form: form
+ - name: script_type
+ type: string
+ required: false
+ label:
+ en_US: script type
+ human_description:
+ en_US: The type of the script.
+ form: form
+ options:
+ - value: text
+ label:
+ en_US: text
+ - value: audio
+ label:
+ en_US: audio
+ - name: text_input
+ type: string
+ required: false
+ label:
+ en_US: text input
+ human_description:
+ en_US: The text input to be spoken by the talking head. Required when script type is text.
+ form: form
+ - name: audio_url
+ type: string
+ required: false
+ label:
+ en_US: audio url
+ human_description:
+ en_US: The URL of the audio file to be spoken by the talking head. Required when script type is audio.
+ form: form
+ - name: audio_reduce_noise
+ type: boolean
+ required: false
+ label:
+ en_US: audio reduce noise
+ human_description:
+ en_US: If enabled, the audio will be processed to reduce noise before being spoken by the talking head. It only works when script type is audio.
+ form: form
+ - name: stitch
+ type: boolean
+ required: false
+ label:
+ en_US: stitch
+ human_description:
+ en_US: If enabled, the driver video will be stitched with the talking head video.
+ form: form
+ - name: sharpen
+ type: boolean
+ required: false
+ label:
+ en_US: sharpen
+ human_description:
+ en_US: If enabled, the talking head video will be sharpened.
+ form: form
+ - name: result_format
+ type: string
+ required: false
+ label:
+ en_US: result format
+ human_description:
+ en_US: The format of the result video.
+ form: form
+ options:
+ - value: mp4
+ label:
+ en_US: mp4
+ - value: gif
+ label:
+ en_US: gif
+ - value: mov
+ label:
+ en_US: mov
+ - name: fluent
+ type: boolean
+ required: false
+ label:
+ en_US: fluent
+ human_description:
+      en_US: Interpolate between the last and first frames of the driver video. When used together with pad_audio, this can create a seamless transition between videos of the same driver.
+ form: form
+ - name: pad_audio
+ type: number
+ required: false
+ label:
+ en_US: pad audio
+ human_description:
+      en_US: Pad the audio with silence at the end (given in seconds). This will increase the video duration and the credits it consumes.
+ form: form
+ min: 1
+ max: 60
+ - name: driver_expressions
+ type: string
+ required: false
+ label:
+ en_US: driver expressions
+ human_description:
+      en_US: Timed expressions for the animation, given as a JSON-array-style string. See the D-ID documentation (https://docs.d-id.com/reference/createtalk) for more information.
+ form: form
diff --git a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_ai.yaml b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_ai.yaml
index 1913eed1d1..21cbae6bd3 100644
--- a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_ai.yaml
+++ b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_ai.yaml
@@ -25,9 +25,9 @@ parameters:
type: select
required: true
options:
- - value: gpt-3.5
+ - value: gpt-4o-mini
label:
- en_US: GPT-3.5
+ en_US: GPT-4o-mini
- value: claude-3-haiku
label:
en_US: Claude 3
diff --git a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_img.py b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_img.py
index ed873cdcf6..bca53f6b4b 100644
--- a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_img.py
+++ b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_img.py
@@ -2,6 +2,7 @@ from typing import Any
from duckduckgo_search import DDGS
+from core.file.file_obj import FileTransferMethod
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.tool.builtin_tool import BuiltinTool
@@ -21,6 +22,7 @@ class DuckDuckGoImageSearchTool(BuiltinTool):
response = DDGS().images(**query_dict)
result = []
for res in response:
+ res['transfer_method'] = FileTransferMethod.REMOTE_URL
msg = ToolInvokeMessage(type=ToolInvokeMessage.MessageType.IMAGE_LINK,
message=res.get('image'),
save_as='',
diff --git a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.py b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.py
index 442f29f33d..dfaeb734d8 100644
--- a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.py
+++ b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.py
@@ -21,23 +21,16 @@ class DuckDuckGoSearchTool(BuiltinTool):
"""
Tool for performing a search using DuckDuckGo search engine.
"""
-
- def _invoke(self, user_id: str, tool_parameters: dict[str, Any]) -> ToolInvokeMessage:
- query = tool_parameters.get('query', '')
- result_type = tool_parameters.get('result_type', 'text')
- max_results = tool_parameters.get('max_results', 10)
+ def _invoke(self, user_id: str, tool_parameters: dict[str, Any]) -> ToolInvokeMessage | list[ToolInvokeMessage]:
+ query = tool_parameters.get('query')
+ max_results = tool_parameters.get('max_results', 5)
require_summary = tool_parameters.get('require_summary', False)
response = DDGS().text(query, max_results=max_results)
-
- if result_type == 'link':
- results = [f"[{res.get('title')}]({res.get('href')})" for res in response]
- results = "\n".join(results)
- return self.create_link_message(link=results)
- results = [res.get("body") for res in response]
- results = "\n".join(results)
if require_summary:
+ results = "\n".join([res.get("body") for res in response])
results = self.summary_results(user_id=user_id, content=results, query=query)
- return self.create_text_message(text=results)
+ return self.create_text_message(text=results)
+ return [self.create_json_message(res) for res in response]
def summary_results(self, user_id: str, content: str, query: str) -> str:
prompt = SUMMARY_PROMPT.format(query=query, content=content)
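The rewrite above changes the search tool's output contract: instead of a joined text blob (or links), it returns one JSON message per result unless a summary is requested. A rough sketch of the branching, with plain dicts standing in for `ToolInvokeMessage` (`to_messages` and the sample data are illustrative, not part of the patch):

```python
def to_messages(response: list, require_summary: bool):
    """Return a joined text summary, or one JSON payload per result."""
    if require_summary:
        return "\n".join(res.get("body", "") for res in response)
    # In the tool, each dict becomes self.create_json_message(res).
    return [res for res in response]

fake = [{"title": "a", "href": "u1", "body": "first"},
        {"title": "b", "href": "u2", "body": "second"}]
print(to_messages(fake, require_summary=True))  # first\nsecond
```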
diff --git a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.yaml b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.yaml
index c427a37fe6..333c0cb093 100644
--- a/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.yaml
+++ b/api/core/tools/provider/builtin/duckduckgo/tools/ddgo_search.yaml
@@ -28,29 +28,6 @@ parameters:
label:
en_US: Max results
zh_Hans: 最大结果数量
- human_description:
- en_US: The max results.
- zh_Hans: 最大结果数量
- form: form
- - name: result_type
- type: select
- required: true
- options:
- - value: text
- label:
- en_US: text
- zh_Hans: 文本
- - value: link
- label:
- en_US: link
- zh_Hans: 链接
- default: text
- label:
- en_US: Result type
- zh_Hans: 结果类型
- human_description:
- en_US: used for selecting the result type, text or link
- zh_Hans: 用于选择结果类型,使用文本还是链接进行展示
form: form
- name: require_summary
type: boolean
diff --git a/api/core/tools/provider/builtin/json_process/tools/insert.py b/api/core/tools/provider/builtin/json_process/tools/insert.py
index 27e34f1ff3..48d1bdcab4 100644
--- a/api/core/tools/provider/builtin/json_process/tools/insert.py
+++ b/api/core/tools/provider/builtin/json_process/tools/insert.py
@@ -36,21 +36,26 @@ class JSONParseTool(BuiltinTool):
# get create path
create_path = tool_parameters.get('create_path', False)
+        # get value_decode flag.
+        # if true, the new value will be decoded to a dict
+ value_decode = tool_parameters.get('value_decode', False)
+
ensure_ascii = tool_parameters.get('ensure_ascii', True)
try:
- result = self._insert(content, query, new_value, ensure_ascii, index, create_path)
+ result = self._insert(content, query, new_value, ensure_ascii, value_decode, index, create_path)
return self.create_text_message(str(result))
except Exception:
return self.create_text_message('Failed to insert JSON content')
- def _insert(self, origin_json, query, new_value, ensure_ascii: bool, index=None, create_path=False):
+ def _insert(self, origin_json, query, new_value, ensure_ascii: bool, value_decode: bool, index=None, create_path=False):
try:
input_data = json.loads(origin_json)
expr = parse(query)
- try:
- new_value = json.loads(new_value)
- except json.JSONDecodeError:
- new_value = new_value
+ if value_decode is True:
+ try:
+ new_value = json.loads(new_value)
+ except json.JSONDecodeError:
+ return "Cannot decode new value to json object"
matches = expr.find(input_data)
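The new `value_decode` flag makes JSON decoding explicit: previously a value that happened to parse as JSON was silently decoded, now decoding only happens on request and a failure is reported instead of swallowed. The semantics, sketched (`decode_new_value` is an illustrative helper, not part of the patch):

```python
import json

def decode_new_value(new_value: str, value_decode: bool):
    """With value_decode on, the raw string must parse as JSON or we report an error."""
    if value_decode:
        try:
            return json.loads(new_value)
        except json.JSONDecodeError:
            return "Cannot decode new value to json object"
    return new_value  # inserted verbatim as a string

print(decode_new_value('{"a": 1}', True))   # decoded dict
print(decode_new_value('{"a": 1}', False))  # raw string, unchanged
```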
diff --git a/api/core/tools/provider/builtin/json_process/tools/insert.yaml b/api/core/tools/provider/builtin/json_process/tools/insert.yaml
index 63e7816455..21b51312da 100644
--- a/api/core/tools/provider/builtin/json_process/tools/insert.yaml
+++ b/api/core/tools/provider/builtin/json_process/tools/insert.yaml
@@ -47,10 +47,22 @@ parameters:
pt_BR: New Value
human_description:
en_US: New Value
- zh_Hans: 新值
+ zh_Hans: 插入的新值
pt_BR: New Value
llm_description: New Value to insert
form: llm
+ - name: value_decode
+ type: boolean
+ default: false
+ label:
+ en_US: Decode Value
+ zh_Hans: 解码值
+ pt_BR: Decode Value
+ human_description:
+ en_US: Whether to decode the value to a JSON object
+ zh_Hans: 是否将值解码为 JSON 对象
+ pt_BR: Whether to decode the value to a JSON object
+ form: form
- name: create_path
type: select
required: true
diff --git a/api/core/tools/provider/builtin/json_process/tools/replace.py b/api/core/tools/provider/builtin/json_process/tools/replace.py
index be696bce0e..b19198aa93 100644
--- a/api/core/tools/provider/builtin/json_process/tools/replace.py
+++ b/api/core/tools/provider/builtin/json_process/tools/replace.py
@@ -35,6 +35,10 @@ class JSONReplaceTool(BuiltinTool):
if not replace_model:
return self.create_text_message('Invalid parameter replace_model')
+        # get value_decode flag.
+        # if true, the replace value will be decoded to a dict
+ value_decode = tool_parameters.get('value_decode', False)
+
ensure_ascii = tool_parameters.get('ensure_ascii', True)
try:
if replace_model == 'pattern':
@@ -42,17 +46,17 @@ class JSONReplaceTool(BuiltinTool):
replace_pattern = tool_parameters.get('replace_pattern', '')
if not replace_pattern:
return self.create_text_message('Invalid parameter replace_pattern')
- result = self._replace_pattern(content, query, replace_pattern, replace_value, ensure_ascii)
+ result = self._replace_pattern(content, query, replace_pattern, replace_value, ensure_ascii, value_decode)
elif replace_model == 'key':
result = self._replace_key(content, query, replace_value, ensure_ascii)
elif replace_model == 'value':
- result = self._replace_value(content, query, replace_value, ensure_ascii)
+ result = self._replace_value(content, query, replace_value, ensure_ascii, value_decode)
return self.create_text_message(str(result))
except Exception:
return self.create_text_message('Failed to replace JSON content')
# Replace pattern
- def _replace_pattern(self, content: str, query: str, replace_pattern: str, replace_value: str, ensure_ascii: bool) -> str:
+ def _replace_pattern(self, content: str, query: str, replace_pattern: str, replace_value: str, ensure_ascii: bool, value_decode: bool) -> str:
try:
input_data = json.loads(content)
expr = parse(query)
@@ -61,6 +65,12 @@ class JSONReplaceTool(BuiltinTool):
for match in matches:
new_value = match.value.replace(replace_pattern, replace_value)
+ if value_decode is True:
+ try:
+ new_value = json.loads(new_value)
+ except json.JSONDecodeError:
+ return "Cannot decode replace value to json object"
+
match.full_path.update(input_data, new_value)
return json.dumps(input_data, ensure_ascii=ensure_ascii)
@@ -92,10 +102,15 @@ class JSONReplaceTool(BuiltinTool):
return str(e)
# Replace value
- def _replace_value(self, content: str, query: str, replace_value: str, ensure_ascii: bool) -> str:
+ def _replace_value(self, content: str, query: str, replace_value: str, ensure_ascii: bool, value_decode: bool) -> str:
try:
input_data = json.loads(content)
expr = parse(query)
+ if value_decode is True:
+ try:
+ replace_value = json.loads(replace_value)
+ except json.JSONDecodeError:
+ return "Cannot decode replace value to json object"
matches = expr.find(input_data)
diff --git a/api/core/tools/provider/builtin/json_process/tools/replace.yaml b/api/core/tools/provider/builtin/json_process/tools/replace.yaml
index cf4b1dc63f..ae238b1fbc 100644
--- a/api/core/tools/provider/builtin/json_process/tools/replace.yaml
+++ b/api/core/tools/provider/builtin/json_process/tools/replace.yaml
@@ -60,10 +60,22 @@ parameters:
pt_BR: Replace Value
human_description:
en_US: New Value
- zh_Hans: New Value
+ zh_Hans: 新值
pt_BR: New Value
llm_description: New Value to replace
form: llm
+ - name: value_decode
+ type: boolean
+ default: false
+ label:
+ en_US: Decode Value
+ zh_Hans: 解码值
+ pt_BR: Decode Value
+ human_description:
+ en_US: Whether to decode the value to a JSON object (Does not apply to replace key)
+ zh_Hans: 是否将值解码为 JSON 对象 (不适用于键替换)
+ pt_BR: Whether to decode the value to a JSON object (Does not apply to replace key)
+ form: form
- name: replace_model
type: select
required: true
diff --git a/api/core/tools/provider/builtin/regex/_assets/icon.svg b/api/core/tools/provider/builtin/regex/_assets/icon.svg
new file mode 100644
index 0000000000..0231a2b4aa
--- /dev/null
+++ b/api/core/tools/provider/builtin/regex/_assets/icon.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/regex/regex.py b/api/core/tools/provider/builtin/regex/regex.py
new file mode 100644
index 0000000000..d38ae1b292
--- /dev/null
+++ b/api/core/tools/provider/builtin/regex/regex.py
@@ -0,0 +1,19 @@
+from typing import Any
+
+from core.tools.errors import ToolProviderCredentialValidationError
+from core.tools.provider.builtin.regex.tools.regex_extract import RegexExpressionTool
+from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
+
+
+class RegexProvider(BuiltinToolProviderController):
+ def _validate_credentials(self, credentials: dict[str, Any]) -> None:
+ try:
+ RegexExpressionTool().invoke(
+ user_id='',
+ tool_parameters={
+ 'content': '1+(2+3)*4',
+ 'expression': r'(\d+)',
+ },
+ )
+ except Exception as e:
+ raise ToolProviderCredentialValidationError(str(e))
diff --git a/api/core/tools/provider/builtin/regex/regex.yaml b/api/core/tools/provider/builtin/regex/regex.yaml
new file mode 100644
index 0000000000..d05776f214
--- /dev/null
+++ b/api/core/tools/provider/builtin/regex/regex.yaml
@@ -0,0 +1,15 @@
+identity:
+ author: zhuhao
+ name: regex
+ label:
+ en_US: Regex
+ zh_Hans: 正则表达式提取
+ pt_BR: Regex
+ description:
+ en_US: A tool for regex extraction.
+ zh_Hans: 一个用于正则表达式内容提取的工具。
+ pt_BR: A tool for regex extraction.
+ icon: icon.svg
+ tags:
+ - utilities
+ - productivity
diff --git a/api/core/tools/provider/builtin/regex/tools/regex_extract.py b/api/core/tools/provider/builtin/regex/tools/regex_extract.py
new file mode 100644
index 0000000000..5d8f013d0d
--- /dev/null
+++ b/api/core/tools/provider/builtin/regex/tools/regex_extract.py
@@ -0,0 +1,27 @@
+import re
+from typing import Any, Union
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.tool.builtin_tool import BuiltinTool
+
+
+class RegexExpressionTool(BuiltinTool):
+ def _invoke(self,
+ user_id: str,
+ tool_parameters: dict[str, Any],
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+ """
+ invoke tools
+ """
+ # get expression
+ content = tool_parameters.get('content', '').strip()
+ if not content:
+ return self.create_text_message('Invalid content')
+ expression = tool_parameters.get('expression', '').strip()
+ if not expression:
+ return self.create_text_message('Invalid expression')
+ try:
+ result = re.findall(expression, content)
+ return self.create_text_message(str(result))
+ except Exception as e:
+ return self.create_text_message(f'Failed to extract result, error: {str(e)}')
\ No newline at end of file
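The provider's credential check exercises this tool with `re.findall` and a capturing group, which returns the captured digit runs for the sample input:

```python
import re

# Same pattern/content pair the regex provider uses for validation.
print(re.findall(r'(\d+)', '1+(2+3)*4'))  # ['1', '2', '3', '4']
```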
diff --git a/api/core/tools/provider/builtin/regex/tools/regex_extract.yaml b/api/core/tools/provider/builtin/regex/tools/regex_extract.yaml
new file mode 100644
index 0000000000..de4100def1
--- /dev/null
+++ b/api/core/tools/provider/builtin/regex/tools/regex_extract.yaml
@@ -0,0 +1,38 @@
+identity:
+ name: regex_extract
+ author: zhuhao
+ label:
+ en_US: Regex Extract
+ zh_Hans: 正则表达式内容提取
+ pt_BR: Regex Extract
+description:
+ human:
+ en_US: A tool for extracting matching content using regular expressions.
+ zh_Hans: 一个用于利用正则表达式提取匹配内容结果的工具。
+ pt_BR: A tool for extracting matching content using regular expressions.
+ llm: A tool for extracting matching content using regular expressions.
+parameters:
+ - name: content
+ type: string
+ required: true
+ label:
+ en_US: Content to be extracted
+ zh_Hans: 内容
+ pt_BR: Content to be extracted
+ human_description:
+ en_US: Content to be extracted
+ zh_Hans: 内容
+ pt_BR: Content to be extracted
+ form: llm
+ - name: expression
+ type: string
+ required: true
+ label:
+ en_US: Regular expression
+ zh_Hans: 正则表达式
+ pt_BR: Regular expression
+ human_description:
+ en_US: Regular expression
+ zh_Hans: 正则表达式
+ pt_BR: Regular expression
+ form: llm
diff --git a/api/core/tools/provider/builtin/searxng/docker/settings.yml b/api/core/tools/provider/builtin/searxng/docker/settings.yml
new file mode 100644
index 0000000000..18e1868800
--- /dev/null
+++ b/api/core/tools/provider/builtin/searxng/docker/settings.yml
@@ -0,0 +1,2501 @@
+general:
+ # Debug mode, only for development. Is overwritten by ${SEARXNG_DEBUG}
+ debug: false
+ # displayed name
+ instance_name: "searxng"
+ # For example: https://example.com/privacy
+ privacypolicy_url: false
+ # use true to use your own donation page written in searx/info/en/donate.md
+ # use false to disable the donation link
+ donation_url: false
+ # mailto:contact@example.com
+ contact_url: false
+ # record stats
+ enable_metrics: true
+
+brand:
+ new_issue_url: https://github.com/searxng/searxng/issues/new
+ docs_url: https://docs.searxng.org/
+ public_instances: https://searx.space
+ wiki_url: https://github.com/searxng/searxng/wiki
+ issue_url: https://github.com/searxng/searxng/issues
+ # custom:
+ # maintainer: "Jon Doe"
+ # # Custom entries in the footer: [title]: [link]
+ # links:
+ # Uptime: https://uptime.searxng.org/history/darmarit-org
+ # About: "https://searxng.org"
+
+search:
+ # Filter results. 0: None, 1: Moderate, 2: Strict
+ safe_search: 0
+ # Existing autocomplete backends: "dbpedia", "duckduckgo", "google", "yandex", "mwmbl",
+ # "seznam", "startpage", "stract", "swisscows", "qwant", "wikipedia" - leave blank to turn it off
+ # by default.
+ autocomplete: ""
+  # minimum characters to type before the autocompleter starts
+ autocomplete_min: 4
+ # Default search language - leave blank to detect from browser information or
+ # use codes from 'languages.py'
+ default_lang: "auto"
+ # max_page: 0 # if engine supports paging, 0 means unlimited numbers of pages
+ # Available languages
+ # languages:
+ # - all
+ # - en
+ # - en-US
+ # - de
+ # - it-IT
+ # - fr
+ # - fr-BE
+ # ban time in seconds after engine errors
+ ban_time_on_fail: 5
+ # max ban time in seconds after engine errors
+ max_ban_time_on_fail: 120
+ suspended_times:
+ # Engine suspension time after error (in seconds; set to 0 to disable)
+ # For error "Access denied" and "HTTP error [402, 403]"
+ SearxEngineAccessDenied: 86400
+ # For error "CAPTCHA"
+ SearxEngineCaptcha: 86400
+    # For error "Too many requests" and "HTTP error 429"
+ SearxEngineTooManyRequests: 3600
+ # Cloudflare CAPTCHA
+ cf_SearxEngineCaptcha: 1296000
+ cf_SearxEngineAccessDenied: 86400
+ # ReCAPTCHA
+ recaptcha_SearxEngineCaptcha: 604800
+
+ # remove format to deny access, use lower case.
+ # formats: [html, csv, json, rss]
+ formats:
+ - html
+ - json
+
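Keeping `json` in `search.formats` above is what allows API clients (such as Dify's SearXNG tool) to query this instance programmatically instead of scraping HTML. A minimal sketch of composing such a request URL (the helper name and the localhost base URL are assumptions for illustration; the port matches the `base_url` configured in the `server:` section below):

```python
from urllib.parse import urlencode


def build_search_url(base_url: str, query: str, fmt: str = "json") -> str:
    """Compose a SearXNG search URL; `fmt` must be listed under search.formats."""
    params = urlencode({"q": query, "format": fmt})
    return f"{base_url.rstrip('/')}/search?{params}"


print(build_search_url("http://localhost:8081/", "dify"))
# → http://localhost:8081/search?q=dify&format=json
```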
+server:
+ # Is overwritten by ${SEARXNG_PORT} and ${SEARXNG_BIND_ADDRESS}
+ port: 8888
+ bind_address: "127.0.0.1"
+ # public URL of the instance, to ensure correct inbound links. Is overwritten
+ # by ${SEARXNG_URL}.
+ base_url: http://0.0.0.0:8081/ # "http://example.com/location"
+  # rate limit the number of requests on the instance, block some bots.
+ # Is overwritten by ${SEARXNG_LIMITER}
+ limiter: false
+ # enable features designed only for public instances.
+ # Is overwritten by ${SEARXNG_PUBLIC_INSTANCE}
+ public_instance: false
+
+  # If your instance has its own /etc/searxng/settings.yml file, then set the
+  # following values there.
+
+ secret_key: "772ba36386fb56d0f8fe818941552dabbe69220d4c0eb4a385a5729cdbc20c2d" # Is overwritten by ${SEARXNG_SECRET}
+ # Proxy image results through SearXNG. Is overwritten by ${SEARXNG_IMAGE_PROXY}
+ image_proxy: false
+ # 1.0 and 1.1 are supported
+ http_protocol_version: "1.0"
+ # POST queries are more secure as they don't show up in history but may cause
+ # problems when using Firefox containers
+ method: "POST"
+ default_http_headers:
+ X-Content-Type-Options: nosniff
+ X-Download-Options: noopen
+ X-Robots-Tag: noindex, nofollow
+ Referrer-Policy: no-referrer
+
+redis:
+ # URL to connect redis database. Is overwritten by ${SEARXNG_REDIS_URL}.
+ # https://docs.searxng.org/admin/settings/settings_redis.html#settings-redis
+ url: false
+
+ui:
+ # Custom static path - leave it blank if you didn't change
+ static_path: ""
+ # Is overwritten by ${SEARXNG_STATIC_USE_HASH}.
+ static_use_hash: false
+ # Custom templates path - leave it blank if you didn't change
+ templates_path: ""
+  # query_in_title: When true, the result page's title contains the query.
+  # This decreases privacy, since the browser can record the page titles.
+ query_in_title: false
+ # infinite_scroll: When true, automatically loads the next page when scrolling to bottom of the current page.
+ infinite_scroll: false
+ # ui theme
+ default_theme: simple
+  # center the results?
+ center_alignment: false
+ # URL prefix of the internet archive, don't forget trailing slash (if needed).
+ # cache_url: "https://webcache.googleusercontent.com/search?q=cache:"
+ # Default interface locale - leave blank to detect from browser information or
+ # use codes from the 'locales' config section
+ default_locale: ""
+ # Open result links in a new tab by default
+ # results_on_new_tab: false
+ theme_args:
+ # style of simple theme: auto, light, dark
+ simple_style: auto
+ # Perform search immediately if a category selected.
+ # Disable to select multiple categories at once and start the search manually.
+ search_on_category_select: true
+ # Hotkeys: default or vim
+ hotkeys: default
+
+# Lock arbitrary settings on the preferences page. To find the ID of the user
+# setting you want to lock, check the ID of the form on the page "preferences".
+#
+# preferences:
+# lock:
+# - language
+# - autocomplete
+# - method
+# - query_in_title
+
+# searx supports result proxification using an external service:
+# https://github.com/asciimoo/morty. Uncomment the section below if you have a
+# running morty proxy; the key is base64 encoded (keep the !!binary notation).
+# Note: since commit af77ec3, morty accepts a base64 encoded key.
+#
+# result_proxy:
+# url: http://127.0.0.1:3000/
+# # the key is a base64 encoded string, the YAML !!binary prefix is optional
+# key: !!binary "your_morty_proxy_key"
+# # [true|false] enable the "proxy" button next to each result
+# proxify_results: true
+
+# communication with search engines
+#
+outgoing:
+  # default timeout in seconds, can be overridden per engine
+ request_timeout: 3.0
+ # the maximum timeout in seconds
+ # max_request_timeout: 10.0
+  # suffix of searx_useragent, could contain information like an email address
+  # of the administrator
+ useragent_suffix: ""
+ # The maximum number of concurrent connections that may be established.
+ pool_connections: 100
+ # Allow the connection pool to maintain keep-alive connections below this
+ # point.
+ pool_maxsize: 20
+ # See https://www.python-httpx.org/http2/
+ enable_http2: true
+ # uncomment below section if you want to use a custom server certificate
+ # see https://www.python-httpx.org/advanced/#changing-the-verification-defaults
+ # and https://www.python-httpx.org/compatibility/#ssl-configuration
+ # verify: ~/.mitmproxy/mitmproxy-ca-cert.cer
+ #
+  # uncomment the section below if you want to use a proxy, see:
+  # https://2.python-requests.org/en/latest/user/advanced/#proxies
+  # SOCKS proxies are also supported, see:
+  # https://2.python-requests.org/en/latest/user/advanced/#socks
+ #
+ # proxies:
+ # all://:
+ # - http://host.docker.internal:1080
+ #
+ # using_tor_proxy: true
+ #
+ # Extra seconds to add in order to account for the time taken by the proxy
+ #
+ # extra_proxy_timeout: 10
+ #
+ # uncomment below section only if you have more than one network interface
+ # which can be the source of outgoing search requests
+ #
+ # source_ips:
+ # - 1.1.1.1
+ # - 1.1.1.2
+ # - fe80::/126
+
+# External plugin configuration, for more details see
+# https://docs.searxng.org/dev/plugins.html
+#
+# plugins:
+# - plugin1
+# - plugin2
+# - ...
+
+# Comment or un-comment plugin to activate / deactivate by default.
+#
+# enabled_plugins:
+# # these plugins are enabled if nothing is configured ..
+# - 'Hash plugin'
+# - 'Self Information'
+# - 'Tracker URL remover'
+# - 'Ahmia blacklist' # activation depends on outgoing.using_tor_proxy
+# # these plugins are disabled if nothing is configured ..
+# - 'Hostnames plugin' # see 'hostnames' configuration below
+# - 'Basic Calculator'
+# - 'Open Access DOI rewrite'
+# - 'Tor check plugin'
+#   # Read the docs before activating: auto-detection of the language could be
+#   # detrimental to user expectations; users can activate the plugin in the
+#   # preferences if they want.
+# - 'Autodetect search language'
+
+# Configuration of the "Hostnames plugin":
+#
+# hostnames:
+# replace:
+# '(.*\.)?youtube\.com$': 'invidious.example.com'
+# '(.*\.)?youtu\.be$': 'invidious.example.com'
+# '(.*\.)?reddit\.com$': 'teddit.example.com'
+# '(.*\.)?redd\.it$': 'teddit.example.com'
+# '(www\.)?twitter\.com$': 'nitter.example.com'
+# remove:
+# - '(.*\.)?facebook.com$'
+# low_priority:
+# - '(.*\.)?google(\..*)?$'
+# high_priority:
+# - '(.*\.)?wikipedia.org$'
+#
+# Alternatively you can use external files for configuring the "Hostnames plugin":
+#
+# hostnames:
+# replace: 'rewrite-hosts.yml'
+#
+# Content of 'rewrite-hosts.yml' (place the file in the same directory as 'settings.yml'):
+# '(.*\.)?youtube\.com$': 'invidious.example.com'
+# '(.*\.)?youtu\.be$': 'invidious.example.com'
+#
+
+checker:
+ # disable checker when in debug mode
+ off_when_debug: true
+
+ # use "scheduling: false" to disable scheduling
+ # scheduling: interval or int
+
+ # to activate the scheduler:
+ # * uncomment "scheduling" section
+ # * add "cache2 = name=searxngcache,items=2000,blocks=2000,blocksize=4096,bitmap=1"
+ # to your uwsgi.ini
+
+ # scheduling:
+ # start_after: [300, 1800] # delay to start the first run of the checker
+ # every: [86400, 90000] # how often the checker runs
+
+ # additional tests: only for the YAML anchors (see the engines section)
+ #
+ additional_tests:
+ rosebud: &test_rosebud
+ matrix:
+ query: rosebud
+ lang: en
+ result_container:
+ - not_empty
+ - ['one_title_contains', 'citizen kane']
+ test:
+ - unique_results
+
+ android: &test_android
+ matrix:
+ query: ['android']
+ lang: ['en', 'de', 'fr', 'zh-CN']
+ result_container:
+ - not_empty
+ - ['one_title_contains', 'google']
+ test:
+ - unique_results
+
+ # tests: only for the YAML anchors (see the engines section)
+ tests:
+ infobox: &tests_infobox
+ infobox:
+ matrix:
+ query: ["linux", "new york", "bbc"]
+ result_container:
+ - has_infobox
+
+categories_as_tabs:
+ general:
+ images:
+ videos:
+ news:
+ map:
+ music:
+ it:
+ science:
+ files:
+ social media:
+
+engines:
+ - name: 9gag
+ engine: 9gag
+ shortcut: 9g
+ disabled: true
+
+ - name: alpine linux packages
+ engine: alpinelinux
+ disabled: true
+ shortcut: alp
+
+ - name: annas archive
+ engine: annas_archive
+ disabled: true
+ shortcut: aa
+
+ # - name: annas articles
+ # engine: annas_archive
+ # shortcut: aaa
+ # # https://docs.searxng.org/dev/engines/online/annas_archive.html
+ # aa_content: 'magazine' # book_fiction, book_unknown, book_nonfiction, book_comic
+ # aa_ext: 'pdf' # pdf, epub, ..
+  #   aa_sort: 'oldest' # newest, oldest, largest, smallest
+
+ - name: apk mirror
+ engine: apkmirror
+ timeout: 4.0
+ shortcut: apkm
+ disabled: true
+
+ - name: apple app store
+ engine: apple_app_store
+ shortcut: aps
+ disabled: true
+
+ # Requires Tor
+ - name: ahmia
+ engine: ahmia
+ categories: onions
+ enable_http: true
+ shortcut: ah
+
+ - name: anaconda
+ engine: xpath
+ paging: true
+ first_page_num: 0
+ search_url: https://anaconda.org/search?q={query}&page={pageno}
+ results_xpath: //tbody/tr
+ url_xpath: ./td/h5/a[last()]/@href
+ title_xpath: ./td/h5
+ content_xpath: ./td[h5]/text()
+ categories: it
+ timeout: 6.0
+ shortcut: conda
+ disabled: true
+
+ - name: arch linux wiki
+ engine: archlinux
+ shortcut: al
+
+ - name: artic
+ engine: artic
+ shortcut: arc
+ timeout: 4.0
+
+ - name: arxiv
+ engine: arxiv
+ shortcut: arx
+ timeout: 4.0
+
+ - name: ask
+ engine: ask
+ shortcut: ask
+ disabled: true
+
+ # tmp suspended: dh key too small
+ # - name: base
+ # engine: base
+ # shortcut: bs
+
+ - name: bandcamp
+ engine: bandcamp
+ shortcut: bc
+ categories: music
+
+ - name: wikipedia
+ engine: wikipedia
+ shortcut: wp
+ # add "list" to the array to get results in the results list
+ display_type: ["infobox"]
+ base_url: 'https://{language}.wikipedia.org/'
+ categories: [general]
+
+ - name: bilibili
+ engine: bilibili
+ shortcut: bil
+ disabled: true
+
+ - name: bing
+ engine: bing
+ shortcut: bi
+ disabled: false
+
+ - name: bing images
+ engine: bing_images
+ shortcut: bii
+
+ - name: bing news
+ engine: bing_news
+ shortcut: bin
+
+ - name: bing videos
+ engine: bing_videos
+ shortcut: biv
+
+ - name: bitbucket
+ engine: xpath
+ paging: true
+ search_url: https://bitbucket.org/repo/all/{pageno}?name={query}
+ url_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]/@href
+ title_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]
+ content_xpath: //article[@class="repo-summary"]/p
+ categories: [it, repos]
+ timeout: 4.0
+ disabled: true
+ shortcut: bb
+ about:
+ website: https://bitbucket.org/
+ wikidata_id: Q2493781
+ official_api_documentation: https://developer.atlassian.com/bitbucket
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: bpb
+ engine: bpb
+ shortcut: bpb
+ disabled: true
+
+ - name: btdigg
+ engine: btdigg
+ shortcut: bt
+ disabled: true
+
+ - name: openverse
+ engine: openverse
+ categories: images
+ shortcut: opv
+
+ - name: media.ccc.de
+ engine: ccc_media
+ shortcut: c3tv
+ # We don't set language: de here because media.ccc.de is not just
+ # for a German audience. It contains many English videos and many
+ # German videos have English subtitles.
+ disabled: true
+
+ - name: chefkoch
+ engine: chefkoch
+ shortcut: chef
+ # to show premium or plus results too:
+ # skip_premium: false
+
+ # - name: core.ac.uk
+ # engine: core
+ # categories: science
+ # shortcut: cor
+ # # get your API key from: https://core.ac.uk/api-keys/register/
+ # api_key: 'unset'
+
+ - name: cppreference
+ engine: cppreference
+ shortcut: cpp
+ paging: false
+ disabled: true
+
+ - name: crossref
+ engine: crossref
+ shortcut: cr
+ timeout: 30
+ disabled: true
+
+ - name: crowdview
+ engine: json_engine
+ shortcut: cv
+ categories: general
+ paging: false
+ search_url: https://crowdview-next-js.onrender.com/api/search-v3?query={query}
+ results_query: results
+ url_query: link
+ title_query: title
+ content_query: snippet
+ disabled: true
+ about:
+ website: https://crowdview.ai/
+
+ - name: yep
+ engine: yep
+ shortcut: yep
+ categories: general
+ search_type: web
+ timeout: 5
+ disabled: true
+
+ - name: yep images
+ engine: yep
+ shortcut: yepi
+ categories: images
+ search_type: images
+ disabled: true
+
+ - name: yep news
+ engine: yep
+ shortcut: yepn
+ categories: news
+ search_type: news
+ disabled: true
+
+ - name: curlie
+ engine: xpath
+ shortcut: cl
+ categories: general
+ disabled: true
+ paging: true
+ lang_all: ''
+ search_url: https://curlie.org/search?q={query}&lang={lang}&start={pageno}&stime=92452189
+ page_size: 20
+ results_xpath: //div[@id="site-list-content"]/div[@class="site-item"]
+ url_xpath: ./div[@class="title-and-desc"]/a/@href
+ title_xpath: ./div[@class="title-and-desc"]/a/div
+ content_xpath: ./div[@class="title-and-desc"]/div[@class="site-descr"]
+ about:
+ website: https://curlie.org/
+ wikidata_id: Q60715723
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: currency
+ engine: currency_convert
+ categories: general
+ shortcut: cc
+
+ - name: bahnhof
+ engine: json_engine
+ search_url: https://www.bahnhof.de/api/stations/search/{query}
+ url_prefix: https://www.bahnhof.de/
+ url_query: slug
+ title_query: name
+ content_query: state
+ shortcut: bf
+ disabled: true
+ about:
+ website: https://www.bahn.de
+ wikidata_id: Q22811603
+ use_official_api: false
+ require_api_key: false
+ results: JSON
+ language: de
+ tests:
+ bahnhof:
+ matrix:
+ query: berlin
+ lang: en
+ result_container:
+ - not_empty
+ - ['one_title_contains', 'Berlin Hauptbahnhof']
+ test:
+ - unique_results
+
+ - name: deezer
+ engine: deezer
+ shortcut: dz
+ disabled: true
+
+ - name: destatis
+ engine: destatis
+ shortcut: destat
+ disabled: true
+
+ - name: deviantart
+ engine: deviantart
+ shortcut: da
+ timeout: 3.0
+
+ - name: ddg definitions
+ engine: duckduckgo_definitions
+ shortcut: ddd
+ weight: 2
+ disabled: true
+ tests: *tests_infobox
+
+ # cloudflare protected
+ # - name: digbt
+ # engine: digbt
+ # shortcut: dbt
+ # timeout: 6.0
+ # disabled: true
+
+ - name: docker hub
+ engine: docker_hub
+ shortcut: dh
+ categories: [it, packages]
+
+ - name: encyclosearch
+ engine: json_engine
+ shortcut: es
+ categories: general
+ paging: true
+ search_url: https://encyclosearch.org/encyclosphere/search?q={query}&page={pageno}&resultsPerPage=15
+ results_query: Results
+ url_query: SourceURL
+ title_query: Title
+ content_query: Description
+ disabled: true
+ about:
+ website: https://encyclosearch.org
+ official_api_documentation: https://encyclosearch.org/docs/#/rest-api
+ use_official_api: true
+ require_api_key: false
+ results: JSON
+
+ - name: erowid
+ engine: xpath
+ paging: true
+ first_page_num: 0
+ page_size: 30
+ search_url: https://www.erowid.org/search.php?q={query}&s={pageno}
+ url_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/@href
+ title_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/text()
+ content_xpath: //dl[@class="results-list"]/dd[@class="result-details"]
+ categories: []
+ shortcut: ew
+ disabled: true
+ about:
+ website: https://www.erowid.org/
+ wikidata_id: Q1430691
+ official_api_documentation:
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ # - name: elasticsearch
+ # shortcut: es
+ # engine: elasticsearch
+ # base_url: http://localhost:9200
+ # username: elastic
+ # password: changeme
+ # index: my-index
+ # # available options: match, simple_query_string, term, terms, custom
+ # query_type: match
+ # # if query_type is set to custom, provide your query here
+ # #custom_query_json: {"query":{"match_all": {}}}
+ # #show_metadata: false
+ # disabled: true
+
+ - name: wikidata
+ engine: wikidata
+ shortcut: wd
+ timeout: 3.0
+ weight: 2
+ # add "list" to the array to get results in the results list
+ display_type: ["infobox"]
+ tests: *tests_infobox
+ categories: [general]
+
+ - name: duckduckgo
+ engine: duckduckgo
+ shortcut: ddg
+
+ - name: duckduckgo images
+ engine: duckduckgo_extra
+ categories: [images, web]
+ ddg_category: images
+ shortcut: ddi
+ disabled: true
+
+ - name: duckduckgo videos
+ engine: duckduckgo_extra
+ categories: [videos, web]
+ ddg_category: videos
+ shortcut: ddv
+ disabled: true
+
+ - name: duckduckgo news
+ engine: duckduckgo_extra
+ categories: [news, web]
+ ddg_category: news
+ shortcut: ddn
+ disabled: true
+
+ - name: duckduckgo weather
+ engine: duckduckgo_weather
+ shortcut: ddw
+ disabled: true
+
+ - name: apple maps
+ engine: apple_maps
+ shortcut: apm
+ disabled: true
+ timeout: 5.0
+
+ - name: emojipedia
+ engine: emojipedia
+ timeout: 4.0
+ shortcut: em
+ disabled: true
+
+ - name: tineye
+ engine: tineye
+ shortcut: tin
+ timeout: 9.0
+ disabled: true
+
+ - name: etymonline
+ engine: xpath
+ paging: true
+ search_url: https://etymonline.com/search?page={pageno}&q={query}
+ url_xpath: //a[contains(@class, "word__name--")]/@href
+ title_xpath: //a[contains(@class, "word__name--")]
+ content_xpath: //section[contains(@class, "word__defination")]
+ first_page_num: 1
+ shortcut: et
+ categories: [dictionaries]
+ about:
+ website: https://www.etymonline.com/
+ wikidata_id: Q1188617
+ official_api_documentation:
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ # - name: ebay
+ # engine: ebay
+ # shortcut: eb
+ # base_url: 'https://www.ebay.com'
+ # disabled: true
+ # timeout: 5
+
+ - name: 1x
+ engine: www1x
+ shortcut: 1x
+ timeout: 3.0
+ disabled: true
+
+ - name: fdroid
+ engine: fdroid
+ shortcut: fd
+ disabled: true
+
+ - name: findthatmeme
+ engine: findthatmeme
+ shortcut: ftm
+ disabled: true
+
+ - name: flickr
+ categories: images
+ shortcut: fl
+ # You can use the engine using the official stable API, but you need an API
+ # key, see: https://www.flickr.com/services/apps/create/
+ # engine: flickr
+ # api_key: 'apikey' # required!
+ # Or you can use the html non-stable engine, activated by default
+ engine: flickr_noapi
+
+ - name: free software directory
+ engine: mediawiki
+ shortcut: fsd
+ categories: [it, software wikis]
+ base_url: https://directory.fsf.org/
+ search_type: title
+ timeout: 5.0
+ disabled: true
+ about:
+ website: https://directory.fsf.org/
+ wikidata_id: Q2470288
+
+ # - name: freesound
+ # engine: freesound
+ # shortcut: fnd
+ # disabled: true
+ # timeout: 15.0
+ # API key required, see: https://freesound.org/docs/api/overview.html
+ # api_key: MyAPIkey
+
+ - name: frinkiac
+ engine: frinkiac
+ shortcut: frk
+ disabled: true
+
+ - name: fyyd
+ engine: fyyd
+ shortcut: fy
+ timeout: 8.0
+ disabled: true
+
+ - name: geizhals
+ engine: geizhals
+ shortcut: geiz
+ disabled: true
+
+ - name: genius
+ engine: genius
+ shortcut: gen
+
+ - name: gentoo
+ engine: mediawiki
+ shortcut: ge
+ categories: ["it", "software wikis"]
+ base_url: "https://wiki.gentoo.org/"
+ api_path: "api.php"
+ search_type: text
+ timeout: 10
+
+ - name: gitlab
+ engine: json_engine
+ paging: true
+ search_url: https://gitlab.com/api/v4/projects?search={query}&page={pageno}
+ url_query: web_url
+ title_query: name_with_namespace
+ content_query: description
+ page_size: 20
+ categories: [it, repos]
+ shortcut: gl
+ timeout: 10.0
+ disabled: true
+ about:
+ website: https://about.gitlab.com/
+ wikidata_id: Q16639197
+ official_api_documentation: https://docs.gitlab.com/ee/api/
+ use_official_api: false
+ require_api_key: false
+ results: JSON
+
+ - name: github
+ engine: github
+ shortcut: gh
+
+ - name: codeberg
+ # https://docs.searxng.org/dev/engines/online/gitea.html
+ engine: gitea
+ base_url: https://codeberg.org
+ shortcut: cb
+ disabled: true
+
+ - name: gitea.com
+ engine: gitea
+ base_url: https://gitea.com
+ shortcut: gitea
+ disabled: true
+
+ - name: goodreads
+ engine: goodreads
+ shortcut: good
+ timeout: 4.0
+ disabled: true
+
+ - name: google
+ engine: google
+ shortcut: go
+ # additional_tests:
+ # android: *test_android
+
+ - name: google images
+ engine: google_images
+ shortcut: goi
+ # additional_tests:
+ # android: *test_android
+ # dali:
+ # matrix:
+ # query: ['Dali Christ']
+ # lang: ['en', 'de', 'fr', 'zh-CN']
+ # result_container:
+ # - ['one_title_contains', 'Salvador']
+
+ - name: google news
+ engine: google_news
+ shortcut: gon
+ # additional_tests:
+ # android: *test_android
+
+ - name: google videos
+ engine: google_videos
+ shortcut: gov
+ # additional_tests:
+ # android: *test_android
+
+ - name: google scholar
+ engine: google_scholar
+ shortcut: gos
+
+ - name: google play apps
+ engine: google_play
+ categories: [files, apps]
+ shortcut: gpa
+ play_categ: apps
+ disabled: true
+
+ - name: google play movies
+ engine: google_play
+ categories: videos
+ shortcut: gpm
+ play_categ: movies
+ disabled: true
+
+ - name: material icons
+ engine: material_icons
+ categories: images
+ shortcut: mi
+ disabled: true
+
+ - name: gpodder
+ engine: json_engine
+ shortcut: gpod
+ timeout: 4.0
+ paging: false
+ search_url: https://gpodder.net/search.json?q={query}
+ url_query: url
+ title_query: title
+ content_query: description
+ page_size: 19
+ categories: music
+ disabled: true
+ about:
+ website: https://gpodder.net
+ wikidata_id: Q3093354
+ official_api_documentation: https://gpoddernet.readthedocs.io/en/latest/api/
+ use_official_api: false
+ requires_api_key: false
+ results: JSON
+
+ - name: habrahabr
+ engine: xpath
+ paging: true
+ search_url: https://habr.com/en/search/page{pageno}/?q={query}
+ results_xpath: //article[contains(@class, "tm-articles-list__item")]
+ url_xpath: .//a[@class="tm-title__link"]/@href
+ title_xpath: .//a[@class="tm-title__link"]
+ content_xpath: .//div[contains(@class, "article-formatted-body")]
+ categories: it
+ timeout: 4.0
+ disabled: true
+ shortcut: habr
+ about:
+ website: https://habr.com/
+ wikidata_id: Q4494434
+ official_api_documentation: https://habr.com/en/docs/help/api/
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: hackernews
+ engine: hackernews
+ shortcut: hn
+ disabled: true
+
+ - name: hex
+ engine: hex
+ shortcut: hex
+ disabled: true
+ # Valid values: name inserted_at updated_at total_downloads recent_downloads
+ sort_criteria: "recent_downloads"
+ page_size: 10
+
+ - name: crates.io
+ engine: crates
+ shortcut: crates
+ disabled: true
+ timeout: 6.0
+
+ - name: hoogle
+ engine: xpath
+ search_url: https://hoogle.haskell.org/?hoogle={query}
+ results_xpath: '//div[@class="result"]'
+ title_xpath: './/div[@class="ans"]//a'
+ url_xpath: './/div[@class="ans"]//a/@href'
+ content_xpath: './/div[@class="from"]'
+ page_size: 20
+ categories: [it, packages]
+ shortcut: ho
+ about:
+ website: https://hoogle.haskell.org/
+ wikidata_id: Q34010
+ official_api_documentation: https://hackage.haskell.org/api
+ use_official_api: false
+ require_api_key: false
+ results: JSON
+
+ - name: imdb
+ engine: imdb
+ shortcut: imdb
+ timeout: 6.0
+ disabled: true
+
+ - name: imgur
+ engine: imgur
+ shortcut: img
+ disabled: true
+
+ - name: ina
+ engine: ina
+ shortcut: in
+ timeout: 6.0
+ disabled: true
+
+ - name: invidious
+ engine: invidious
+    # Instances will be selected randomly, see https://api.invidious.io/ for
+ # instances that are stable (good uptime) and close to you.
+ base_url:
+ - https://invidious.io.lol
+ - https://invidious.fdn.fr
+ - https://yt.artemislena.eu
+ - https://invidious.tiekoetter.com
+ - https://invidious.flokinet.to
+ - https://vid.puffyan.us
+ - https://invidious.privacydev.net
+ - https://inv.tux.pizza
+ shortcut: iv
+ timeout: 3.0
+ disabled: true
+
+ - name: jisho
+ engine: jisho
+ shortcut: js
+ timeout: 3.0
+ disabled: true
+
+ - name: kickass
+ engine: kickass
+ base_url:
+ - https://kickasstorrents.to
+ - https://kickasstorrents.cr
+ - https://kickasstorrent.cr
+ - https://kickass.sx
+ - https://kat.am
+ shortcut: kc
+ timeout: 4.0
+ disabled: true
+
+ - name: lemmy communities
+ engine: lemmy
+ lemmy_type: Communities
+ shortcut: leco
+
+ - name: lemmy users
+ engine: lemmy
+ network: lemmy communities
+ lemmy_type: Users
+ shortcut: leus
+
+ - name: lemmy posts
+ engine: lemmy
+ network: lemmy communities
+ lemmy_type: Posts
+ shortcut: lepo
+
+ - name: lemmy comments
+ engine: lemmy
+ network: lemmy communities
+ lemmy_type: Comments
+ shortcut: lecom
+
+ - name: library genesis
+ engine: xpath
+ # search_url: https://libgen.is/search.php?req={query}
+ search_url: https://libgen.rs/search.php?req={query}
+ url_xpath: //a[contains(@href,"book/index.php?md5")]/@href
+ title_xpath: //a[contains(@href,"book/")]/text()[1]
+ content_xpath: //td/a[1][contains(@href,"=author")]/text()
+ categories: files
+ timeout: 7.0
+ disabled: true
+ shortcut: lg
+ about:
+ website: https://libgen.fun/
+ wikidata_id: Q22017206
+ official_api_documentation:
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: z-library
+ engine: zlibrary
+ shortcut: zlib
+ categories: files
+ timeout: 7.0
+ disabled: true
+
+ - name: library of congress
+ engine: loc
+ shortcut: loc
+ categories: images
+
+ - name: libretranslate
+ engine: libretranslate
+ # https://github.com/LibreTranslate/LibreTranslate?tab=readme-ov-file#mirrors
+ base_url:
+ - https://translate.terraprint.co
+ - https://trans.zillyhuhn.com
+ # api_key: abc123
+ shortcut: lt
+ disabled: true
+
+ - name: lingva
+ engine: lingva
+ shortcut: lv
+ # set lingva instance in url, by default it will use the official instance
+ # url: https://lingva.thedaviddelta.com
+
+ - name: lobste.rs
+ engine: xpath
+ search_url: https://lobste.rs/search?q={query}&what=stories&order=relevance
+ results_xpath: //li[contains(@class, "story")]
+ url_xpath: .//a[@class="u-url"]/@href
+ title_xpath: .//a[@class="u-url"]
+ content_xpath: .//a[@class="domain"]
+ categories: it
+ shortcut: lo
+ timeout: 5.0
+ disabled: true
+ about:
+ website: https://lobste.rs/
+ wikidata_id: Q60762874
+ official_api_documentation:
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: mastodon users
+ engine: mastodon
+ mastodon_type: accounts
+ base_url: https://mastodon.social
+ shortcut: mau
+
+ - name: mastodon hashtags
+ engine: mastodon
+ mastodon_type: hashtags
+ base_url: https://mastodon.social
+ shortcut: mah
+
+ # - name: matrixrooms
+ # engine: mrs
+ # # https://docs.searxng.org/dev/engines/online/mrs.html
+ # # base_url: https://mrs-api-host
+ # shortcut: mtrx
+ # disabled: true
+
+ - name: mdn
+ shortcut: mdn
+ engine: json_engine
+ categories: [it]
+ paging: true
+ search_url: https://developer.mozilla.org/api/v1/search?q={query}&page={pageno}
+ results_query: documents
+ url_query: mdn_url
+ url_prefix: https://developer.mozilla.org
+ title_query: title
+ content_query: summary
+ about:
+ website: https://developer.mozilla.org
+ wikidata_id: Q3273508
+ official_api_documentation: null
+ use_official_api: false
+ require_api_key: false
+ results: JSON
+
+ - name: metacpan
+ engine: metacpan
+ shortcut: cpan
+ disabled: true
+ number_of_results: 20
+
+ # - name: meilisearch
+ # engine: meilisearch
+ # shortcut: mes
+ # enable_http: true
+ # base_url: http://localhost:7700
+ # index: my-index
+
+ - name: mixcloud
+ engine: mixcloud
+ shortcut: mc
+
+ # MongoDB engine
+ # Required dependency: pymongo
+ # - name: mymongo
+ # engine: mongodb
+ # shortcut: md
+ # exact_match_only: false
+ # host: '127.0.0.1'
+ # port: 27017
+ # enable_http: true
+ # results_per_page: 20
+ # database: 'business'
+ # collection: 'reviews' # name of the db collection
+ # key: 'name' # key in the collection to search for
+
+ - name: mozhi
+ engine: mozhi
+ base_url:
+ - https://mozhi.aryak.me
+ - https://translate.bus-hit.me
+ - https://nyc1.mz.ggtyler.dev
+ # mozhi_engine: google - see https://mozhi.aryak.me for supported engines
+ timeout: 4.0
+ shortcut: mz
+ disabled: true
+
+ - name: mwmbl
+ engine: mwmbl
+ # api_url: https://api.mwmbl.org
+ shortcut: mwm
+ disabled: true
+
+ - name: npm
+ engine: npm
+ shortcut: npm
+ timeout: 5.0
+ disabled: true
+
+ - name: nyaa
+ engine: nyaa
+ shortcut: nt
+ disabled: true
+
+ - name: mankier
+ engine: json_engine
+ search_url: https://www.mankier.com/api/v2/mans/?q={query}
+ results_query: results
+ url_query: url
+ title_query: name
+ content_query: description
+ categories: it
+ shortcut: man
+ about:
+ website: https://www.mankier.com/
+ official_api_documentation: https://www.mankier.com/api
+ use_official_api: true
+ require_api_key: false
+ results: JSON
+
+ # read https://docs.searxng.org/dev/engines/online/mullvad_leta.html
+ # - name: mullvadleta
+ # engine: mullvad_leta
+ # leta_engine: google # choose one of the following: google, brave
+ # use_cache: true # Only 100 non-cache searches per day, suggested only for private instances
+ # search_url: https://leta.mullvad.net
+ # categories: [general, web]
+ # shortcut: ml
+
+ - name: odysee
+ engine: odysee
+ shortcut: od
+ disabled: true
+
+ - name: openairedatasets
+ engine: json_engine
+ paging: true
+ search_url: https://api.openaire.eu/search/datasets?format=json&page={pageno}&size=10&title={query}
+ results_query: response/results/result
+ url_query: metadata/oaf:entity/oaf:result/children/instance/webresource/url/$
+ title_query: metadata/oaf:entity/oaf:result/title/$
+ content_query: metadata/oaf:entity/oaf:result/description/$
+ content_html_to_text: true
+ categories: "science"
+ shortcut: oad
+ timeout: 5.0
+ about:
+ website: https://www.openaire.eu/
+ wikidata_id: Q25106053
+ official_api_documentation: https://api.openaire.eu/
+ use_official_api: false
+ require_api_key: false
+ results: JSON
+
+ - name: openairepublications
+ engine: json_engine
+ paging: true
+ search_url: https://api.openaire.eu/search/publications?format=json&page={pageno}&size=10&title={query}
+ results_query: response/results/result
+ url_query: metadata/oaf:entity/oaf:result/children/instance/webresource/url/$
+ title_query: metadata/oaf:entity/oaf:result/title/$
+ content_query: metadata/oaf:entity/oaf:result/description/$
+ content_html_to_text: true
+ categories: science
+ shortcut: oap
+ timeout: 5.0
+ about:
+ website: https://www.openaire.eu/
+ wikidata_id: Q25106053
+ official_api_documentation: https://api.openaire.eu/
+ use_official_api: false
+ require_api_key: false
+ results: JSON
+
+ - name: openmeteo
+ engine: open_meteo
+ shortcut: om
+ disabled: true
+
+ # - name: opensemanticsearch
+ # engine: opensemantic
+ # shortcut: oss
+ # base_url: 'http://localhost:8983/solr/opensemanticsearch/'
+
+ - name: openstreetmap
+ engine: openstreetmap
+ shortcut: osm
+
+ - name: openrepos
+ engine: xpath
+ paging: true
+ search_url: https://openrepos.net/search/node/{query}?page={pageno}
+ url_xpath: //li[@class="search-result"]//h3[@class="title"]/a/@href
+ title_xpath: //li[@class="search-result"]//h3[@class="title"]/a
+ content_xpath: //li[@class="search-result"]//div[@class="search-snippet-info"]//p[@class="search-snippet"]
+ categories: files
+ timeout: 4.0
+ disabled: true
+ shortcut: or
+ about:
+ website: https://openrepos.net/
+ wikidata_id:
+ official_api_documentation:
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: packagist
+ engine: json_engine
+ paging: true
+ search_url: https://packagist.org/search.json?q={query}&page={pageno}
+ results_query: results
+ url_query: url
+ title_query: name
+ content_query: description
+ categories: [it, packages]
+ disabled: true
+ timeout: 5.0
+ shortcut: pack
+ about:
+ website: https://packagist.org
+ wikidata_id: Q108311377
+ official_api_documentation: https://packagist.org/apidoc
+ use_official_api: true
+ require_api_key: false
+ results: JSON
+
+ - name: pdbe
+ engine: pdbe
+ shortcut: pdb
+ # Hide obsolete PDB entries. Default is not to hide obsolete structures
+ # hide_obsolete: false
+
+ - name: photon
+ engine: photon
+ shortcut: ph
+
+ - name: pinterest
+ engine: pinterest
+ shortcut: pin
+
+ - name: piped
+ engine: piped
+ shortcut: ppd
+ categories: videos
+ piped_filter: videos
+ timeout: 3.0
+
+ # URL to use as link and for embeds
+ frontend_url: https://srv.piped.video
+    # An instance will be selected randomly; for more see https://piped-instances.kavin.rocks/
+ backend_url:
+ - https://pipedapi.kavin.rocks
+ - https://pipedapi-libre.kavin.rocks
+ - https://pipedapi.adminforge.de
+
+ - name: piped.music
+ engine: piped
+ network: piped
+ shortcut: ppdm
+ categories: music
+ piped_filter: music_songs
+ timeout: 3.0
+
+ - name: piratebay
+ engine: piratebay
+ shortcut: tpb
+ # You may need to change this URL to a proxy if piratebay is blocked in your
+ # country
+ url: https://thepiratebay.org/
+ timeout: 3.0
+
+ - name: pixiv
+ shortcut: pv
+ engine: pixiv
+ disabled: true
+ inactive: true
+ pixiv_image_proxies:
+ - https://pximg.example.org
+ # A proxy is required to load the images. Hosting an image proxy server
+ # for Pixiv:
+ # --> https://pixivfe.pages.dev/hosting-image-proxy-server/
+    # Proxies from public instances. Ask the owners of the public instances
+    # whether they agree to receive traffic from SearXNG!
+ # --> https://codeberg.org/VnPower/PixivFE#instances
+ # --> https://github.com/searxng/searxng/pull/3192#issuecomment-1941095047
+ # image proxy of https://pixiv.cat
+ # - https://i.pixiv.cat
+ # image proxy of https://www.pixiv.pics
+ # - https://pximg.cocomi.eu.org
+ # image proxy of https://pixivfe.exozy.me
+ # - https://pximg.exozy.me
+ # image proxy of https://pixivfe.ducks.party
+ # - https://pixiv.ducks.party
+ # image proxy of https://pixiv.perennialte.ch
+ # - https://pximg.perennialte.ch
+
+ - name: podcastindex
+ engine: podcastindex
+ shortcut: podcast
+
+  # Required dependency: psycopg2
+ # - name: postgresql
+ # engine: postgresql
+ # database: postgres
+ # username: postgres
+ # password: postgres
+ # limit: 10
+ # query_str: 'SELECT * from my_table WHERE my_column = %(query)s'
+ # shortcut : psql
+
+ - name: presearch
+ engine: presearch
+ search_type: search
+ categories: [general, web]
+ shortcut: ps
+ timeout: 4.0
+ disabled: true
+
+ - name: presearch images
+ engine: presearch
+ network: presearch
+ search_type: images
+ categories: [images, web]
+ timeout: 4.0
+ shortcut: psimg
+ disabled: true
+
+ - name: presearch videos
+ engine: presearch
+ network: presearch
+ search_type: videos
+ categories: [general, web]
+ timeout: 4.0
+ shortcut: psvid
+ disabled: true
+
+ - name: presearch news
+ engine: presearch
+ network: presearch
+ search_type: news
+ categories: [news, web]
+ timeout: 4.0
+ shortcut: psnews
+ disabled: true
+
+ - name: pub.dev
+ engine: xpath
+ shortcut: pd
+ search_url: https://pub.dev/packages?q={query}&page={pageno}
+ paging: true
+ results_xpath: //div[contains(@class,"packages-item")]
+ url_xpath: ./div/h3/a/@href
+ title_xpath: ./div/h3/a
+ content_xpath: ./div/div/div[contains(@class,"packages-description")]/span
+ categories: [packages, it]
+ timeout: 3.0
+ disabled: true
+ first_page_num: 1
+ about:
+ website: https://pub.dev/
+ official_api_documentation: https://pub.dev/help/api
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: pubmed
+ engine: pubmed
+ shortcut: pub
+ timeout: 3.0
+
+ - name: pypi
+ shortcut: pypi
+ engine: pypi
+
+ - name: qwant
+ qwant_categ: web
+ engine: qwant
+ disabled: true
+ shortcut: qw
+ categories: [general, web]
+ additional_tests:
+ rosebud: *test_rosebud
+
+ - name: qwant news
+ qwant_categ: news
+ engine: qwant
+ shortcut: qwn
+ categories: news
+ network: qwant
+
+ - name: qwant images
+ qwant_categ: images
+ engine: qwant
+ shortcut: qwi
+ categories: [images, web]
+ network: qwant
+
+ - name: qwant videos
+ qwant_categ: videos
+ engine: qwant
+ shortcut: qwv
+ categories: [videos, web]
+ network: qwant
+
+ # - name: library
+ # engine: recoll
+ # shortcut: lib
+ # base_url: 'https://recoll.example.org/'
+ # search_dir: ''
+ # mount_prefix: /export
+ # dl_prefix: 'https://download.example.org'
+ # timeout: 30.0
+ # categories: files
+ # disabled: true
+
+ # - name: recoll library reference
+ # engine: recoll
+ # base_url: 'https://recoll.example.org/'
+ # search_dir: reference
+ # mount_prefix: /export
+ # dl_prefix: 'https://download.example.org'
+ # shortcut: libr
+ # timeout: 30.0
+ # categories: files
+ # disabled: true
+
+ - name: radio browser
+ engine: radio_browser
+ shortcut: rb
+
+ - name: reddit
+ engine: reddit
+ shortcut: re
+ page_size: 25
+ disabled: true
+
+ - name: rottentomatoes
+ engine: rottentomatoes
+ shortcut: rt
+ disabled: true
+
+ # Required dependency: redis
+ # - name: myredis
+ # shortcut : rds
+ # engine: redis_server
+ # exact_match_only: false
+ # host: '127.0.0.1'
+ # port: 6379
+ # enable_http: true
+ # password: ''
+ # db: 0
+
+ # tmp suspended: bad certificate
+ # - name: scanr structures
+ # shortcut: scs
+ # engine: scanr_structures
+ # disabled: true
+
+ - name: searchmysite
+ engine: xpath
+ shortcut: sms
+ categories: general
+ paging: true
+ search_url: https://searchmysite.net/search/?q={query}&page={pageno}
+ results_xpath: //div[contains(@class,'search-result')]
+ url_xpath: .//a[contains(@class,'result-link')]/@href
+ title_xpath: .//span[contains(@class,'result-title-txt')]/text()
+ content_xpath: ./p[@id='result-hightlight']
+ disabled: true
+ about:
+ website: https://searchmysite.net
+
+ - name: sepiasearch
+ engine: sepiasearch
+ shortcut: sep
+
+ - name: soundcloud
+ engine: soundcloud
+ shortcut: sc
+
+ - name: stackoverflow
+ engine: stackexchange
+ shortcut: st
+ api_site: 'stackoverflow'
+ categories: [it, q&a]
+
+ - name: askubuntu
+ engine: stackexchange
+ shortcut: ubuntu
+ api_site: 'askubuntu'
+ categories: [it, q&a]
+
+ - name: internetarchivescholar
+ engine: internet_archive_scholar
+ shortcut: ias
+ timeout: 15.0
+
+ - name: superuser
+ engine: stackexchange
+ shortcut: su
+ api_site: 'superuser'
+ categories: [it, q&a]
+
+ - name: discuss.python
+ engine: discourse
+ shortcut: dpy
+ base_url: 'https://discuss.python.org'
+ categories: [it, q&a]
+ disabled: true
+
+ - name: caddy.community
+ engine: discourse
+ shortcut: caddy
+ base_url: 'https://caddy.community'
+ categories: [it, q&a]
+ disabled: true
+
+ - name: pi-hole.community
+ engine: discourse
+ shortcut: pi
+ categories: [it, q&a]
+ base_url: 'https://discourse.pi-hole.net'
+ disabled: true
+
+ - name: searchcode code
+ engine: searchcode_code
+ shortcut: scc
+ disabled: true
+
+ # - name: searx
+ # engine: searx_engine
+ # shortcut: se
+ # instance_urls :
+ # - http://127.0.0.1:8888/
+ # - ...
+ # disabled: true
+
+ - name: semantic scholar
+ engine: semantic_scholar
+ disabled: true
+ shortcut: se
+
+ # Spotify needs API credentials
+ # - name: spotify
+ # engine: spotify
+ # shortcut: stf
+ # api_client_id: *******
+ # api_client_secret: *******
+
+ # - name: solr
+ # engine: solr
+ # shortcut: slr
+ # base_url: http://localhost:8983
+ # collection: collection_name
+ # sort: '' # sorting: asc or desc
+ # field_list: '' # comma separated list of field names to display on the UI
+ # default_fields: '' # default field to query
+ # query_fields: '' # query fields
+ # enable_http: true
+
+ # - name: springer nature
+ # engine: springer
+ # # get your API key from: https://dev.springernature.com/signup
+ # # working API key, for test & debug: "a69685087d07eca9f13db62f65b8f601"
+ # api_key: 'unset'
+ # shortcut: springer
+ # timeout: 15.0
+
+ - name: startpage
+ engine: startpage
+ shortcut: sp
+ timeout: 6.0
+ disabled: true
+ additional_tests:
+ rosebud: *test_rosebud
+
+ - name: tokyotoshokan
+ engine: tokyotoshokan
+ shortcut: tt
+ timeout: 6.0
+ disabled: true
+
+ - name: solidtorrents
+ engine: solidtorrents
+ shortcut: solid
+ timeout: 4.0
+ base_url:
+ - https://solidtorrents.to
+ - https://bitsearch.to
+
+ # For this demo of the sqlite engine download:
+ # https://liste.mediathekview.de/filmliste-v2.db.bz2
+ # and unpack into searx/data/filmliste-v2.db
+ # Query to test: "!demo concert"
+ #
+ # - name: demo
+ # engine: sqlite
+ # shortcut: demo
+ # categories: general
+ # result_template: default.html
+ # database: searx/data/filmliste-v2.db
+ # query_str: >-
+ # SELECT title || ' (' || time(duration, 'unixepoch') || ')' AS title,
+ # COALESCE( NULLIF(url_video_hd,''), NULLIF(url_video_sd,''), url_video) AS url,
+ # description AS content
+ # FROM film
+ # WHERE title LIKE :wildcard OR description LIKE :wildcard
+ # ORDER BY duration DESC
+
+ - name: tagesschau
+ engine: tagesschau
+ # when set to false, display URLs from Tagesschau, and not the actual source
+ # (e.g. NDR, WDR, SWR, HR, ...)
+ use_source_url: true
+ shortcut: ts
+ disabled: true
+
+ - name: tmdb
+ engine: xpath
+ paging: true
+ categories: movies
+ search_url: https://www.themoviedb.org/search?page={pageno}&query={query}
+ results_xpath: //div[contains(@class,"movie") or contains(@class,"tv")]//div[contains(@class,"card")]
+ url_xpath: .//div[contains(@class,"poster")]/a/@href
+ thumbnail_xpath: .//img/@src
+ title_xpath: .//div[contains(@class,"title")]//h2
+ content_xpath: .//div[contains(@class,"overview")]
+ shortcut: tm
+ disabled: true
+
+ # Requires Tor
+ - name: torch
+ engine: xpath
+ paging: true
+ search_url:
+ http://xmh57jrknzkhv6y3ls3ubitzfqnkrwxhopf5aygthi7d6rplyvk3noyd.onion/cgi-bin/omega/omega?P={query}&DEFAULTOP=and
+ results_xpath: //table//tr
+ url_xpath: ./td[2]/a
+ title_xpath: ./td[2]/b
+ content_xpath: ./td[2]/small
+ categories: onions
+ enable_http: true
+ shortcut: tch
+
+  # The torznab engine lets you query any torznab-compatible indexer. Using this
+  # engine in combination with Jackett makes it possible to query many
+  # public and private indexers directly from SearXNG. More details at:
+ # https://docs.searxng.org/dev/engines/online/torznab.html
+ #
+ # - name: Torznab EZTV
+ # engine: torznab
+ # shortcut: eztv
+ # base_url: http://localhost:9117/api/v2.0/indexers/eztv/results/torznab
+ # enable_http: true # if using localhost
+ # api_key: xxxxxxxxxxxxxxx
+ # show_magnet_links: true
+ # show_torrent_files: false
+ # # https://github.com/Jackett/Jackett/wiki/Jackett-Categories
+ # torznab_categories: # optional
+ # - 2000
+ # - 5000
+
+ # tmp suspended - too slow, too many errors
+ # - name: urbandictionary
+ # engine : xpath
+ # search_url : https://www.urbandictionary.com/define.php?term={query}
+ # url_xpath : //*[@class="word"]/@href
+ # title_xpath : //*[@class="def-header"]
+ # content_xpath: //*[@class="meaning"]
+ # shortcut: ud
+
+ - name: unsplash
+ engine: unsplash
+ shortcut: us
+
+ - name: yandex music
+ engine: yandex_music
+ shortcut: ydm
+ disabled: true
+ # https://yandex.com/support/music/access.html
+ inactive: true
+
+ - name: yahoo
+ engine: yahoo
+ shortcut: yh
+ disabled: true
+
+ - name: yahoo news
+ engine: yahoo_news
+ shortcut: yhn
+
+ - name: youtube
+ shortcut: yt
+    # You can use this engine with the official stable API, but you need an
+    # API key. See: https://console.developers.google.com/project
+ #
+ # engine: youtube_api
+ # api_key: 'apikey' # required!
+ #
+    # Or you can use the non-stable HTML engine, which is activated by default
+ engine: youtube_noapi
+
+ - name: dailymotion
+ engine: dailymotion
+ shortcut: dm
+
+ - name: vimeo
+ engine: vimeo
+ shortcut: vm
+ disabled: true
+
+ - name: wiby
+ engine: json_engine
+ paging: true
+ search_url: https://wiby.me/json/?q={query}&p={pageno}
+ url_query: URL
+ title_query: Title
+ content_query: Snippet
+ categories: [general, web]
+ shortcut: wib
+ disabled: true
+ about:
+ website: https://wiby.me/
+
+ - name: alexandria
+ engine: json_engine
+ shortcut: alx
+ categories: general
+ paging: true
+ search_url: https://api.alexandria.org/?a=1&q={query}&p={pageno}
+ results_query: results
+ title_query: title
+ url_query: url
+ content_query: snippet
+ timeout: 1.5
+ disabled: true
+ about:
+ website: https://alexandria.org/
+ official_api_documentation: https://github.com/alexandria-org/alexandria-api/raw/master/README.md
+ use_official_api: true
+ require_api_key: false
+ results: JSON
+
+ - name: wikibooks
+ engine: mediawiki
+ weight: 0.5
+ shortcut: wb
+ categories: [general, wikimedia]
+ base_url: "https://{language}.wikibooks.org/"
+ search_type: text
+ disabled: true
+ about:
+ website: https://www.wikibooks.org/
+ wikidata_id: Q367
+
+ - name: wikinews
+ engine: mediawiki
+ shortcut: wn
+ categories: [news, wikimedia]
+ base_url: "https://{language}.wikinews.org/"
+ search_type: text
+ srsort: create_timestamp_desc
+ about:
+ website: https://www.wikinews.org/
+ wikidata_id: Q964
+
+ - name: wikiquote
+ engine: mediawiki
+ weight: 0.5
+ shortcut: wq
+ categories: [general, wikimedia]
+ base_url: "https://{language}.wikiquote.org/"
+ search_type: text
+ disabled: true
+ additional_tests:
+ rosebud: *test_rosebud
+ about:
+ website: https://www.wikiquote.org/
+ wikidata_id: Q369
+
+ - name: wikisource
+ engine: mediawiki
+ weight: 0.5
+ shortcut: ws
+ categories: [general, wikimedia]
+ base_url: "https://{language}.wikisource.org/"
+ search_type: text
+ disabled: true
+ about:
+ website: https://www.wikisource.org/
+ wikidata_id: Q263
+
+ - name: wikispecies
+ engine: mediawiki
+ shortcut: wsp
+ categories: [general, science, wikimedia]
+ base_url: "https://species.wikimedia.org/"
+ search_type: text
+ disabled: true
+ about:
+ website: https://species.wikimedia.org/
+ wikidata_id: Q13679
+ tests:
+ wikispecies:
+ matrix:
+ query: "Campbell, L.I. et al. 2011: MicroRNAs"
+ lang: en
+ result_container:
+ - not_empty
+ - ['one_title_contains', 'Tardigrada']
+ test:
+ - unique_results
+
+ - name: wiktionary
+ engine: mediawiki
+ shortcut: wt
+ categories: [dictionaries, wikimedia]
+ base_url: "https://{language}.wiktionary.org/"
+ search_type: text
+ about:
+ website: https://www.wiktionary.org/
+ wikidata_id: Q151
+
+ - name: wikiversity
+ engine: mediawiki
+ weight: 0.5
+ shortcut: wv
+ categories: [general, wikimedia]
+ base_url: "https://{language}.wikiversity.org/"
+ search_type: text
+ disabled: true
+ about:
+ website: https://www.wikiversity.org/
+ wikidata_id: Q370
+
+ - name: wikivoyage
+ engine: mediawiki
+ weight: 0.5
+ shortcut: wy
+ categories: [general, wikimedia]
+ base_url: "https://{language}.wikivoyage.org/"
+ search_type: text
+ disabled: true
+ about:
+ website: https://www.wikivoyage.org/
+ wikidata_id: Q373
+
+ - name: wikicommons.images
+ engine: wikicommons
+ shortcut: wc
+ categories: images
+ search_type: images
+ number_of_results: 10
+
+ - name: wikicommons.videos
+ engine: wikicommons
+ shortcut: wcv
+ categories: videos
+ search_type: videos
+ number_of_results: 10
+
+ - name: wikicommons.audio
+ engine: wikicommons
+ shortcut: wca
+ categories: music
+ search_type: audio
+ number_of_results: 10
+
+ - name: wikicommons.files
+ engine: wikicommons
+ shortcut: wcf
+ categories: files
+ search_type: files
+ number_of_results: 10
+
+ - name: wolframalpha
+ shortcut: wa
+    # You can use this engine with the official stable API, but you need an
+    # API key. See: https://products.wolframalpha.com/api/
+ #
+ # engine: wolframalpha_api
+ # api_key: ''
+ #
+    # Or you can use the non-stable HTML engine, which is activated by default
+ engine: wolframalpha_noapi
+ timeout: 6.0
+ categories: general
+ disabled: true
+
+ - name: dictzone
+ engine: dictzone
+ shortcut: dc
+
+ - name: mymemory translated
+ engine: translated
+ shortcut: tl
+ timeout: 5.0
+    # You can use it without an API key, but you are limited to 1000 words/day
+ # See: https://mymemory.translated.net/doc/usagelimits.php
+ # api_key: ''
+
+ # Required dependency: mysql-connector-python
+ # - name: mysql
+ # engine: mysql_server
+ # database: mydatabase
+ # username: user
+ # password: pass
+ # limit: 10
+ # query_str: 'SELECT * from mytable WHERE fieldname=%(query)s'
+ # shortcut: mysql
+
+ - name: 1337x
+ engine: 1337x
+ shortcut: 1337x
+ disabled: true
+
+ - name: duden
+ engine: duden
+ shortcut: du
+ disabled: true
+
+ - name: seznam
+ shortcut: szn
+ engine: seznam
+ disabled: true
+
+ # - name: deepl
+ # engine: deepl
+ # shortcut: dpl
+  #   # You can use this engine with the official stable API, but you need an API key
+ # # See: https://www.deepl.com/pro-api?cta=header-pro-api
+ # api_key: '' # required!
+ # timeout: 5.0
+ # disabled: true
+
+ - name: mojeek
+ shortcut: mjk
+ engine: mojeek
+ categories: [general, web]
+ disabled: true
+
+ - name: mojeek images
+ shortcut: mjkimg
+ engine: mojeek
+ categories: [images, web]
+ search_type: images
+ paging: false
+ disabled: true
+
+ - name: mojeek news
+ shortcut: mjknews
+ engine: mojeek
+ categories: [news, web]
+ search_type: news
+ paging: false
+ disabled: true
+
+ - name: moviepilot
+ engine: moviepilot
+ shortcut: mp
+ disabled: true
+
+ - name: naver
+ shortcut: nvr
+ categories: [general, web]
+ engine: xpath
+ paging: true
+ search_url: https://search.naver.com/search.naver?where=webkr&sm=osp_hty&ie=UTF-8&query={query}&start={pageno}
+ url_xpath: //a[@class="link_tit"]/@href
+ title_xpath: //a[@class="link_tit"]
+ content_xpath: //div[@class="total_dsc_wrap"]/a
+ first_page_num: 1
+ page_size: 10
+ disabled: true
+ about:
+ website: https://www.naver.com/
+ wikidata_id: Q485639
+ official_api_documentation: https://developers.naver.com/docs/nmt/examples/
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+ language: ko
+
+ - name: rubygems
+ shortcut: rbg
+ engine: xpath
+ paging: true
+ search_url: https://rubygems.org/search?page={pageno}&query={query}
+ results_xpath: /html/body/main/div/a[@class="gems__gem"]
+ url_xpath: ./@href
+ title_xpath: ./span/h2
+ content_xpath: ./span/p
+ suggestion_xpath: /html/body/main/div/div[@class="search__suggestions"]/p/a
+ first_page_num: 1
+ categories: [it, packages]
+ disabled: true
+ about:
+ website: https://rubygems.org/
+ wikidata_id: Q1853420
+ official_api_documentation: https://guides.rubygems.org/rubygems-org-api/
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: peertube
+ engine: peertube
+ shortcut: ptb
+ paging: true
+ # alternatives see: https://instances.joinpeertube.org/instances
+ # base_url: https://tube.4aem.com
+ categories: videos
+ disabled: true
+ timeout: 6.0
+
+ - name: mediathekviewweb
+ engine: mediathekviewweb
+ shortcut: mvw
+ disabled: true
+
+ - name: yacy
+ # https://docs.searxng.org/dev/engines/online/yacy.html
+ engine: yacy
+ categories: general
+ search_type: text
+ base_url:
+ - https://yacy.searchlab.eu
+ # see https://github.com/searxng/searxng/pull/3631#issuecomment-2240903027
+ # - https://search.kyun.li
+ # - https://yacy.securecomcorp.eu
+ # - https://yacy.myserv.ca
+ # - https://yacy.nsupdate.info
+ # - https://yacy.electroncash.de
+ shortcut: ya
+ disabled: true
+    # if you aren't using HTTPS for your local yacy instance, allow plain HTTP:
+    # enable_http: true
+ search_mode: 'global'
+ # timeout can be reduced in 'local' search mode
+ timeout: 5.0
+
+ - name: yacy images
+ engine: yacy
+ network: yacy
+ categories: images
+ search_type: image
+ shortcut: yai
+ disabled: true
+ # timeout can be reduced in 'local' search mode
+ timeout: 5.0
+
+ - name: rumble
+ engine: rumble
+ shortcut: ru
+ base_url: https://rumble.com/
+ paging: true
+ categories: videos
+ disabled: true
+
+ - name: livespace
+ engine: livespace
+ shortcut: ls
+ categories: videos
+ disabled: true
+ timeout: 5.0
+
+ - name: wordnik
+ engine: wordnik
+ shortcut: def
+ base_url: https://www.wordnik.com/
+ categories: [dictionaries]
+ timeout: 5.0
+
+ - name: woxikon.de synonyme
+ engine: xpath
+ shortcut: woxi
+ categories: [dictionaries]
+ timeout: 5.0
+ disabled: true
+ search_url: https://synonyme.woxikon.de/synonyme/{query}.php
+ url_xpath: //div[@class="upper-synonyms"]/a/@href
+ content_xpath: //div[@class="synonyms-list-group"]
+ title_xpath: //div[@class="upper-synonyms"]/a
+ no_result_for_http_status: [404]
+ about:
+ website: https://www.woxikon.de/
+ wikidata_id: # No Wikidata ID
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+ language: de
+
+ - name: seekr news
+ engine: seekr
+ shortcut: senews
+ categories: news
+ seekr_category: news
+ disabled: true
+
+ - name: seekr images
+ engine: seekr
+ network: seekr news
+ shortcut: seimg
+ categories: images
+ seekr_category: images
+ disabled: true
+
+ - name: seekr videos
+ engine: seekr
+ network: seekr news
+ shortcut: sevid
+ categories: videos
+ seekr_category: videos
+ disabled: true
+
+ - name: sjp.pwn
+ engine: sjp
+ shortcut: sjp
+ base_url: https://sjp.pwn.pl/
+ timeout: 5.0
+ disabled: true
+
+ - name: stract
+ engine: stract
+ shortcut: str
+ disabled: true
+
+ - name: svgrepo
+ engine: svgrepo
+ shortcut: svg
+ timeout: 10.0
+ disabled: true
+
+ - name: tootfinder
+ engine: tootfinder
+ shortcut: toot
+
+ - name: voidlinux
+ engine: voidlinux
+ shortcut: void
+ disabled: true
+
+ - name: wallhaven
+ engine: wallhaven
+ # api_key: abcdefghijklmnopqrstuvwxyz
+ shortcut: wh
+
+ # wikimini: online encyclopedia for children
+  # The fulltext and title parameters are necessary for Wikimini because
+  # it sometimes will not show the results and redirects instead
+ - name: wikimini
+ engine: xpath
+ shortcut: wkmn
+ search_url: https://fr.wikimini.org/w/index.php?search={query}&title=Sp%C3%A9cial%3ASearch&fulltext=Search
+ url_xpath: //li/div[@class="mw-search-result-heading"]/a/@href
+ title_xpath: //li//div[@class="mw-search-result-heading"]/a
+ content_xpath: //li/div[@class="searchresult"]
+ categories: general
+ disabled: true
+ about:
+ website: https://wikimini.org/
+ wikidata_id: Q3568032
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+ language: fr
+
+ - name: wttr.in
+ engine: wttr
+ shortcut: wttr
+ timeout: 9.0
+
+ - name: yummly
+ engine: yummly
+ shortcut: yum
+ disabled: true
+
+ - name: brave
+ engine: brave
+ shortcut: br
+ time_range_support: true
+ paging: true
+ categories: [general, web]
+ brave_category: search
+ # brave_spellcheck: true
+
+ - name: brave.images
+ engine: brave
+ network: brave
+ shortcut: brimg
+ categories: [images, web]
+ brave_category: images
+
+ - name: brave.videos
+ engine: brave
+ network: brave
+ shortcut: brvid
+ categories: [videos, web]
+ brave_category: videos
+
+ - name: brave.news
+ engine: brave
+ network: brave
+ shortcut: brnews
+ categories: news
+ brave_category: news
+
+ # - name: brave.goggles
+ # engine: brave
+ # network: brave
+ # shortcut: brgog
+ # time_range_support: true
+ # paging: true
+ # categories: [general, web]
+ # brave_category: goggles
+ # Goggles: # required! This should be a URL ending in .goggle
+
+ - name: lib.rs
+ shortcut: lrs
+ engine: lib_rs
+ disabled: true
+
+ - name: sourcehut
+ shortcut: srht
+ engine: xpath
+ paging: true
+ search_url: https://sr.ht/projects?page={pageno}&search={query}
+ results_xpath: (//div[@class="event-list"])[1]/div[@class="event"]
+ url_xpath: ./h4/a[2]/@href
+ title_xpath: ./h4/a[2]
+ content_xpath: ./p
+ first_page_num: 1
+ categories: [it, repos]
+ disabled: true
+ about:
+ website: https://sr.ht
+ wikidata_id: Q78514485
+ official_api_documentation: https://man.sr.ht/
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+
+ - name: goo
+ shortcut: goo
+ engine: xpath
+ paging: true
+ search_url: https://search.goo.ne.jp/web.jsp?MT={query}&FR={pageno}0
+ url_xpath: //div[@class="result"]/p[@class='title fsL1']/a/@href
+ title_xpath: //div[@class="result"]/p[@class='title fsL1']/a
+ content_xpath: //p[contains(@class,'url fsM')]/following-sibling::p
+ first_page_num: 0
+ categories: [general, web]
+ disabled: true
+ timeout: 4.0
+ about:
+ website: https://search.goo.ne.jp
+ wikidata_id: Q249044
+ use_official_api: false
+ require_api_key: false
+ results: HTML
+ language: ja
+
+ - name: bt4g
+ engine: bt4g
+ shortcut: bt4g
+
+ - name: pkg.go.dev
+ engine: pkg_go_dev
+ shortcut: pgo
+ disabled: true
+
+# The Doku engine lets you access any DokuWiki instance:
+# a public one or a private/corporate one.
+# - name: ubuntuwiki
+# engine: doku
+# shortcut: uw
+# base_url: 'https://doc.ubuntu-fr.org'
+
+# Be careful when enabling this engine if you are
+# running a public instance. Do not expose any sensitive
+# information. You can restrict access by configuring a list
+# of access tokens under tokens.
+# - name: git grep
+# engine: command
+# command: ['git', 'grep', '{{QUERY}}']
+# shortcut: gg
+# tokens: []
+# disabled: true
+# delimiter:
+# chars: ':'
+# keys: ['filepath', 'code']
+
+# Be careful when enabling this engine if you are
+# running a public instance. Do not expose any sensitive
+# information. You can restrict access by configuring a list
+# of access tokens under tokens.
+# - name: locate
+# engine: command
+# command: ['locate', '{{QUERY}}']
+# shortcut: loc
+# tokens: []
+# disabled: true
+# delimiter:
+# chars: ' '
+# keys: ['line']
+
+# Be careful when enabling this engine if you are
+# running a public instance. Do not expose any sensitive
+# information. You can restrict access by configuring a list
+# of access tokens under tokens.
+# - name: find
+# engine: command
+# command: ['find', '.', '-name', '{{QUERY}}']
+# query_type: path
+# shortcut: fnd
+# tokens: []
+# disabled: true
+# delimiter:
+# chars: ' '
+# keys: ['line']
+
+# Be careful when enabling this engine if you are
+# running a public instance. Do not expose any sensitive
+# information. You can restrict access by configuring a list
+# of access tokens under tokens.
+# - name: pattern search in files
+# engine: command
+# command: ['fgrep', '{{QUERY}}']
+# shortcut: fgr
+# tokens: []
+# disabled: true
+# delimiter:
+# chars: ' '
+# keys: ['line']
+
+# Be careful when enabling this engine if you are
+# running a public instance. Do not expose any sensitive
+# information. You can restrict access by configuring a list
+# of access tokens under tokens.
+# - name: regex search in files
+# engine: command
+# command: ['grep', '{{QUERY}}']
+# shortcut: gr
+# tokens: []
+# disabled: true
+# delimiter:
+# chars: ' '
+# keys: ['line']
+
+doi_resolvers:
+ oadoi.org: 'https://oadoi.org/'
+ doi.org: 'https://doi.org/'
+ doai.io: 'https://dissem.in/'
+ sci-hub.se: 'https://sci-hub.se/'
+ sci-hub.st: 'https://sci-hub.st/'
+ sci-hub.ru: 'https://sci-hub.ru/'
+
+default_doi_resolver: 'oadoi.org'
diff --git a/api/core/tools/provider/builtin/searxng/docker/uwsgi.ini b/api/core/tools/provider/builtin/searxng/docker/uwsgi.ini
new file mode 100644
index 0000000000..9db3d76264
--- /dev/null
+++ b/api/core/tools/provider/builtin/searxng/docker/uwsgi.ini
@@ -0,0 +1,54 @@
+[uwsgi]
+# Who will run the code
+uid = searxng
+gid = searxng
+
+# Number of workers (usually CPU count)
+# default value: %k (= number of CPU cores, see Dockerfile)
+workers = %k
+
+# Number of threads per worker
+# default value: 4 (see Dockerfile)
+threads = 4
+
+# The permissions granted on the created socket
+chmod-socket = 666
+
+# Plugin to use and interpreter config
+single-interpreter = true
+master = true
+plugin = python3
+lazy-apps = true
+enable-threads = 4
+
+# Module to import
+module = searx.webapp
+
+# Virtualenv and python path
+pythonpath = /usr/local/searxng/
+chdir = /usr/local/searxng/searx/
+
+# automatically set processes name to something meaningful
+auto-procname = true
+
+# Disable request logging for privacy
+disable-logging = true
+log-5xx = true
+
+# Set the max size of a request (request-body excluded)
+buffer-size = 8192
+
+# No keep alive
+# See https://github.com/searx/searx-docker/issues/24
+add-header = Connection: close
+
+# Follow SIGTERM convention
+# See https://github.com/searxng/searxng/issues/3427
+die-on-term
+
+# uwsgi serves the static files
+static-map = /static=/usr/local/searxng/searx/static
+# expires set to one day
+static-expires = /* 86400
+static-gzip-all = True
+offload-threads = 4
diff --git a/api/core/tools/provider/builtin/searxng/searxng.py b/api/core/tools/provider/builtin/searxng/searxng.py
index 24b94b5ca4..ab354003e6 100644
--- a/api/core/tools/provider/builtin/searxng/searxng.py
+++ b/api/core/tools/provider/builtin/searxng/searxng.py
@@ -17,8 +17,7 @@ class SearXNGProvider(BuiltinToolProviderController):
tool_parameters={
"query": "SearXNG",
"limit": 1,
- "search_type": "page",
- "result_type": "link"
+ "search_type": "general"
},
)
except Exception as e:
diff --git a/api/core/tools/provider/builtin/searxng/searxng.yaml b/api/core/tools/provider/builtin/searxng/searxng.yaml
index 64bd428280..9554c93d5a 100644
--- a/api/core/tools/provider/builtin/searxng/searxng.yaml
+++ b/api/core/tools/provider/builtin/searxng/searxng.yaml
@@ -6,21 +6,18 @@ identity:
zh_Hans: SearXNG
description:
en_US: A free internet metasearch engine.
- zh_Hans: 开源互联网元搜索引擎
+ zh_Hans: 开源免费的互联网元搜索引擎
icon: icon.svg
tags:
- search
- productivity
credentials_for_provider:
searxng_base_url:
- type: secret-input
+ type: text-input
required: true
label:
en_US: SearXNG base URL
zh_Hans: SearXNG base URL
- help:
- en_US: Please input your SearXNG base URL
- zh_Hans: 请输入您的 SearXNG base URL
placeholder:
en_US: Please input your SearXNG base URL
zh_Hans: 请输入您的 SearXNG base URL
diff --git a/api/core/tools/provider/builtin/searxng/tools/searxng_search.py b/api/core/tools/provider/builtin/searxng/tools/searxng_search.py
index 5d12553629..dc835a8e8c 100644
--- a/api/core/tools/provider/builtin/searxng/tools/searxng_search.py
+++ b/api/core/tools/provider/builtin/searxng/tools/searxng_search.py
@@ -1,4 +1,3 @@
-import json
from typing import Any
import requests
@@ -7,90 +6,11 @@ from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.tool.builtin_tool import BuiltinTool
-class SearXNGSearchResults(dict):
- """Wrapper for search results."""
-
- def __init__(self, data: str):
- super().__init__(json.loads(data))
- self.__dict__ = self
-
- @property
- def results(self) -> Any:
- return self.get("results", [])
-
-
class SearXNGSearchTool(BuiltinTool):
"""
Tool for performing a search using SearXNG engine.
"""
- SEARCH_TYPE: dict[str, str] = {
- "page": "general",
- "news": "news",
- "image": "images",
- # "video": "videos",
- # "file": "files"
- }
- LINK_FILED: dict[str, str] = {
- "page": "url",
- "news": "url",
- "image": "img_src",
- # "video": "iframe_src",
- # "file": "magnetlink"
- }
- TEXT_FILED: dict[str, str] = {
- "page": "content",
- "news": "content",
- "image": "img_src",
- # "video": "iframe_src",
- # "file": "magnetlink"
- }
-
- def _invoke_query(self, user_id: str, host: str, query: str, search_type: str, result_type: str, topK: int = 5) -> list[dict]:
- """Run query and return the results."""
-
- search_type = search_type.lower()
- if search_type not in self.SEARCH_TYPE.keys():
- search_type= "page"
-
- response = requests.get(host, params={
- "q": query,
- "format": "json",
- "categories": self.SEARCH_TYPE[search_type]
- })
-
- if response.status_code != 200:
- raise Exception(f'Error {response.status_code}: {response.text}')
-
- search_results = SearXNGSearchResults(response.text).results[:topK]
-
- if result_type == 'link':
- results = []
- if search_type == "page" or search_type == "news":
- for r in search_results:
- results.append(self.create_text_message(
- text=f'{r["title"]}: {r.get(self.LINK_FILED[search_type], "")}'
- ))
- elif search_type == "image":
- for r in search_results:
- results.append(self.create_image_message(
- image=r.get(self.LINK_FILED[search_type], "")
- ))
- else:
- for r in search_results:
- results.append(self.create_link_message(
- link=r.get(self.LINK_FILED[search_type], "")
- ))
-
- return results
- else:
- text = ''
- for i, r in enumerate(search_results):
- text += f'{i+1}: {r["title"]} - {r.get(self.TEXT_FILED[search_type], "")}\n'
-
- return self.create_text_message(text=self.summary(user_id=user_id, content=text))
-
-
def _invoke(self, user_id: str, tool_parameters: dict[str, Any]) -> ToolInvokeMessage | list[ToolInvokeMessage]:
"""
Invoke the SearXNG search tool.
@@ -103,23 +23,21 @@ class SearXNGSearchTool(BuiltinTool):
ToolInvokeMessage | list[ToolInvokeMessage]: The result of the tool invocation.
"""
- host = self.runtime.credentials.get('searxng_base_url', None)
+ host = self.runtime.credentials.get('searxng_base_url')
if not host:
raise Exception('SearXNG api is required')
-
- query = tool_parameters.get('query')
- if not query:
- return self.create_text_message('Please input query')
-
- num_results = min(tool_parameters.get('num_results', 5), 20)
- search_type = tool_parameters.get('search_type', 'page') or 'page'
- result_type = tool_parameters.get('result_type', 'text') or 'text'
- return self._invoke_query(
- user_id=user_id,
- host=host,
- query=query,
- search_type=search_type,
- result_type=result_type,
- topK=num_results
- )
+ response = requests.get(host, params={
+ "q": tool_parameters.get('query'),
+ "format": "json",
+ "categories": tool_parameters.get('search_type', 'general')
+ })
+
+ if response.status_code != 200:
+ raise Exception(f'Error {response.status_code}: {response.text}')
+
+ res = response.json().get("results", [])
+ if not res:
+        return self.create_text_message(f"No results found, raw response: {response.content}")
+
+ return [self.create_json_message(item) for item in res]
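The rewritten `_invoke` above forwards the raw SearXNG results as JSON messages. A standalone sketch of the result-extraction step, with a hypothetical payload shaped like SearXNG's `/search?format=json` response (field names are assumptions based on that API):

```python
def extract_results(payload: dict, limit: int = 5) -> list[dict]:
    # SearXNG's JSON endpoint carries its hits under the "results" key.
    return payload.get("results", [])[:limit]

# Hypothetical sample response for illustration only.
sample = {
    "query": "SearXNG",
    "results": [
        {"title": "SearXNG docs", "url": "https://docs.searxng.org", "content": "metasearch"},
        {"title": "SearXNG repo", "url": "https://github.com/searxng/searxng", "content": "source"},
    ],
}
top = extract_results(sample, limit=1)
```

An empty `results` list falls through to the "No results found" branch in the tool above.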
diff --git a/api/core/tools/provider/builtin/searxng/tools/searxng_search.yaml b/api/core/tools/provider/builtin/searxng/tools/searxng_search.yaml
index 0edf1744f4..a5e448a303 100644
--- a/api/core/tools/provider/builtin/searxng/tools/searxng_search.yaml
+++ b/api/core/tools/provider/builtin/searxng/tools/searxng_search.yaml
@@ -1,13 +1,13 @@
identity:
name: searxng_search
- author: Tice
+ author: Junytang
label:
en_US: SearXNG Search
zh_Hans: SearXNG 搜索
description:
human:
- en_US: Perform searches on SearXNG and get results.
- zh_Hans: 在 SearXNG 上进行搜索并获取结果。
+ en_US: SearXNG is a free internet metasearch engine which aggregates results from more than 70 search services.
+ zh_Hans: SearXNG 是一个免费的互联网元搜索引擎,它从70多个不同的搜索服务中聚合搜索结果。
llm: Perform searches on SearXNG and get results.
parameters:
- name: query
@@ -16,9 +16,6 @@ parameters:
label:
en_US: Query string
zh_Hans: 查询语句
- human_description:
- en_US: The search query.
- zh_Hans: 搜索查询语句。
llm_description: Key words for searching
form: llm
- name: search_type
@@ -27,63 +24,46 @@ parameters:
label:
en_US: search type
zh_Hans: 搜索类型
- pt_BR: search type
- human_description:
- en_US: search type for page, news or image.
- zh_Hans: 选择搜索的类型:网页,新闻,图片。
- pt_BR: search type for page, news or image.
- default: Page
+ default: general
options:
- - value: Page
+ - value: general
label:
- en_US: Page
- zh_Hans: 网页
- pt_BR: Page
- - value: News
+ en_US: General
+ zh_Hans: 综合
+ - value: images
+ label:
+ en_US: Images
+ zh_Hans: 图片
+ - value: videos
+ label:
+ en_US: Videos
+ zh_Hans: 视频
+ - value: news
label:
en_US: News
zh_Hans: 新闻
- pt_BR: News
- - value: Image
+ - value: map
label:
- en_US: Image
- zh_Hans: 图片
- pt_BR: Image
- form: form
- - name: num_results
- type: number
- required: true
- label:
- en_US: Number of query results
- zh_Hans: 返回查询数量
- human_description:
- en_US: The number of query results.
- zh_Hans: 返回查询结果的数量。
- form: form
- default: 5
- min: 1
- max: 20
- - name: result_type
- type: select
- required: true
- label:
- en_US: result type
- zh_Hans: 结果类型
- pt_BR: result type
- human_description:
- en_US: return a list of links or texts.
- zh_Hans: 返回一个连接列表还是纯文本内容。
- pt_BR: return a list of links or texts.
- default: text
- options:
- - value: link
+ en_US: Map
+ zh_Hans: 地图
+ - value: music
label:
- en_US: Link
- zh_Hans: 链接
- pt_BR: Link
- - value: text
+ en_US: Music
+ zh_Hans: 音乐
+ - value: it
label:
- en_US: Text
- zh_Hans: 文本
- pt_BR: Text
+      en_US: IT
+ zh_Hans: 信息技术
+ - value: science
+ label:
+ en_US: Science
+ zh_Hans: 科学
+ - value: files
+ label:
+ en_US: Files
+ zh_Hans: 文件
+ - value: social_media
+ label:
+ en_US: Social Media
+ zh_Hans: 社交媒体
form: form
diff --git a/api/core/tools/provider/builtin/serper/_assets/icon.svg b/api/core/tools/provider/builtin/serper/_assets/icon.svg
new file mode 100644
index 0000000000..3f973a552e
--- /dev/null
+++ b/api/core/tools/provider/builtin/serper/_assets/icon.svg
@@ -0,0 +1,12 @@
+
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/serper/serper.py b/api/core/tools/provider/builtin/serper/serper.py
new file mode 100644
index 0000000000..2a42109373
--- /dev/null
+++ b/api/core/tools/provider/builtin/serper/serper.py
@@ -0,0 +1,23 @@
+from typing import Any
+
+from core.tools.errors import ToolProviderCredentialValidationError
+from core.tools.provider.builtin.serper.tools.serper_search import SerperSearchTool
+from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
+
+
+class SerperProvider(BuiltinToolProviderController):
+ def _validate_credentials(self, credentials: dict[str, Any]) -> None:
+ try:
+ SerperSearchTool().fork_tool_runtime(
+ runtime={
+ "credentials": credentials,
+ }
+ ).invoke(
+ user_id='',
+ tool_parameters={
+ "query": "test",
+ "result_type": "link"
+ },
+ )
+ except Exception as e:
+ raise ToolProviderCredentialValidationError(str(e))
diff --git a/api/core/tools/provider/builtin/serper/serper.yaml b/api/core/tools/provider/builtin/serper/serper.yaml
new file mode 100644
index 0000000000..b3b2d76c4b
--- /dev/null
+++ b/api/core/tools/provider/builtin/serper/serper.yaml
@@ -0,0 +1,31 @@
+identity:
+ author: zhuhao
+ name: serper
+ label:
+ en_US: Serper
+ zh_Hans: Serper
+ pt_BR: Serper
+ description:
+ en_US: Serper is a powerful real-time search engine tool API that provides structured data from Google Search.
+ zh_Hans: Serper 是一个强大的实时搜索引擎工具API,可提供来自 Google 搜索引擎搜索的结构化数据。
+ pt_BR: Serper is a powerful real-time search engine tool API that provides structured data from Google Search.
+ icon: icon.svg
+ tags:
+ - search
+credentials_for_provider:
+ serperapi_api_key:
+ type: secret-input
+ required: true
+ label:
+ en_US: Serper API key
+ zh_Hans: Serper API key
+ pt_BR: Serper API key
+ placeholder:
+ en_US: Please input your Serper API key
+ zh_Hans: 请输入你的 Serper API key
+ pt_BR: Please input your Serper API key
+ help:
+ en_US: Get your Serper API key from Serper
+ zh_Hans: 从 Serper 获取您的 Serper API key
+ pt_BR: Get your Serper API key from Serper
+ url: https://serper.dev/api-key
diff --git a/api/core/tools/provider/builtin/serper/tools/serper_search.py b/api/core/tools/provider/builtin/serper/tools/serper_search.py
new file mode 100644
index 0000000000..24facaf4ec
--- /dev/null
+++ b/api/core/tools/provider/builtin/serper/tools/serper_search.py
@@ -0,0 +1,44 @@
+from typing import Any, Union
+
+import requests
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.tool.builtin_tool import BuiltinTool
+
+SERPER_API_URL = "https://google.serper.dev/search"
+
+
+class SerperSearchTool(BuiltinTool):
+
+ def _parse_response(self, response: dict) -> dict:
+ result = {}
+ if "knowledgeGraph" in response:
+ result["title"] = response["knowledgeGraph"].get("title", "")
+ result["description"] = response["knowledgeGraph"].get("description", "")
+ if "organic" in response:
+ result["organic"] = [
+ {
+ "title": item.get("title", ""),
+ "link": item.get("link", ""),
+ "snippet": item.get("snippet", "")
+ }
+ for item in response["organic"]
+ ]
+        return result
+
+    def _invoke(self,
+ user_id: str,
+ tool_parameters: dict[str, Any],
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+ params = {
+ "q": tool_parameters['query'],
+ "gl": "us",
+ "hl": "en"
+ }
+ headers = {
+ 'X-API-KEY': self.runtime.credentials['serperapi_api_key'],
+ 'Content-Type': 'application/json'
+ }
+        response = requests.get(url=SERPER_API_URL, params=params, headers=headers)
+ response.raise_for_status()
+ valuable_res = self._parse_response(response.json())
+ return self.create_json_message(valuable_res)
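The `_parse_response` helper added above keeps only the knowledge-graph summary and the organic hits. A self-contained sketch of the same parsing logic over a hypothetical Serper-shaped payload (the field names follow the diff; the sample data is invented):

```python
def parse_serper_response(response: dict) -> dict:
    """Reduce a Serper search response to its useful fields."""
    result = {}
    if "knowledgeGraph" in response:
        kg = response["knowledgeGraph"]
        result["title"] = kg.get("title", "")
        result["description"] = kg.get("description", "")
    if "organic" in response:
        result["organic"] = [
            {"title": o.get("title", ""), "link": o.get("link", ""), "snippet": o.get("snippet", "")}
            for o in response["organic"]
        ]
    return result

# Invented sample payload for illustration.
sample = {
    "knowledgeGraph": {"title": "Python", "description": "Programming language"},
    "organic": [{"title": "python.org", "link": "https://www.python.org", "snippet": "Official site"}],
}
parsed = parse_serper_response(sample)
```

Responses without either key parse to an empty dict, which the tool still returns as a JSON message.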
diff --git a/api/core/tools/provider/builtin/serper/tools/serper_search.yaml b/api/core/tools/provider/builtin/serper/tools/serper_search.yaml
new file mode 100644
index 0000000000..e1c0a056e6
--- /dev/null
+++ b/api/core/tools/provider/builtin/serper/tools/serper_search.yaml
@@ -0,0 +1,27 @@
+identity:
+ name: serper
+ author: zhuhao
+ label:
+ en_US: Serper
+ zh_Hans: Serper
+ pt_BR: Serper
+description:
+ human:
+    en_US: A tool for performing a Google search and extracting snippets and webpages. Input should be a search query.
+    zh_Hans: 一个用于执行 Google 搜索并提取片段和网页的工具。输入应该是一个搜索查询。
+    pt_BR: A tool for performing a Google search and extracting snippets and webpages. Input should be a search query.
+  llm: A tool for performing a Google search and extracting snippets and webpages. Input should be a search query.
+parameters:
+ - name: query
+ type: string
+ required: true
+ label:
+ en_US: Query string
+ zh_Hans: 查询语句
+ pt_BR: Query string
+ human_description:
+ en_US: used for searching
+ zh_Hans: 用于搜索网页内容
+ pt_BR: used for searching
+ llm_description: key words for searching
+ form: llm
diff --git a/api/core/tools/provider/builtin/spider/spider.py b/api/core/tools/provider/builtin/spider/spider.py
index 6fa431b6bb..5bcc56a724 100644
--- a/api/core/tools/provider/builtin/spider/spider.py
+++ b/api/core/tools/provider/builtin/spider/spider.py
@@ -8,7 +8,13 @@ from core.tools.provider.builtin_tool_provider import BuiltinToolProviderControl
class SpiderProvider(BuiltinToolProviderController):
def _validate_credentials(self, credentials: dict[str, Any]) -> None:
try:
- app = Spider(api_key=credentials["spider_api_key"])
- app.scrape_url(url="https://spider.cloud")
+ app = Spider(api_key=credentials['spider_api_key'])
+ app.scrape_url(url='https://spider.cloud')
+ except AttributeError as e:
+ # Handle cases where NoneType is not iterable, which might indicate API issues
+ if 'NoneType' in str(e) and 'not iterable' in str(e):
+ raise ToolProviderCredentialValidationError('API is currently down, try again in 15 minutes', str(e))
+ else:
+ raise ToolProviderCredentialValidationError('An unexpected error occurred.', str(e))
except Exception as e:
- raise ToolProviderCredentialValidationError(str(e))
+ raise ToolProviderCredentialValidationError('An unexpected error occurred.', str(e))
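The spider.py change above maps one specific `AttributeError` pattern to a friendlier "API is down" message. The branch logic can be sketched as a pure function (the function name is hypothetical; the messages are the ones used in the diff):

```python
def classify_validation_error(exc: Exception) -> str:
    """Mirror the credential-validation branching: a non-iterable NoneType
    AttributeError is treated as a transient Spider API outage, everything
    else as a generic failure."""
    if isinstance(exc, AttributeError) and "NoneType" in str(exc) and "not iterable" in str(exc):
        return "API is currently down, try again in 15 minutes"
    return "An unexpected error occurred."
```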
diff --git a/api/core/tools/provider/builtin/stepfun/__init__.py b/api/core/tools/provider/builtin/stepfun/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/core/tools/provider/builtin/stepfun/_assets/icon.png b/api/core/tools/provider/builtin/stepfun/_assets/icon.png
new file mode 100644
index 0000000000..85b96d0c74
Binary files /dev/null and b/api/core/tools/provider/builtin/stepfun/_assets/icon.png differ
diff --git a/api/core/tools/provider/builtin/stepfun/stepfun.py b/api/core/tools/provider/builtin/stepfun/stepfun.py
new file mode 100644
index 0000000000..e809b04546
--- /dev/null
+++ b/api/core/tools/provider/builtin/stepfun/stepfun.py
@@ -0,0 +1,25 @@
+from typing import Any
+
+from core.tools.errors import ToolProviderCredentialValidationError
+from core.tools.provider.builtin.stepfun.tools.image import StepfunTool
+from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
+
+
+class StepfunProvider(BuiltinToolProviderController):
+ def _validate_credentials(self, credentials: dict[str, Any]) -> None:
+ try:
+ StepfunTool().fork_tool_runtime(
+ runtime={
+ "credentials": credentials,
+ }
+ ).invoke(
+ user_id='',
+ tool_parameters={
+ "prompt": "cute girl, blue eyes, white hair, anime style",
+ "size": "1024x1024",
+ "n": 1
+ },
+ )
+ except Exception as e:
+ raise ToolProviderCredentialValidationError(str(e))
+
\ No newline at end of file
diff --git a/api/core/tools/provider/builtin/stepfun/stepfun.yaml b/api/core/tools/provider/builtin/stepfun/stepfun.yaml
new file mode 100644
index 0000000000..1f841ec369
--- /dev/null
+++ b/api/core/tools/provider/builtin/stepfun/stepfun.yaml
@@ -0,0 +1,46 @@
+identity:
+ author: Stepfun
+ name: stepfun
+ label:
+ en_US: Image-1X
+ zh_Hans: 阶跃星辰绘画
+ pt_BR: Image-1X
+ description:
+ en_US: Image-1X
+ zh_Hans: 阶跃星辰绘画
+ pt_BR: Image-1X
+ icon: icon.png
+ tags:
+ - image
+ - productivity
+credentials_for_provider:
+ stepfun_api_key:
+ type: secret-input
+ required: true
+ label:
+ en_US: Stepfun API key
+ zh_Hans: 阶跃星辰API key
+ pt_BR: Stepfun API key
+ help:
+ en_US: Please input your stepfun API key
+ zh_Hans: 请输入你的阶跃星辰 API key
+ pt_BR: Please input your stepfun API key
+ placeholder:
+ en_US: Please input your stepfun API key
+ zh_Hans: 请输入你的阶跃星辰 API key
+ pt_BR: Please input your stepfun API key
+ stepfun_base_url:
+ type: text-input
+ required: false
+ label:
+ en_US: Stepfun base URL
+ zh_Hans: 阶跃星辰 base URL
+ pt_BR: Stepfun base URL
+ help:
+ en_US: Please input your Stepfun base URL
+ zh_Hans: 请输入你的阶跃星辰 base URL
+ pt_BR: Please input your Stepfun base URL
+ placeholder:
+ en_US: Please input your Stepfun base URL
+ zh_Hans: 请输入你的阶跃星辰 base URL
+ pt_BR: Please input your Stepfun base URL
diff --git a/api/core/tools/provider/builtin/stepfun/tools/image.py b/api/core/tools/provider/builtin/stepfun/tools/image.py
new file mode 100644
index 0000000000..5e544aada6
--- /dev/null
+++ b/api/core/tools/provider/builtin/stepfun/tools/image.py
@@ -0,0 +1,72 @@
+import random
+from typing import Any, Union
+
+from openai import OpenAI
+from yarl import URL
+
+from core.tools.entities.tool_entities import ToolInvokeMessage
+from core.tools.tool.builtin_tool import BuiltinTool
+
+
+class StepfunTool(BuiltinTool):
+ """ Stepfun Image Generation Tool """
+ def _invoke(self,
+ user_id: str,
+ tool_parameters: dict[str, Any],
+ ) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
+ """
+ invoke tools
+ """
+ base_url = self.runtime.credentials.get('stepfun_base_url', None)
+ if not base_url:
+ base_url = None
+ else:
+ base_url = str(URL(base_url) / 'v1')
+
+ client = OpenAI(
+ api_key=self.runtime.credentials['stepfun_api_key'],
+ base_url=base_url,
+ )
+
+ extra_body = {}
+ model = tool_parameters.get('model', 'step-1x-medium')
+ if not model:
+ return self.create_text_message('Please input model name')
+ # prompt
+ prompt = tool_parameters.get('prompt', '')
+ if not prompt:
+ return self.create_text_message('Please input prompt')
+
+ seed = tool_parameters.get('seed', 0)
+ if seed > 0:
+ extra_body['seed'] = seed
+ steps = tool_parameters.get('steps', 0)
+ if steps > 0:
+ extra_body['steps'] = steps
+ negative_prompt = tool_parameters.get('negative_prompt', '')
+ if negative_prompt:
+ extra_body['negative_prompt'] = negative_prompt
+
+ # call openapi stepfun model
+ response = client.images.generate(
+ prompt=prompt,
+ model=model,
+ size=tool_parameters.get('size', '1024x1024'),
+ n=tool_parameters.get('n', 1),
+            extra_body=extra_body
+ )
+
+ result = []
+ for image in response.data:
+ result.append(self.create_image_message(image=image.url))
+ result.append(self.create_json_message({
+ "url": image.url,
+ }))
+ return result
+
+ @staticmethod
+ def _generate_random_id(length=8):
+ characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
+ random_id = ''.join(random.choices(characters, k=length))
+ return random_id
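The new Stepfun tool only forwards the optional generation knobs (`seed`, `steps`, `negative_prompt`) when they are actually set, via `extra_body`. That filtering step can be sketched in isolation (helper name is hypothetical; key names follow the diff):

```python
def build_extra_body(params: dict) -> dict:
    """Collect only the optional image-generation parameters that were
    explicitly provided, matching the seed/steps/negative_prompt handling."""
    extra = {}
    if params.get("seed", 0) > 0:
        extra["seed"] = params["seed"]
    if params.get("steps", 0) > 0:
        extra["steps"] = params["steps"]
    if params.get("negative_prompt"):
        extra["negative_prompt"] = params["negative_prompt"]
    return extra
```

Passing an empty `extra_body` leaves the model's server-side defaults in effect.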
diff --git a/api/core/tools/provider/builtin/stepfun/tools/image.yaml b/api/core/tools/provider/builtin/stepfun/tools/image.yaml
new file mode 100644
index 0000000000..1e20b157aa
--- /dev/null
+++ b/api/core/tools/provider/builtin/stepfun/tools/image.yaml
@@ -0,0 +1,158 @@
+identity:
+ name: stepfun
+ author: Stepfun
+ label:
+ en_US: step-1x
+ zh_Hans: 阶跃星辰绘画
+ pt_BR: step-1x
+ description:
+ en_US: step-1x is a powerful drawing tool by stepfun, you can draw the image based on your prompt
+ zh_Hans: step-1x 系列是阶跃星辰提供的强大的绘画工具,它可以根据您的提示词绘制出您想要的图像。
+ pt_BR: step-1x is a powerful drawing tool by stepfun, you can draw the image based on your prompt
+description:
+ human:
+ en_US: step-1x is a text to image tool
+ zh_Hans: step-1x 是一个文本/图像到图像的工具
+ pt_BR: step-1x is a text to image tool
+ llm: step-1x is a tool used to generate images from text or image
+parameters:
+ - name: prompt
+ type: string
+ required: true
+ label:
+ en_US: Prompt
+ zh_Hans: 提示词
+ pt_BR: Prompt
+ human_description:
+ en_US: Image prompt, you can check the official documentation of step-1x
+ zh_Hans: 图像提示词,您可以查看step-1x 的官方文档
+ pt_BR: Image prompt, you can check the official documentation of step-1x
+    llm_description: Image prompt for step-1x; describe the image you want to generate as a detailed, comma-separated list of keywords.
+ form: llm
+ - name: model
+ type: select
+ required: false
+ human_description:
+ en_US: used for selecting the model name
+ zh_Hans: 用于选择模型的名字
+ pt_BR: used for selecting the model name
+ label:
+ en_US: Model Name
+ zh_Hans: 模型名字
+ pt_BR: Model Name
+ form: form
+ options:
+ - value: step-1x-turbo
+ label:
+ en_US: turbo
+ zh_Hans: turbo
+ pt_BR: turbo
+ - value: step-1x-medium
+ label:
+ en_US: medium
+ zh_Hans: medium
+ pt_BR: medium
+ - value: step-1x-large
+ label:
+ en_US: large
+ zh_Hans: large
+ pt_BR: large
+ default: step-1x-medium
+ - name: size
+ type: select
+ required: false
+ human_description:
+ en_US: used for selecting the image size
+ zh_Hans: 用于选择图像大小
+ pt_BR: used for selecting the image size
+ label:
+ en_US: Image size
+ zh_Hans: 图像大小
+ pt_BR: Image size
+ form: form
+ options:
+ - value: 256x256
+ label:
+ en_US: 256x256
+ zh_Hans: 256x256
+ pt_BR: 256x256
+ - value: 512x512
+ label:
+ en_US: 512x512
+ zh_Hans: 512x512
+ pt_BR: 512x512
+ - value: 768x768
+ label:
+ en_US: 768x768
+ zh_Hans: 768x768
+ pt_BR: 768x768
+ - value: 1024x1024
+ label:
+ en_US: 1024x1024
+ zh_Hans: 1024x1024
+ pt_BR: 1024x1024
+ - value: 1280x800
+ label:
+ en_US: 1280x800
+ zh_Hans: 1280x800
+ pt_BR: 1280x800
+ - value: 800x1280
+ label:
+ en_US: 800x1280
+ zh_Hans: 800x1280
+ pt_BR: 800x1280
+ default: 1024x1024
+ - name: n
+ type: number
+ required: true
+ human_description:
+ en_US: used for selecting the number of images
+ zh_Hans: 用于选择图像数量
+ pt_BR: used for selecting the number of images
+ label:
+ en_US: Number of images
+ zh_Hans: 图像数量
+ pt_BR: Number of images
+ form: form
+ default: 1
+ min: 1
+ max: 10
+ - name: seed
+ type: number
+ required: false
+ label:
+ en_US: seed
+ zh_Hans: seed
+ pt_BR: seed
+ human_description:
+ en_US: seed
+ zh_Hans: seed
+ pt_BR: seed
+ form: form
+ default: 10
+ - name: steps
+ type: number
+ required: false
+ label:
+ en_US: Steps
+ zh_Hans: Steps
+ pt_BR: Steps
+ human_description:
+ en_US: Steps
+ zh_Hans: Steps
+ pt_BR: Steps
+ form: form
+ default: 10
+ - name: negative_prompt
+ type: string
+ required: false
+ label:
+ en_US: Negative prompt
+ zh_Hans: Negative prompt
+ pt_BR: Negative prompt
+ human_description:
+ en_US: Negative prompt
+ zh_Hans: Negative prompt
+ pt_BR: Negative prompt
+ form: form
+ default: (worst quality:1.3), (nsfw), low quality
diff --git a/api/core/tools/tool/dataset_retriever/dataset_multi_retriever_tool.py b/api/core/tools/tool/dataset_retriever/dataset_multi_retriever_tool.py
index 1a0933af16..7cb7c033bb 100644
--- a/api/core/tools/tool/dataset_retriever/dataset_multi_retriever_tool.py
+++ b/api/core/tools/tool/dataset_retriever/dataset_multi_retriever_tool.py
@@ -177,10 +177,12 @@ class DatasetMultiRetrieverTool(DatasetRetrieverBaseTool):
dataset_id=dataset.id,
query=query,
top_k=self.top_k,
- score_threshold=retrieval_model['score_threshold']
+ score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
- reranking_model=retrieval_model['reranking_model']
+ reranking_model=retrieval_model.get('reranking_model', None)
if retrieval_model['reranking_enable'] else None,
+ reranking_mode=retrieval_model.get('reranking_mode')
+ if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)
diff --git a/api/core/tools/tool/dataset_retriever/dataset_retriever_tool.py b/api/core/tools/tool/dataset_retriever/dataset_retriever_tool.py
index 397ff7966e..a7e70af628 100644
--- a/api/core/tools/tool/dataset_retriever/dataset_retriever_tool.py
+++ b/api/core/tools/tool/dataset_retriever/dataset_retriever_tool.py
@@ -14,6 +14,7 @@ default_retrieval_model = {
'reranking_provider_name': '',
'reranking_model_name': ''
},
+ 'reranking_mode': 'reranking_model',
'top_k': 2,
'score_threshold_enabled': False
}
@@ -71,14 +72,16 @@ class DatasetRetrieverTool(DatasetRetrieverBaseTool):
else:
if self.top_k > 0:
# retrieval source
- documents = RetrievalService.retrieve(retrival_method=retrieval_model['search_method'],
+ documents = RetrievalService.retrieve(retrival_method=retrieval_model.get('search_method', 'semantic_search'),
dataset_id=dataset.id,
query=query,
top_k=self.top_k,
- score_threshold=retrieval_model['score_threshold']
+ score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
- reranking_model=retrieval_model['reranking_model']
+ reranking_model=retrieval_model.get('reranking_model', None)
if retrieval_model['reranking_enable'] else None,
+ reranking_mode=retrieval_model.get('reranking_mode')
+ if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)
else:
diff --git a/api/core/tools/tool/workflow_tool.py b/api/core/tools/tool/workflow_tool.py
index 071081303c..12e498e76d 100644
--- a/api/core/tools/tool/workflow_tool.py
+++ b/api/core/tools/tool/workflow_tool.py
@@ -72,6 +72,7 @@ class WorkflowTool(Tool):
result.append(self.create_file_var_message(file))
result.append(self.create_text_message(json.dumps(outputs, ensure_ascii=False)))
+ result.append(self.create_json_message(outputs))
return result
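The workflow_tool.py change above now emits the outputs twice: once as a serialized text message and once as a structured JSON message, so downstream nodes can consume either form. A minimal sketch of that dual rendering (the `(kind, payload)` tuple shape is an assumption standing in for Dify's message objects):

```python
import json

def render_outputs(outputs: dict) -> list[tuple[str, object]]:
    """Emit workflow outputs as both a JSON-serialized text message and a
    structured JSON message, mirroring the added create_json_message call."""
    return [
        ("text", json.dumps(outputs, ensure_ascii=False)),
        ("json", outputs),
    ]
```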
diff --git a/api/core/workflow/entities/node_entities.py b/api/core/workflow/entities/node_entities.py
index 6cc639f55f..5e2258d624 100644
--- a/api/core/workflow/entities/node_entities.py
+++ b/api/core/workflow/entities/node_entities.py
@@ -23,10 +23,12 @@ class NodeType(Enum):
HTTP_REQUEST = 'http-request'
TOOL = 'tool'
VARIABLE_AGGREGATOR = 'variable-aggregator'
+ # TODO: merge this into VARIABLE_AGGREGATOR
VARIABLE_ASSIGNER = 'variable-assigner'
LOOP = 'loop'
ITERATION = 'iteration'
PARAMETER_EXTRACTOR = 'parameter-extractor'
+ CONVERSATION_VARIABLE_ASSIGNER = 'assigner'
@classmethod
def value_of(cls, value: str) -> 'NodeType':
diff --git a/api/core/workflow/entities/variable_pool.py b/api/core/workflow/entities/variable_pool.py
index 270e104e37..39165b4988 100644
--- a/api/core/workflow/entities/variable_pool.py
+++ b/api/core/workflow/entities/variable_pool.py
@@ -14,6 +14,7 @@ VariableValue = Union[str, int, float, dict, list, FileVar]
SYSTEM_VARIABLE_NODE_ID = 'sys'
ENVIRONMENT_VARIABLE_NODE_ID = 'env'
+CONVERSATION_VARIABLE_NODE_ID = 'conversation'
class VariablePool(BaseModel):
@@ -40,6 +41,8 @@ class VariablePool(BaseModel):
default_factory=list
)
+ conversation_variables: Sequence[Variable] | None = None
+
@model_validator(mode="after")
def val_model_after(self):
"""
@@ -54,6 +57,10 @@ class VariablePool(BaseModel):
for var in self.environment_variables or []:
self.add((ENVIRONMENT_VARIABLE_NODE_ID, var.name), var)
+ # Add conversation variables to the variable pool
+ for var in self.conversation_variables or []:
+ self.add((CONVERSATION_VARIABLE_NODE_ID, var.name), var)
+
return self
def add(self, selector: Sequence[str], value: Any, /) -> None:
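The variable_pool.py change registers conversation variables under the new `conversation` node id, alongside the existing `env` namespace. A simplified sketch of that seeding step, using a plain dict keyed by `(node_id, name)` selectors in place of the real `VariablePool` (the function and data shapes are illustrative only):

```python
ENVIRONMENT_VARIABLE_NODE_ID = "env"
CONVERSATION_VARIABLE_NODE_ID = "conversation"

def seed_pool(env_vars: dict, conversation_vars: dict) -> dict:
    """Namespace each variable under its node id, as the model validator
    does for environment and (newly) conversation variables."""
    pool = {}
    for name, value in env_vars.items():
        pool[(ENVIRONMENT_VARIABLE_NODE_ID, name)] = value
    for name, value in conversation_vars.items():
        pool[(CONVERSATION_VARIABLE_NODE_ID, name)] = value
    return pool
```

Namespacing by node id keeps a conversation variable from shadowing an environment variable with the same name.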
diff --git a/api/core/workflow/nodes/code/code_node.py b/api/core/workflow/nodes/code/code_node.py
index 7ad09a37a9..6395e91e53 100644
--- a/api/core/workflow/nodes/code/code_node.py
+++ b/api/core/workflow/nodes/code/code_node.py
@@ -92,8 +92,11 @@ class CodeNode(BaseNode):
:return:
"""
if not isinstance(value, str):
- raise ValueError(f"Output variable `{variable}` must be a string")
-
+            if value is None:
+                return None
+            else:
+                raise ValueError(f"Output variable `{variable}` must be a string")
+
if len(value) > MAX_STRING_LENGTH:
raise ValueError(f'The length of output variable `{variable}` must be less than {MAX_STRING_LENGTH} characters')
@@ -107,7 +110,10 @@ class CodeNode(BaseNode):
:return:
"""
if not isinstance(value, int | float):
- raise ValueError(f"Output variable `{variable}` must be a number")
+            if value is None:
+                return None
+            else:
+                raise ValueError(f"Output variable `{variable}` must be a number")
if value > MAX_NUMBER or value < MIN_NUMBER:
raise ValueError(f'Output variable `{variable}` is out of range, it must be between {MIN_NUMBER} and {MAX_NUMBER}.')
@@ -155,28 +161,31 @@ class CodeNode(BaseNode):
elif isinstance(output_value, list):
first_element = output_value[0] if len(output_value) > 0 else None
if first_element is not None:
- if isinstance(first_element, int | float) and all(isinstance(value, int | float) for value in output_value):
+ if isinstance(first_element, int | float) and all(value is None or isinstance(value, int | float) for value in output_value):
for i, value in enumerate(output_value):
self._check_number(
value=value,
variable=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]'
)
- elif isinstance(first_element, str) and all(isinstance(value, str) for value in output_value):
+ elif isinstance(first_element, str) and all(value is None or isinstance(value, str) for value in output_value):
for i, value in enumerate(output_value):
self._check_string(
value=value,
variable=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]'
)
- elif isinstance(first_element, dict) and all(isinstance(value, dict) for value in output_value):
+ elif isinstance(first_element, dict) and all(value is None or isinstance(value, dict) for value in output_value):
for i, value in enumerate(output_value):
- self._transform_result(
- result=value,
- output_schema=None,
- prefix=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]',
- depth=depth + 1
- )
+ if value is not None:
+ self._transform_result(
+ result=value,
+ output_schema=None,
+ prefix=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]',
+ depth=depth + 1
+ )
else:
raise ValueError(f'Output {prefix}.{output_name} is not a valid array. make sure all elements are of the same type.')
+        elif output_value is None:
+ pass
else:
raise ValueError(f'Output {prefix}.{output_name} is not a valid type.')
@@ -191,16 +200,19 @@ class CodeNode(BaseNode):
if output_config.type == 'object':
# check if output is object
if not isinstance(result.get(output_name), dict):
- raise ValueError(
- f'Output {prefix}{dot}{output_name} is not an object, got {type(result.get(output_name))} instead.'
+                if result.get(output_name) is None:
+ transformed_result[output_name] = None
+ else:
+ raise ValueError(
+ f'Output {prefix}{dot}{output_name} is not an object, got {type(result.get(output_name))} instead.'
+ )
+ else:
+ transformed_result[output_name] = self._transform_result(
+ result=result[output_name],
+ output_schema=output_config.children,
+ prefix=f'{prefix}.{output_name}',
+ depth=depth + 1
)
-
- transformed_result[output_name] = self._transform_result(
- result=result[output_name],
- output_schema=output_config.children,
- prefix=f'{prefix}.{output_name}',
- depth=depth + 1
- )
elif output_config.type == 'number':
# check if number available
transformed_result[output_name] = self._check_number(
@@ -216,68 +228,80 @@ class CodeNode(BaseNode):
elif output_config.type == 'array[number]':
# check if array of number available
if not isinstance(result[output_name], list):
- raise ValueError(
- f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
- )
+                    if result[output_name] is None:
+ transformed_result[output_name] = None
+ else:
+ raise ValueError(
+ f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
+ )
+ else:
+ if len(result[output_name]) > MAX_NUMBER_ARRAY_LENGTH:
+ raise ValueError(
+ f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_NUMBER_ARRAY_LENGTH} elements.'
+ )
- if len(result[output_name]) > MAX_NUMBER_ARRAY_LENGTH:
- raise ValueError(
- f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_NUMBER_ARRAY_LENGTH} elements.'
- )
-
- transformed_result[output_name] = [
- self._check_number(
- value=value,
- variable=f'{prefix}{dot}{output_name}[{i}]'
- )
- for i, value in enumerate(result[output_name])
- ]
+ transformed_result[output_name] = [
+ self._check_number(
+ value=value,
+ variable=f'{prefix}{dot}{output_name}[{i}]'
+ )
+ for i, value in enumerate(result[output_name])
+ ]
elif output_config.type == 'array[string]':
# check if array of string available
if not isinstance(result[output_name], list):
- raise ValueError(
- f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
- )
+                    if result[output_name] is None:
+ transformed_result[output_name] = None
+ else:
+ raise ValueError(
+ f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
+ )
+ else:
+ if len(result[output_name]) > MAX_STRING_ARRAY_LENGTH:
+ raise ValueError(
+ f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_STRING_ARRAY_LENGTH} elements.'
+ )
- if len(result[output_name]) > MAX_STRING_ARRAY_LENGTH:
- raise ValueError(
- f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_STRING_ARRAY_LENGTH} elements.'
- )
-
- transformed_result[output_name] = [
- self._check_string(
- value=value,
- variable=f'{prefix}{dot}{output_name}[{i}]'
- )
- for i, value in enumerate(result[output_name])
- ]
+ transformed_result[output_name] = [
+ self._check_string(
+ value=value,
+ variable=f'{prefix}{dot}{output_name}[{i}]'
+ )
+ for i, value in enumerate(result[output_name])
+ ]
elif output_config.type == 'array[object]':
# check if array of object available
if not isinstance(result[output_name], list):
- raise ValueError(
- f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
- )
-
- if len(result[output_name]) > MAX_OBJECT_ARRAY_LENGTH:
- raise ValueError(
- f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_OBJECT_ARRAY_LENGTH} elements.'
- )
-
- for i, value in enumerate(result[output_name]):
- if not isinstance(value, dict):
+ if isinstance(result[output_name], type(None)):
+ transformed_result[output_name] = None
+ else:
raise ValueError(
- f'Output {prefix}{dot}{output_name}[{i}] is not an object, got {type(value)} instead at index {i}.'
+ f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
+ else:
+ if len(result[output_name]) > MAX_OBJECT_ARRAY_LENGTH:
+ raise ValueError(
+ f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_OBJECT_ARRAY_LENGTH} elements.'
+ )
+
+ for i, value in enumerate(result[output_name]):
+ if not isinstance(value, dict):
+ if isinstance(value, type(None)):
+ pass
+ else:
+ raise ValueError(
+ f'Output {prefix}{dot}{output_name}[{i}] is not an object, got {type(value)} instead at index {i}.'
+ )
- transformed_result[output_name] = [
- self._transform_result(
- result=value,
- output_schema=output_config.children,
- prefix=f'{prefix}{dot}{output_name}[{i}]',
- depth=depth + 1
- )
- for i, value in enumerate(result[output_name])
- ]
+ transformed_result[output_name] = [
+ None if value is None else self._transform_result(
+ result=value,
+ output_schema=output_config.children,
+ prefix=f'{prefix}{dot}{output_name}[{i}]',
+ depth=depth + 1
+ )
+ for i, value in enumerate(result[output_name])
+ ]
else:
raise ValueError(f'Output type {output_config.type} is not supported.')
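The hunks above change the `array[number]`/`array[string]`/`array[object]` branches so that a `None` output passes through instead of raising. A minimal standalone sketch of the new behavior for the number-array case (hypothetical helper name and simplified element check, not the actual `CodeNode` API):

```python
def check_number_array(value, name, max_len=1000):
    """Sketch of the patched array[number] branch: None now passes through."""
    if value is None:
        # previously this raised "is not an array"; the patch returns None
        return None
    if not isinstance(value, list):
        raise ValueError(f'Output {name} is not an array, got {type(value)} instead.')
    if len(value) > max_len:
        raise ValueError(f'The length of output variable `{name}` must be '
                         f'less than {max_len} elements.')
    for i, v in enumerate(value):
        if not isinstance(v, (int, float)):
            raise ValueError(f'Output {name}[{i}] is not a number.')
    return list(value)
```

The same None-tolerant shape is applied to the string-array and object-array branches, with per-element `None` entries also tolerated in the object case.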
diff --git a/api/core/workflow/nodes/http_request/http_executor.py b/api/core/workflow/nodes/http_request/http_executor.py
index 3c24c0a018..db18bd00b2 100644
--- a/api/core/workflow/nodes/http_request/http_executor.py
+++ b/api/core/workflow/nodes/http_request/http_executor.py
@@ -337,7 +337,7 @@ class HttpExecutor:
if variable is None:
raise ValueError(f'Variable {variable_selector.variable} not found')
if escape_quotes and isinstance(variable, str):
- value = variable.replace('"', '\\"')
+ value = variable.replace('"', '\\"').replace('\n', '\\n')
else:
value = variable
variable_value_mapping[variable_selector.variable] = value
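The one-line change above extends quote escaping to also escape literal newlines, so a multi-line string variable substituted into a request template (for example a JSON body) cannot break it. A standalone sketch of the escaping (hypothetical function name):

```python
def escape_for_json_template(variable: str) -> str:
    # Escape double quotes and, new in this patch, literal newlines,
    # so the substituted value stays a single valid JSON string token.
    return variable.replace('"', '\\"').replace('\n', '\\n')
```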
diff --git a/api/core/workflow/nodes/knowledge_retrieval/entities.py b/api/core/workflow/nodes/knowledge_retrieval/entities.py
index 5758b895f3..7cf392277c 100644
--- a/api/core/workflow/nodes/knowledge_retrieval/entities.py
+++ b/api/core/workflow/nodes/knowledge_retrieval/entities.py
@@ -33,7 +33,6 @@ class WeightedScoreConfig(BaseModel):
"""
Weighted score Config.
"""
- weight_type: str
vector_setting: VectorSetting
keyword_setting: KeywordSetting
@@ -49,7 +48,6 @@ class MultipleRetrievalConfig(BaseModel):
reranking_model: Optional[RerankingModelConfig] = None
weights: Optional[WeightedScoreConfig] = None
-
class ModelConfig(BaseModel):
"""
Model Config.
diff --git a/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py b/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py
index cdd7641b81..7c6811d0e8 100644
--- a/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py
+++ b/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py
@@ -149,7 +149,6 @@ class KnowledgeRetrievalNode(BaseNode):
elif node_data.multiple_retrieval_config.reranking_mode == 'weighted_score':
reranking_model = None
weights = {
- 'weight_type': node_data.multiple_retrieval_config.weights.weight_type,
'vector_setting': {
"vector_weight": node_data.multiple_retrieval_config.weights.vector_setting.vector_weight,
"embedding_provider_name": node_data.multiple_retrieval_config.weights.vector_setting.embedding_provider_name,
diff --git a/api/core/workflow/nodes/tool/tool_node.py b/api/core/workflow/nodes/tool/tool_node.py
index 1141417c55..c10ee542f1 100644
--- a/api/core/workflow/nodes/tool/tool_node.py
+++ b/api/core/workflow/nodes/tool/tool_node.py
@@ -118,6 +118,7 @@ class ToolNode(BaseNode):
for parameter_name in node_data.tool_parameters:
parameter = tool_parameters_dictionary.get(parameter_name)
if not parameter:
+ result[parameter_name] = None
continue
if parameter.type == ToolParameter.ToolParameterType.FILE:
result[parameter_name] = [
@@ -175,13 +176,14 @@ class ToolNode(BaseNode):
ext = path.splitext(url)[1]
mimetype = response.meta.get('mime_type', 'image/jpeg')
filename = response.save_as or url.split('/')[-1]
+ transfer_method = response.meta.get('transfer_method', FileTransferMethod.TOOL_FILE)
# get tool file id
tool_file_id = url.split('/')[-1].split('.')[0]
result.append(FileVar(
tenant_id=self.tenant_id,
type=FileType.IMAGE,
- transfer_method=FileTransferMethod.TOOL_FILE,
+ transfer_method=transfer_method,
url=url,
related_id=tool_file_id,
filename=filename,
diff --git a/api/core/workflow/nodes/variable_assigner/__init__.py b/api/core/workflow/nodes/variable_assigner/__init__.py
new file mode 100644
index 0000000000..552cc367f2
--- /dev/null
+++ b/api/core/workflow/nodes/variable_assigner/__init__.py
@@ -0,0 +1,109 @@
+from collections.abc import Sequence
+from enum import Enum
+from typing import Optional, cast
+
+from sqlalchemy import select
+from sqlalchemy.orm import Session
+
+from core.app.segments import SegmentType, Variable, factory
+from core.workflow.entities.base_node_data_entities import BaseNodeData
+from core.workflow.entities.node_entities import NodeRunResult, NodeType
+from core.workflow.entities.variable_pool import VariablePool
+from core.workflow.nodes.base_node import BaseNode
+from extensions.ext_database import db
+from models import ConversationVariable, WorkflowNodeExecutionStatus
+
+
+class VariableAssignerNodeError(Exception):
+ pass
+
+
+class WriteMode(str, Enum):
+ OVER_WRITE = 'over-write'
+ APPEND = 'append'
+ CLEAR = 'clear'
+
+
+class VariableAssignerData(BaseNodeData):
+ title: str = 'Variable Assigner'
+ desc: Optional[str] = 'Assign a value to a variable'
+ assigned_variable_selector: Sequence[str]
+ write_mode: WriteMode
+ input_variable_selector: Sequence[str]
+
+
+class VariableAssignerNode(BaseNode):
+ _node_data_cls: type[BaseNodeData] = VariableAssignerData
+ _node_type: NodeType = NodeType.CONVERSATION_VARIABLE_ASSIGNER
+
+ def _run(self, variable_pool: VariablePool) -> NodeRunResult:
+ data = cast(VariableAssignerData, self.node_data)
+
+ # Should be String, Number, Object, ArrayString, ArrayNumber, ArrayObject
+ original_variable = variable_pool.get(data.assigned_variable_selector)
+ if not isinstance(original_variable, Variable):
+ raise VariableAssignerNodeError('assigned variable not found')
+
+ match data.write_mode:
+ case WriteMode.OVER_WRITE:
+ income_value = variable_pool.get(data.input_variable_selector)
+ if not income_value:
+ raise VariableAssignerNodeError('input value not found')
+ updated_variable = original_variable.model_copy(update={'value': income_value.value})
+
+ case WriteMode.APPEND:
+ income_value = variable_pool.get(data.input_variable_selector)
+ if not income_value:
+ raise VariableAssignerNodeError('input value not found')
+ updated_value = original_variable.value + [income_value.value]
+ updated_variable = original_variable.model_copy(update={'value': updated_value})
+
+ case WriteMode.CLEAR:
+ income_value = get_zero_value(original_variable.value_type)
+ updated_variable = original_variable.model_copy(update={'value': income_value.to_object()})
+
+ case _:
+ raise VariableAssignerNodeError(f'unsupported write mode: {data.write_mode}')
+
+ # Over write the variable.
+ variable_pool.add(data.assigned_variable_selector, updated_variable)
+
+ # Update conversation variable.
+ # TODO: Find a better way to use the database.
+ conversation_id = variable_pool.get(['sys', 'conversation_id'])
+ if not conversation_id:
+ raise VariableAssignerNodeError('conversation_id not found')
+ update_conversation_variable(conversation_id=conversation_id.text, variable=updated_variable)
+
+ return NodeRunResult(
+ status=WorkflowNodeExecutionStatus.SUCCEEDED,
+ inputs={
+ 'value': income_value.to_object(),
+ },
+ )
+
+
+def update_conversation_variable(conversation_id: str, variable: Variable):
+ stmt = select(ConversationVariable).where(
+ ConversationVariable.id == variable.id, ConversationVariable.conversation_id == conversation_id
+ )
+ with Session(db.engine) as session:
+ row = session.scalar(stmt)
+ if not row:
+ raise VariableAssignerNodeError('conversation variable not found in the database')
+ row.data = variable.model_dump_json()
+ session.commit()
+
+
+def get_zero_value(t: SegmentType):
+ match t:
+ case SegmentType.ARRAY_OBJECT | SegmentType.ARRAY_STRING | SegmentType.ARRAY_NUMBER:
+ return factory.build_segment([])
+ case SegmentType.OBJECT:
+ return factory.build_segment({})
+ case SegmentType.STRING:
+ return factory.build_segment('')
+ case SegmentType.NUMBER:
+ return factory.build_segment(0)
+ case _:
+ raise VariableAssignerNodeError(f'unsupported variable type: {t}')
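The `get_zero_value` helper above maps each `SegmentType` to its empty value for the `CLEAR` write mode. A stdlib-only sketch of that mapping, with plain string tags standing in for the `SegmentType` enum and literals in place of `factory.build_segment`:

```python
def zero_value(value_type: str):
    # Hypothetical stand-in for get_zero_value: string tags replace
    # SegmentType members, raw literals replace segment objects.
    if value_type in ('array[object]', 'array[string]', 'array[number]'):
        return []
    if value_type == 'object':
        return {}
    if value_type == 'string':
        return ''
    if value_type == 'number':
        return 0
    raise ValueError(f'unsupported variable type: {value_type}')
```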
diff --git a/api/core/workflow/workflow_engine_manager.py b/api/core/workflow/workflow_engine_manager.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/docker/entrypoint.sh b/api/docker/entrypoint.sh
index a53d84c6e9..9cf5c505d1 100755
--- a/api/docker/entrypoint.sh
+++ b/api/docker/entrypoint.sh
@@ -8,8 +8,21 @@ if [[ "${MIGRATION_ENABLED}" == "true" ]]; then
fi
if [[ "${MODE}" == "worker" ]]; then
- exec celery -A app.celery worker -P ${CELERY_WORKER_CLASS:-gevent} -c ${CELERY_WORKER_AMOUNT:-1} --loglevel INFO \
+
+ # Get the number of available CPU cores
+ if [ "${CELERY_AUTO_SCALE,,}" = "true" ]; then
+ # Set MAX_WORKERS to the number of available cores if not specified
+ AVAILABLE_CORES=$(nproc)
+ MAX_WORKERS=${CELERY_MAX_WORKERS:-$AVAILABLE_CORES}
+ MIN_WORKERS=${CELERY_MIN_WORKERS:-1}
+ CONCURRENCY_OPTION="--autoscale=${MAX_WORKERS},${MIN_WORKERS}"
+ else
+ CONCURRENCY_OPTION="-c ${CELERY_WORKER_AMOUNT:-1}"
+ fi
+
+ exec celery -A app.celery worker -P ${CELERY_WORKER_CLASS:-gevent} $CONCURRENCY_OPTION --loglevel INFO \
-Q ${CELERY_QUEUES:-dataset,generation,mail,ops_trace,app_deletion}
+
elif [[ "${MODE}" == "beat" ]]; then
exec celery -A app.celery beat --loglevel INFO
else
diff --git a/api/extensions/ext_database.py b/api/extensions/ext_database.py
index 9121c6ead9..c248e173a2 100644
--- a/api/extensions/ext_database.py
+++ b/api/extensions/ext_database.py
@@ -1,6 +1,16 @@
from flask_sqlalchemy import SQLAlchemy
+from sqlalchemy import MetaData
-db = SQLAlchemy()
+POSTGRES_INDEXES_NAMING_CONVENTION = {
+ 'ix': '%(column_0_label)s_idx',
+ 'uq': '%(table_name)s_%(column_0_name)s_key',
+ 'ck': '%(table_name)s_%(constraint_name)s_check',
+ 'fk': '%(table_name)s_%(column_0_name)s_fkey',
+ 'pk': '%(table_name)s_pkey',
+}
+
+metadata = MetaData(naming_convention=POSTGRES_INDEXES_NAMING_CONVENTION)
+db = SQLAlchemy(metadata=metadata)
def init_app(app):
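The naming-convention dict added above uses plain `%`-style templates that SQLAlchemy fills in when generating constraint and index names. Expanding one template by hand shows the deterministic name it produces (example table/column names are illustrative):

```python
# Same templates as the patch; SQLAlchemy substitutes these when emitting DDL.
POSTGRES_INDEXES_NAMING_CONVENTION = {
    'uq': '%(table_name)s_%(column_0_name)s_key',
    'pk': '%(table_name)s_pkey',
}

uq_name = POSTGRES_INDEXES_NAMING_CONVENTION['uq'] % {
    'table_name': 'workflows', 'column_0_name': 'app_id',
}
pk_name = POSTGRES_INDEXES_NAMING_CONVENTION['pk'] % {
    'table_name': 'workflows',
}
```

This matches PostgreSQL's own default naming, so migrations can reference constraints by predictable names across dialects.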
diff --git a/api/fields/conversation_variable_fields.py b/api/fields/conversation_variable_fields.py
new file mode 100644
index 0000000000..782a848c1a
--- /dev/null
+++ b/api/fields/conversation_variable_fields.py
@@ -0,0 +1,21 @@
+from flask_restful import fields
+
+from libs.helper import TimestampField
+
+conversation_variable_fields = {
+ 'id': fields.String,
+ 'name': fields.String,
+ 'value_type': fields.String(attribute='value_type.value'),
+ 'value': fields.String,
+ 'description': fields.String,
+ 'created_at': TimestampField,
+ 'updated_at': TimestampField,
+}
+
+paginated_conversation_variable_fields = {
+ 'page': fields.Integer,
+ 'limit': fields.Integer,
+ 'total': fields.Integer,
+ 'has_more': fields.Boolean,
+ 'data': fields.List(fields.Nested(conversation_variable_fields), attribute='data'),
+}
diff --git a/api/fields/dataset_fields.py b/api/fields/dataset_fields.py
index 120b66a92d..a9f79b5c67 100644
--- a/api/fields/dataset_fields.py
+++ b/api/fields/dataset_fields.py
@@ -29,7 +29,6 @@ vector_setting_fields = {
}
weighted_score_fields = {
- 'weight_type': fields.String,
'keyword_setting': fields.Nested(keyword_setting_fields),
'vector_setting': fields.Nested(vector_setting_fields),
}
diff --git a/api/fields/workflow_fields.py b/api/fields/workflow_fields.py
index ff33a97ff2..c1dd0e184a 100644
--- a/api/fields/workflow_fields.py
+++ b/api/fields/workflow_fields.py
@@ -32,11 +32,12 @@ class EnvironmentVariableField(fields.Raw):
return value
-environment_variable_fields = {
+conversation_variable_fields = {
'id': fields.String,
'name': fields.String,
- 'value': fields.Raw,
'value_type': fields.String(attribute='value_type.value'),
+ 'value': fields.Raw,
+ 'description': fields.String,
}
workflow_fields = {
@@ -50,4 +51,5 @@ workflow_fields = {
'updated_at': TimestampField,
'tool_published': fields.Boolean,
'environment_variables': fields.List(EnvironmentVariableField()),
+ 'conversation_variables': fields.List(fields.Nested(conversation_variable_fields)),
}
diff --git a/api/libs/oauth_data_source.py b/api/libs/oauth_data_source.py
index a5c7814a54..358858ceb1 100644
--- a/api/libs/oauth_data_source.py
+++ b/api/libs/oauth_data_source.py
@@ -154,11 +154,11 @@ class NotionOAuth(OAuthDataSource):
for page_result in page_results:
page_id = page_result['id']
page_name = 'Untitled'
- for key in ['Name', 'title', 'Title', 'Page']:
- if key in page_result['properties']:
- if len(page_result['properties'][key].get('title', [])) > 0:
- page_name = page_result['properties'][key]['title'][0]['plain_text']
- break
+ for key in page_result['properties']:
+ if 'title' in page_result['properties'][key] and page_result['properties'][key]['title']:
+ title_list = page_result['properties'][key]['title']
+ if len(title_list) > 0 and 'plain_text' in title_list[0]:
+ page_name = title_list[0]['plain_text']
page_icon = page_result['icon']
if page_icon:
icon_type = page_icon['type']
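The hunk above replaces the fixed key list (`'Name'`, `'title'`, ...) with a scan of every property for a non-empty `title` array. A standalone sketch of the new lookup (hypothetical function name; note the patched loop no longer `break`s, so a later matching property wins):

```python
def extract_page_name(properties: dict, default: str = 'Untitled') -> str:
    # Scan all properties for a title-type entry, as the patch does,
    # instead of probing a hard-coded list of property names.
    page_name = default
    for key in properties:
        title_list = properties[key].get('title') or []
        if title_list and 'plain_text' in title_list[0]:
            page_name = title_list[0]['plain_text']
    return page_name
```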
diff --git a/api/migrations/README b/api/migrations/README
index 0e04844159..220678df7a 100644
--- a/api/migrations/README
+++ b/api/migrations/README
@@ -1 +1,2 @@
Single-database configuration for Flask.
+
diff --git a/api/migrations/alembic.ini b/api/migrations/alembic.ini
index ec9d45c26a..aa21ecabcd 100644
--- a/api/migrations/alembic.ini
+++ b/api/migrations/alembic.ini
@@ -3,6 +3,7 @@
[alembic]
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
+file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
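The `file_template` added above is expanded with `%`-formatting (Alembic doubles the `%` in the ini file). Expanding it for revision `1787fbae959a`, created 2024-08-09 08:01, reproduces the migration filename that appears later in this diff:

```python
# Single-% version of the ini template (alembic.ini escapes % as %%).
template = '%(year)d_%(month).2d_%(day).2d_%(hour).2d%(minute).2d-%(rev)s_%(slug)s'
name = template % {
    'year': 2024, 'month': 8, 'day': 9, 'hour': 8, 'minute': 1,
    'rev': '1787fbae959a', 'slug': 'update_tools_original_url_length',
}
```

The date prefix makes migration files sort chronologically in directory listings.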
diff --git a/api/migrations/versions/2024_08_09_0801-1787fbae959a_update_tools_original_url_length.py b/api/migrations/versions/2024_08_09_0801-1787fbae959a_update_tools_original_url_length.py
new file mode 100644
index 0000000000..db966252f1
--- /dev/null
+++ b/api/migrations/versions/2024_08_09_0801-1787fbae959a_update_tools_original_url_length.py
@@ -0,0 +1,39 @@
+"""update tools original_url length
+
+Revision ID: 1787fbae959a
+Revises: eeb2e349e6ac
+Create Date: 2024-08-09 08:01:12.817620
+
+"""
+import sqlalchemy as sa
+from alembic import op
+
+import models as models
+
+# revision identifiers, used by Alembic.
+revision = '1787fbae959a'
+down_revision = 'eeb2e349e6ac'
+branch_labels = None
+depends_on = None
+
+
+def upgrade():
+ # ### commands auto generated by Alembic - please adjust! ###
+ with op.batch_alter_table('tool_files', schema=None) as batch_op:
+ batch_op.alter_column('original_url',
+ existing_type=sa.VARCHAR(length=255),
+ type_=sa.String(length=2048),
+ existing_nullable=True)
+
+ # ### end Alembic commands ###
+
+
+def downgrade():
+ # ### commands auto generated by Alembic - please adjust! ###
+ with op.batch_alter_table('tool_files', schema=None) as batch_op:
+ batch_op.alter_column('original_url',
+ existing_type=sa.String(length=2048),
+ type_=sa.VARCHAR(length=255),
+ existing_nullable=True)
+
+ # ### end Alembic commands ###
diff --git a/api/migrations/versions/2024_08_13_0633-63a83fcf12ba_support_conversation_variables.py b/api/migrations/versions/2024_08_13_0633-63a83fcf12ba_support_conversation_variables.py
new file mode 100644
index 0000000000..16e1efd4ef
--- /dev/null
+++ b/api/migrations/versions/2024_08_13_0633-63a83fcf12ba_support_conversation_variables.py
@@ -0,0 +1,51 @@
+"""support conversation variables
+
+Revision ID: 63a83fcf12ba
+Revises: 1787fbae959a
+Create Date: 2024-08-13 06:33:07.950379
+
+"""
+import sqlalchemy as sa
+from alembic import op
+
+import models as models
+
+# revision identifiers, used by Alembic.
+revision = '63a83fcf12ba'
+down_revision = '1787fbae959a'
+branch_labels = None
+depends_on = None
+
+
+def upgrade():
+ # ### commands auto generated by Alembic - please adjust! ###
+ op.create_table('workflow__conversation_variables',
+ sa.Column('id', models.types.StringUUID(), nullable=False),
+ sa.Column('conversation_id', models.types.StringUUID(), nullable=False),
+ sa.Column('app_id', models.types.StringUUID(), nullable=False),
+ sa.Column('data', sa.Text(), nullable=False),
+ sa.Column('created_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP(0)'), nullable=False),
+ sa.Column('updated_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False),
+ sa.PrimaryKeyConstraint('id', 'conversation_id', name=op.f('workflow__conversation_variables_pkey'))
+ )
+ with op.batch_alter_table('workflow__conversation_variables', schema=None) as batch_op:
+ batch_op.create_index(batch_op.f('workflow__conversation_variables_app_id_idx'), ['app_id'], unique=False)
+ batch_op.create_index(batch_op.f('workflow__conversation_variables_created_at_idx'), ['created_at'], unique=False)
+
+ with op.batch_alter_table('workflows', schema=None) as batch_op:
+ batch_op.add_column(sa.Column('conversation_variables', sa.Text(), server_default='{}', nullable=False))
+
+ # ### end Alembic commands ###
+
+
+def downgrade():
+ # ### commands auto generated by Alembic - please adjust! ###
+ with op.batch_alter_table('workflows', schema=None) as batch_op:
+ batch_op.drop_column('conversation_variables')
+
+ with op.batch_alter_table('workflow__conversation_variables', schema=None) as batch_op:
+ batch_op.drop_index(batch_op.f('workflow__conversation_variables_created_at_idx'))
+ batch_op.drop_index(batch_op.f('workflow__conversation_variables_app_id_idx'))
+
+ op.drop_table('workflow__conversation_variables')
+ # ### end Alembic commands ###
diff --git a/api/models/__init__.py b/api/models/__init__.py
index 3b832cd22d..f831356841 100644
--- a/api/models/__init__.py
+++ b/api/models/__init__.py
@@ -1,15 +1,19 @@
from enum import Enum
-from sqlalchemy import CHAR, TypeDecorator
-from sqlalchemy.dialects.postgresql import UUID
+from .model import AppMode
+from .types import StringUUID
+from .workflow import ConversationVariable, WorkflowNodeExecutionStatus
+
+__all__ = ['ConversationVariable', 'StringUUID', 'AppMode', 'WorkflowNodeExecutionStatus']
class CreatedByRole(Enum):
"""
Enum class for createdByRole
"""
- ACCOUNT = "account"
- END_USER = "end_user"
+
+ ACCOUNT = 'account'
+ END_USER = 'end_user'
@classmethod
def value_of(cls, value: str) -> 'CreatedByRole':
@@ -23,49 +27,3 @@ class CreatedByRole(Enum):
if role.value == value:
return role
raise ValueError(f'invalid createdByRole value {value}')
-
-
-class CreatedFrom(Enum):
- """
- Enum class for createdFrom
- """
- SERVICE_API = "service-api"
- WEB_APP = "web-app"
- EXPLORE = "explore"
-
- @classmethod
- def value_of(cls, value: str) -> 'CreatedFrom':
- """
- Get value of given mode.
-
- :param value: mode value
- :return: mode
- """
- for role in cls:
- if role.value == value:
- return role
- raise ValueError(f'invalid createdFrom value {value}')
-
-
-class StringUUID(TypeDecorator):
- impl = CHAR
- cache_ok = True
-
- def process_bind_param(self, value, dialect):
- if value is None:
- return value
- elif dialect.name == 'postgresql':
- return str(value)
- else:
- return value.hex
-
- def load_dialect_impl(self, dialect):
- if dialect.name == 'postgresql':
- return dialect.type_descriptor(UUID())
- else:
- return dialect.type_descriptor(CHAR(36))
-
- def process_result_value(self, value, dialect):
- if value is None:
- return value
- return str(value)
diff --git a/api/models/account.py b/api/models/account.py
index d36b2b9fda..67d940b7b7 100644
--- a/api/models/account.py
+++ b/api/models/account.py
@@ -4,7 +4,8 @@ import json
from flask_login import UserMixin
from extensions.ext_database import db
-from models import StringUUID
+
+from .types import StringUUID
class AccountStatus(str, enum.Enum):
diff --git a/api/models/api_based_extension.py b/api/models/api_based_extension.py
index d1f9cd78a7..7f69323628 100644
--- a/api/models/api_based_extension.py
+++ b/api/models/api_based_extension.py
@@ -1,7 +1,8 @@
import enum
from extensions.ext_database import db
-from models import StringUUID
+
+from .types import StringUUID
class APIBasedExtensionPoint(enum.Enum):
diff --git a/api/models/dataset.py b/api/models/dataset.py
index 40f9f4cf83..0d48177eb6 100644
--- a/api/models/dataset.py
+++ b/api/models/dataset.py
@@ -16,9 +16,10 @@ from configs import dify_config
from core.rag.retrieval.retrival_methods import RetrievalMethod
from extensions.ext_database import db
from extensions.ext_storage import storage
-from models import StringUUID
-from models.account import Account
-from models.model import App, Tag, TagBinding, UploadFile
+
+from .account import Account
+from .model import App, Tag, TagBinding, UploadFile
+from .types import StringUUID
class Dataset(db.Model):
diff --git a/api/models/model.py b/api/models/model.py
index a6f517ea6b..9909b10dc0 100644
--- a/api/models/model.py
+++ b/api/models/model.py
@@ -14,8 +14,8 @@ from core.file.upload_file_parser import UploadFileParser
from extensions.ext_database import db
from libs.helper import generate_string
-from . import StringUUID
from .account import Account, Tenant
+from .types import StringUUID
class DifySetup(db.Model):
@@ -1116,7 +1116,7 @@ class Site(db.Model):
@property
def app_base_url(self):
return (
- dify_config.APP_WEB_URL if dify_config.APP_WEB_URL else request.host_url.rstrip('/'))
+ dify_config.APP_WEB_URL if dify_config.APP_WEB_URL else request.url_root.rstrip('/'))
class ApiToken(db.Model):
diff --git a/api/models/provider.py b/api/models/provider.py
index 4c14c33f09..5d92ee6eb6 100644
--- a/api/models/provider.py
+++ b/api/models/provider.py
@@ -1,7 +1,8 @@
from enum import Enum
from extensions.ext_database import db
-from models import StringUUID
+
+from .types import StringUUID
class ProviderType(Enum):
diff --git a/api/models/source.py b/api/models/source.py
index 265e68f014..adc00028be 100644
--- a/api/models/source.py
+++ b/api/models/source.py
@@ -3,7 +3,8 @@ import json
from sqlalchemy.dialects.postgresql import JSONB
from extensions.ext_database import db
-from models import StringUUID
+
+from .types import StringUUID
class DataSourceOauthBinding(db.Model):
diff --git a/api/models/tool.py b/api/models/tool.py
index f322944f5f..79a70c6b1f 100644
--- a/api/models/tool.py
+++ b/api/models/tool.py
@@ -2,7 +2,8 @@ import json
from enum import Enum
from extensions.ext_database import db
-from models import StringUUID
+
+from .types import StringUUID
class ToolProviderName(Enum):
diff --git a/api/models/tools.py b/api/models/tools.py
index 49212916ec..069dc5bad0 100644
--- a/api/models/tools.py
+++ b/api/models/tools.py
@@ -6,8 +6,9 @@ from core.tools.entities.common_entities import I18nObject
from core.tools.entities.tool_bundle import ApiToolBundle
from core.tools.entities.tool_entities import ApiProviderSchemaType, WorkflowToolParameterConfiguration
from extensions.ext_database import db
-from models import StringUUID
-from models.model import Account, App, Tenant
+
+from .model import Account, App, Tenant
+from .types import StringUUID
class BuiltinToolProvider(db.Model):
@@ -299,4 +300,4 @@ class ToolFile(db.Model):
# mime type
mimetype = db.Column(db.String(255), nullable=False)
# original url
- original_url = db.Column(db.String(255), nullable=True)
\ No newline at end of file
+ original_url = db.Column(db.String(2048), nullable=True)
\ No newline at end of file
diff --git a/api/models/types.py b/api/models/types.py
new file mode 100644
index 0000000000..1614ec2018
--- /dev/null
+++ b/api/models/types.py
@@ -0,0 +1,26 @@
+from sqlalchemy import CHAR, TypeDecorator
+from sqlalchemy.dialects.postgresql import UUID
+
+
+class StringUUID(TypeDecorator):
+ impl = CHAR
+ cache_ok = True
+
+ def process_bind_param(self, value, dialect):
+ if value is None:
+ return value
+ elif dialect.name == 'postgresql':
+ return str(value)
+ else:
+ return value.hex
+
+ def load_dialect_impl(self, dialect):
+ if dialect.name == 'postgresql':
+ return dialect.type_descriptor(UUID())
+ else:
+ return dialect.type_descriptor(CHAR(36))
+
+ def process_result_value(self, value, dialect):
+ if value is None:
+ return value
+ return str(value)
\ No newline at end of file
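The relocated `StringUUID` type (moved from `models/__init__.py` into `models/types.py`) binds UUIDs differently per dialect: PostgreSQL gets the canonical hyphenated string, other dialects get the 32-character hex digest stored in `CHAR(36)`. A stdlib sketch of the bind logic (hypothetical function, not the SQLAlchemy `TypeDecorator` itself):

```python
import uuid

def bind_uuid(value, dialect_name):
    # Mirrors StringUUID.process_bind_param from the new models/types.py.
    if value is None:
        return None
    if dialect_name == 'postgresql':
        return str(value)   # canonical 36-char hyphenated form
    return value.hex        # 32-char hex for other dialects

u = uuid.UUID('12345678-1234-5678-1234-567812345678')
```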
diff --git a/api/models/web.py b/api/models/web.py
index 6fd27206a9..0e901d5f84 100644
--- a/api/models/web.py
+++ b/api/models/web.py
@@ -1,7 +1,8 @@
from extensions.ext_database import db
-from models import StringUUID
-from models.model import Message
+
+from .model import Message
+from .types import StringUUID
class SavedMessage(db.Model):
diff --git a/api/models/workflow.py b/api/models/workflow.py
index 805c637994..7f4b56daff 100644
--- a/api/models/workflow.py
+++ b/api/models/workflow.py
@@ -3,18 +3,18 @@ from collections.abc import Mapping, Sequence
from enum import Enum
from typing import Any, Optional, Union
+from sqlalchemy import func
+from sqlalchemy.orm import Mapped
+
import contexts
from constants import HIDDEN_VALUE
-from core.app.segments import (
- SecretVariable,
- Variable,
- factory,
-)
+from core.app.segments import SecretVariable, Variable, factory
from core.helper import encrypter
from extensions.ext_database import db
from libs import helper
-from models import StringUUID
-from models.account import Account
+
+from .account import Account
+from .types import StringUUID
class CreatedByRole(Enum):
@@ -122,6 +122,7 @@ class Workflow(db.Model):
updated_by = db.Column(StringUUID)
updated_at = db.Column(db.DateTime)
_environment_variables = db.Column('environment_variables', db.Text, nullable=False, server_default='{}')
+ _conversation_variables = db.Column('conversation_variables', db.Text, nullable=False, server_default='{}')
@property
def created_by_account(self):
@@ -249,9 +250,27 @@ class Workflow(db.Model):
'graph': self.graph_dict,
'features': self.features_dict,
'environment_variables': [var.model_dump(mode='json') for var in environment_variables],
+ 'conversation_variables': [var.model_dump(mode='json') for var in self.conversation_variables],
}
return result
+ @property
+ def conversation_variables(self) -> Sequence[Variable]:
+ # TODO: find some way to init `self._conversation_variables` when instance created.
+ if self._conversation_variables is None:
+ self._conversation_variables = '{}'
+
+ variables_dict: dict[str, Any] = json.loads(self._conversation_variables)
+ results = [factory.build_variable_from_mapping(v) for v in variables_dict.values()]
+ return results
+
+ @conversation_variables.setter
+ def conversation_variables(self, value: Sequence[Variable]) -> None:
+ self._conversation_variables = json.dumps(
+ {var.name: var.model_dump() for var in value},
+ ensure_ascii=False,
+ )
+
class WorkflowRunTriggeredFrom(Enum):
"""
@@ -705,3 +724,34 @@ class WorkflowAppLog(db.Model):
created_by_role = CreatedByRole.value_of(self.created_by_role)
return db.session.get(EndUser, self.created_by) \
if created_by_role == CreatedByRole.END_USER else None
+
+
+class ConversationVariable(db.Model):
+ __tablename__ = 'workflow__conversation_variables'
+
+ id: Mapped[str] = db.Column(StringUUID, primary_key=True)
+ conversation_id: Mapped[str] = db.Column(StringUUID, nullable=False, primary_key=True)
+ app_id: Mapped[str] = db.Column(StringUUID, nullable=False, index=True)
+ data = db.Column(db.Text, nullable=False)
+ created_at = db.Column(db.DateTime, nullable=False, index=True, server_default=db.text('CURRENT_TIMESTAMP(0)'))
+ updated_at = db.Column(db.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp())
+
+ def __init__(self, *, id: str, app_id: str, conversation_id: str, data: str) -> None:
+ self.id = id
+ self.app_id = app_id
+ self.conversation_id = conversation_id
+ self.data = data
+
+ @classmethod
+ def from_variable(cls, *, app_id: str, conversation_id: str, variable: Variable) -> 'ConversationVariable':
+ obj = cls(
+ id=variable.id,
+ app_id=app_id,
+ conversation_id=conversation_id,
+ data=variable.model_dump_json(),
+ )
+ return obj
+
+ def to_variable(self) -> Variable:
+ mapping = json.loads(self.data)
+ return factory.build_variable_from_mapping(mapping)
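`ConversationVariable.from_variable` serializes the variable to JSON text in `data`, and `to_variable` rebuilds it via the segment factory. A stdlib stand-in for that round trip, with a plain dict in place of the pydantic `Variable` model:

```python
import json

def variable_round_trip(variable: dict) -> dict:
    # from_variable: persist as JSON text, as model_dump_json does.
    data = json.dumps(variable, ensure_ascii=False)
    # to_variable: parse the stored text back into a mapping.
    return json.loads(data)

v = {'id': 'abc', 'name': 'counter', 'value_type': 'number', 'value': 0}
```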
diff --git a/api/poetry.lock b/api/poetry.lock
index abde108a7a..89d017f656 100644
--- a/api/poetry.lock
+++ b/api/poetry.lock
@@ -1,91 +1,103 @@
# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
+[[package]]
+name = "aiohappyeyeballs"
+version = "2.3.4"
+description = "Happy Eyeballs for asyncio"
+optional = false
+python-versions = "<4.0,>=3.8"
+files = [
+ {file = "aiohappyeyeballs-2.3.4-py3-none-any.whl", hash = "sha256:40a16ceffcf1fc9e142fd488123b2e218abc4188cf12ac20c67200e1579baa42"},
+ {file = "aiohappyeyeballs-2.3.4.tar.gz", hash = "sha256:7e1ae8399c320a8adec76f6c919ed5ceae6edd4c3672f4d9eae2b27e37c80ff6"},
+]
+
[[package]]
name = "aiohttp"
-version = "3.9.5"
+version = "3.10.1"
description = "Async http client/server framework (asyncio)"
optional = false
python-versions = ">=3.8"
files = [
- {file = "aiohttp-3.9.5-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:fcde4c397f673fdec23e6b05ebf8d4751314fa7c24f93334bf1f1364c1c69ac7"},
- {file = "aiohttp-3.9.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5d6b3f1fabe465e819aed2c421a6743d8debbde79b6a8600739300630a01bf2c"},
- {file = "aiohttp-3.9.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6ae79c1bc12c34082d92bf9422764f799aee4746fd7a392db46b7fd357d4a17a"},
- {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d3ebb9e1316ec74277d19c5f482f98cc65a73ccd5430540d6d11682cd857430"},
- {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84dabd95154f43a2ea80deffec9cb44d2e301e38a0c9d331cc4aa0166fe28ae3"},
- {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c8a02fbeca6f63cb1f0475c799679057fc9268b77075ab7cf3f1c600e81dd46b"},
- {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c26959ca7b75ff768e2776d8055bf9582a6267e24556bb7f7bd29e677932be72"},
- {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:714d4e5231fed4ba2762ed489b4aec07b2b9953cf4ee31e9871caac895a839c0"},
- {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e7a6a8354f1b62e15d48e04350f13e726fa08b62c3d7b8401c0a1314f02e3558"},
- {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:c413016880e03e69d166efb5a1a95d40f83d5a3a648d16486592c49ffb76d0db"},
- {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:ff84aeb864e0fac81f676be9f4685f0527b660f1efdc40dcede3c251ef1e867f"},
- {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:ad7f2919d7dac062f24d6f5fe95d401597fbb015a25771f85e692d043c9d7832"},
- {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:702e2c7c187c1a498a4e2b03155d52658fdd6fda882d3d7fbb891a5cf108bb10"},
- {file = "aiohttp-3.9.5-cp310-cp310-win32.whl", hash = "sha256:67c3119f5ddc7261d47163ed86d760ddf0e625cd6246b4ed852e82159617b5fb"},
- {file = "aiohttp-3.9.5-cp310-cp310-win_amd64.whl", hash = "sha256:471f0ef53ccedec9995287f02caf0c068732f026455f07db3f01a46e49d76bbb"},
- {file = "aiohttp-3.9.5-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ae53e33ee7476dd3d1132f932eeb39bf6125083820049d06edcdca4381f342"},
- {file = "aiohttp-3.9.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c088c4d70d21f8ca5c0b8b5403fe84a7bc8e024161febdd4ef04575ef35d474d"},
- {file = "aiohttp-3.9.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:639d0042b7670222f33b0028de6b4e2fad6451462ce7df2af8aee37dcac55424"},
- {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f26383adb94da5e7fb388d441bf09c61e5e35f455a3217bfd790c6b6bc64b2ee"},
- {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:66331d00fb28dc90aa606d9a54304af76b335ae204d1836f65797d6fe27f1ca2"},
- {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4ff550491f5492ab5ed3533e76b8567f4b37bd2995e780a1f46bca2024223233"},
- {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f22eb3a6c1080d862befa0a89c380b4dafce29dc6cd56083f630073d102eb595"},
- {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a81b1143d42b66ffc40a441379387076243ef7b51019204fd3ec36b9f69e77d6"},
- {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f64fd07515dad67f24b6ea4a66ae2876c01031de91c93075b8093f07c0a2d93d"},
- {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:93e22add827447d2e26d67c9ac0161756007f152fdc5210277d00a85f6c92323"},
- {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:55b39c8684a46e56ef8c8d24faf02de4a2b2ac60d26cee93bc595651ff545de9"},
- {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4715a9b778f4293b9f8ae7a0a7cef9829f02ff8d6277a39d7f40565c737d3771"},
- {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:afc52b8d969eff14e069a710057d15ab9ac17cd4b6753042c407dcea0e40bf75"},
- {file = "aiohttp-3.9.5-cp311-cp311-win32.whl", hash = "sha256:b3df71da99c98534be076196791adca8819761f0bf6e08e07fd7da25127150d6"},
- {file = "aiohttp-3.9.5-cp311-cp311-win_amd64.whl", hash = "sha256:88e311d98cc0bf45b62fc46c66753a83445f5ab20038bcc1b8a1cc05666f428a"},
- {file = "aiohttp-3.9.5-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:c7a4b7a6cf5b6eb11e109a9755fd4fda7d57395f8c575e166d363b9fc3ec4678"},
- {file = "aiohttp-3.9.5-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:0a158704edf0abcac8ac371fbb54044f3270bdbc93e254a82b6c82be1ef08f3c"},
- {file = "aiohttp-3.9.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d153f652a687a8e95ad367a86a61e8d53d528b0530ef382ec5aaf533140ed00f"},
- {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:82a6a97d9771cb48ae16979c3a3a9a18b600a8505b1115cfe354dfb2054468b4"},
- {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:60cdbd56f4cad9f69c35eaac0fbbdf1f77b0ff9456cebd4902f3dd1cf096464c"},
- {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8676e8fd73141ded15ea586de0b7cda1542960a7b9ad89b2b06428e97125d4fa"},
- {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:da00da442a0e31f1c69d26d224e1efd3a1ca5bcbf210978a2ca7426dfcae9f58"},
- {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:18f634d540dd099c262e9f887c8bbacc959847cfe5da7a0e2e1cf3f14dbf2daf"},
- {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:320e8618eda64e19d11bdb3bd04ccc0a816c17eaecb7e4945d01deee2a22f95f"},
- {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:2faa61a904b83142747fc6a6d7ad8fccff898c849123030f8e75d5d967fd4a81"},
- {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:8c64a6dc3fe5db7b1b4d2b5cb84c4f677768bdc340611eca673afb7cf416ef5a"},
- {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:393c7aba2b55559ef7ab791c94b44f7482a07bf7640d17b341b79081f5e5cd1a"},
- {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:c671dc117c2c21a1ca10c116cfcd6e3e44da7fcde37bf83b2be485ab377b25da"},
- {file = "aiohttp-3.9.5-cp312-cp312-win32.whl", hash = "sha256:5a7ee16aab26e76add4afc45e8f8206c95d1d75540f1039b84a03c3b3800dd59"},
- {file = "aiohttp-3.9.5-cp312-cp312-win_amd64.whl", hash = "sha256:5ca51eadbd67045396bc92a4345d1790b7301c14d1848feaac1d6a6c9289e888"},
- {file = "aiohttp-3.9.5-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:694d828b5c41255e54bc2dddb51a9f5150b4eefa9886e38b52605a05d96566e8"},
- {file = "aiohttp-3.9.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0605cc2c0088fcaae79f01c913a38611ad09ba68ff482402d3410bf59039bfb8"},
- {file = "aiohttp-3.9.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4558e5012ee03d2638c681e156461d37b7a113fe13970d438d95d10173d25f78"},
- {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dbc053ac75ccc63dc3a3cc547b98c7258ec35a215a92bd9f983e0aac95d3d5b"},
- {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4109adee842b90671f1b689901b948f347325045c15f46b39797ae1bf17019de"},
- {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a6ea1a5b409a85477fd8e5ee6ad8f0e40bf2844c270955e09360418cfd09abac"},
- {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3c2890ca8c59ee683fd09adf32321a40fe1cf164e3387799efb2acebf090c11"},
- {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3916c8692dbd9d55c523374a3b8213e628424d19116ac4308e434dbf6d95bbdd"},
- {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8d1964eb7617907c792ca00b341b5ec3e01ae8c280825deadbbd678447b127e1"},
- {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:d5ab8e1f6bee051a4bf6195e38a5c13e5e161cb7bad83d8854524798bd9fcd6e"},
- {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:52c27110f3862a1afbcb2af4281fc9fdc40327fa286c4625dfee247c3ba90156"},
- {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:7f64cbd44443e80094309875d4f9c71d0401e966d191c3d469cde4642bc2e031"},
- {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:8b4f72fbb66279624bfe83fd5eb6aea0022dad8eec62b71e7bf63ee1caadeafe"},
- {file = "aiohttp-3.9.5-cp38-cp38-win32.whl", hash = "sha256:6380c039ec52866c06d69b5c7aad5478b24ed11696f0e72f6b807cfb261453da"},
- {file = "aiohttp-3.9.5-cp38-cp38-win_amd64.whl", hash = "sha256:da22dab31d7180f8c3ac7c7635f3bcd53808f374f6aa333fe0b0b9e14b01f91a"},
- {file = "aiohttp-3.9.5-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1732102949ff6087589408d76cd6dea656b93c896b011ecafff418c9661dc4ed"},
- {file = "aiohttp-3.9.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c6021d296318cb6f9414b48e6a439a7f5d1f665464da507e8ff640848ee2a58a"},
- {file = "aiohttp-3.9.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:239f975589a944eeb1bad26b8b140a59a3a320067fb3cd10b75c3092405a1372"},
- {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b7b30258348082826d274504fbc7c849959f1989d86c29bc355107accec6cfb"},
- {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cd2adf5c87ff6d8b277814a28a535b59e20bfea40a101db6b3bdca7e9926bc24"},
- {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e9a3d838441bebcf5cf442700e3963f58b5c33f015341f9ea86dcd7d503c07e2"},
- {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9e3a1ae66e3d0c17cf65c08968a5ee3180c5a95920ec2731f53343fac9bad106"},
- {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9c69e77370cce2d6df5d12b4e12bdcca60c47ba13d1cbbc8645dd005a20b738b"},
- {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0cbf56238f4bbf49dab8c2dc2e6b1b68502b1e88d335bea59b3f5b9f4c001475"},
- {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d1469f228cd9ffddd396d9948b8c9cd8022b6d1bf1e40c6f25b0fb90b4f893ed"},
- {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:45731330e754f5811c314901cebdf19dd776a44b31927fa4b4dbecab9e457b0c"},
- {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:3fcb4046d2904378e3aeea1df51f697b0467f2aac55d232c87ba162709478c46"},
- {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8cf142aa6c1a751fcb364158fd710b8a9be874b81889c2bd13aa8893197455e2"},
- {file = "aiohttp-3.9.5-cp39-cp39-win32.whl", hash = "sha256:7b179eea70833c8dee51ec42f3b4097bd6370892fa93f510f76762105568cf09"},
- {file = "aiohttp-3.9.5-cp39-cp39-win_amd64.whl", hash = "sha256:38d80498e2e169bc61418ff36170e0aad0cd268da8b38a17c4cf29d254a8b3f1"},
- {file = "aiohttp-3.9.5.tar.gz", hash = "sha256:edea7d15772ceeb29db4aff55e482d4bcfb6ae160ce144f2682de02f6d693551"},
+ {file = "aiohttp-3.10.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:47b4c2412960e64d97258f40616efddaebcb34ff664c8a972119ed38fac2a62c"},
+ {file = "aiohttp-3.10.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e7dbf637f87dd315fa1f36aaed8afa929ee2c607454fb7791e74c88a0d94da59"},
+ {file = "aiohttp-3.10.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c8fb76214b5b739ce59e2236a6489d9dc3483649cfd6f563dbf5d8e40dbdd57d"},
+ {file = "aiohttp-3.10.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c577cdcf8f92862363b3d598d971c6a84ed8f0bf824d4cc1ce70c2fb02acb4a"},
+ {file = "aiohttp-3.10.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:777e23609899cb230ad2642b4bdf1008890f84968be78de29099a8a86f10b261"},
+ {file = "aiohttp-3.10.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b07286a1090483799599a2f72f76ac396993da31f6e08efedb59f40876c144fa"},
+ {file = "aiohttp-3.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9db600a86414a9a653e3c1c7f6a2f6a1894ab8f83d11505247bd1b90ad57157"},
+ {file = "aiohttp-3.10.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:01c3f1eb280008e51965a8d160a108c333136f4a39d46f516c64d2aa2e6a53f2"},
+ {file = "aiohttp-3.10.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f5dd109a925fee4c9ac3f6a094900461a2712df41745f5d04782ebcbe6479ccb"},
+ {file = "aiohttp-3.10.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:8c81ff4afffef9b1186639506d70ea90888218f5ddfff03870e74ec80bb59970"},
+ {file = "aiohttp-3.10.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:2a384dfbe8bfebd203b778a30a712886d147c61943675f4719b56725a8bbe803"},
+ {file = "aiohttp-3.10.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:b9fb6508893dc31cfcbb8191ef35abd79751db1d6871b3e2caee83959b4d91eb"},
+ {file = "aiohttp-3.10.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:88596384c3bec644a96ae46287bb646d6a23fa6014afe3799156aef42669c6bd"},
+ {file = "aiohttp-3.10.1-cp310-cp310-win32.whl", hash = "sha256:68164d43c580c2e8bf8e0eb4960142919d304052ccab92be10250a3a33b53268"},
+ {file = "aiohttp-3.10.1-cp310-cp310-win_amd64.whl", hash = "sha256:d6bbe2c90c10382ca96df33b56e2060404a4f0f88673e1e84b44c8952517e5f3"},
+ {file = "aiohttp-3.10.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f6979b4f20d3e557a867da9d9227de4c156fcdcb348a5848e3e6190fd7feb972"},
+ {file = "aiohttp-3.10.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:03c0c380c83f8a8d4416224aafb88d378376d6f4cadebb56b060688251055cd4"},
+ {file = "aiohttp-3.10.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1c2b104e81b3c3deba7e6f5bc1a9a0e9161c380530479970766a6655b8b77c7c"},
+ {file = "aiohttp-3.10.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b023b68c61ab0cd48bd38416b421464a62c381e32b9dc7b4bdfa2905807452a4"},
+ {file = "aiohttp-3.10.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1a07c76a82390506ca0eabf57c0540cf5a60c993c442928fe4928472c4c6e5e6"},
+ {file = "aiohttp-3.10.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:41d8dab8c64ded1edf117d2a64f353efa096c52b853ef461aebd49abae979f16"},
+ {file = "aiohttp-3.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:615348fab1a9ef7d0960a905e83ad39051ae9cb0d2837da739b5d3a7671e497a"},
+ {file = "aiohttp-3.10.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:256ee6044214ee9d66d531bb374f065ee94e60667d6bbeaa25ca111fc3997158"},
+ {file = "aiohttp-3.10.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:b7d5bb926805022508b7ddeaad957f1fce7a8d77532068d7bdb431056dc630cd"},
+ {file = "aiohttp-3.10.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:028faf71b338f069077af6315ad54281612705d68889f5d914318cbc2aab0d50"},
+ {file = "aiohttp-3.10.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:5c12310d153b27aa630750be44e79313acc4e864c421eb7d2bc6fa3429c41bf8"},
+ {file = "aiohttp-3.10.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:de1a91d5faded9054957ed0a9e01b9d632109341942fc123947ced358c5d9009"},
+ {file = "aiohttp-3.10.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9c186b270979fb1dee3ababe2d12fb243ed7da08b30abc83ebac3a928a4ddb15"},
+ {file = "aiohttp-3.10.1-cp311-cp311-win32.whl", hash = "sha256:4a9ce70f5e00380377aac0e568abd075266ff992be2e271765f7b35d228a990c"},
+ {file = "aiohttp-3.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:a77c79bac8d908d839d32c212aef2354d2246eb9deb3e2cb01ffa83fb7a6ea5d"},
+ {file = "aiohttp-3.10.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:2212296cdb63b092e295c3e4b4b442e7b7eb41e8a30d0f53c16d5962efed395d"},
+ {file = "aiohttp-3.10.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4dcb127ca3eb0a61205818a606393cbb60d93b7afb9accd2fd1e9081cc533144"},
+ {file = "aiohttp-3.10.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:cb8b79a65332e1a426ccb6290ce0409e1dc16b4daac1cc5761e059127fa3d134"},
+ {file = "aiohttp-3.10.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68cc24f707ed9cb961f6ee04020ca01de2c89b2811f3cf3361dc7c96a14bfbcc"},
+ {file = "aiohttp-3.10.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9cb54f5725b4b37af12edf6c9e834df59258c82c15a244daa521a065fbb11717"},
+ {file = "aiohttp-3.10.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:51d03e948e53b3639ce4d438f3d1d8202898ec6655cadcc09ec99229d4adc2a9"},
+ {file = "aiohttp-3.10.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:786299d719eb5d868f161aeec56d589396b053925b7e0ce36e983d30d0a3e55c"},
+ {file = "aiohttp-3.10.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abda4009a30d51d3f06f36bc7411a62b3e647fa6cc935ef667e3e3d3a7dd09b1"},
+ {file = "aiohttp-3.10.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:67f7639424c313125213954e93a6229d3a1d386855d70c292a12628f600c7150"},
+ {file = "aiohttp-3.10.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:8e5a26d7aac4c0d8414a347da162696eea0629fdce939ada6aedf951abb1d745"},
+ {file = "aiohttp-3.10.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:120548d89f14b76a041088b582454d89389370632ee12bf39d919cc5c561d1ca"},
+ {file = "aiohttp-3.10.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:f5293726943bdcea24715b121d8c4ae12581441d22623b0e6ab12d07ce85f9c4"},
+ {file = "aiohttp-3.10.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:1f8605e573ed6c44ec689d94544b2c4bb1390aaa723a8b5a2cc0a5a485987a68"},
+ {file = "aiohttp-3.10.1-cp312-cp312-win32.whl", hash = "sha256:e7168782621be4448d90169a60c8b37e9b0926b3b79b6097bc180c0a8a119e73"},
+ {file = "aiohttp-3.10.1-cp312-cp312-win_amd64.whl", hash = "sha256:8fbf8c0ded367c5c8eaf585f85ca8dd85ff4d5b73fb8fe1e6ac9e1b5e62e11f7"},
+ {file = "aiohttp-3.10.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:54b7f4a20d7cc6bfa4438abbde069d417bb7a119f870975f78a2b99890226d55"},
+ {file = "aiohttp-3.10.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2fa643ca990323db68911b92f3f7a0ca9ae300ae340d0235de87c523601e58d9"},
+ {file = "aiohttp-3.10.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d8311d0d690487359fe2247ec5d2cac9946e70d50dced8c01ce9e72341c21151"},
+ {file = "aiohttp-3.10.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:222821c60b8f6a64c5908cb43d69c0ee978a1188f6a8433d4757d39231b42cdb"},
+ {file = "aiohttp-3.10.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e7b55d9ede66af7feb6de87ff277e0ccf6d51c7db74cc39337fe3a0e31b5872d"},
+ {file = "aiohttp-3.10.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5a95151a5567b3b00368e99e9c5334a919514f60888a6b6d2054fea5e66e527e"},
+ {file = "aiohttp-3.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4e9e9171d2fe6bfd9d3838a6fe63b1e91b55e0bf726c16edf265536e4eafed19"},
+ {file = "aiohttp-3.10.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a57e73f9523e980f6101dc9a83adcd7ac0006ea8bf7937ca3870391c7bb4f8ff"},
+ {file = "aiohttp-3.10.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:0df51a3d70a2bfbb9c921619f68d6d02591f24f10e9c76de6f3388c89ed01de6"},
+ {file = "aiohttp-3.10.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:b0de63ff0307eac3961b4af74382d30220d4813f36b7aaaf57f063a1243b4214"},
+ {file = "aiohttp-3.10.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:8db9b749f589b5af8e4993623dbda6716b2b7a5fcb0fa2277bf3ce4b278c7059"},
+ {file = "aiohttp-3.10.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:6b14c19172eb53b63931d3e62a9749d6519f7c121149493e6eefca055fcdb352"},
+ {file = "aiohttp-3.10.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:5cd57ad998e3038aa87c38fe85c99ed728001bf5dde8eca121cadee06ee3f637"},
+ {file = "aiohttp-3.10.1-cp38-cp38-win32.whl", hash = "sha256:df31641e3f02b77eb3c5fb63c0508bee0fc067cf153da0e002ebbb0db0b6d91a"},
+ {file = "aiohttp-3.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:93094eba50bc2ad4c40ff4997ead1fdcd41536116f2e7d6cfec9596a8ecb3615"},
+ {file = "aiohttp-3.10.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:440954ddc6b77257e67170d57b1026aa9545275c33312357472504eef7b4cc0b"},
+ {file = "aiohttp-3.10.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f9f8beed277488a52ee2b459b23c4135e54d6a819eaba2e120e57311015b58e9"},
+ {file = "aiohttp-3.10.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d8a8221a63602008550022aa3a4152ca357e1dde7ab3dd1da7e1925050b56863"},
+ {file = "aiohttp-3.10.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a702bd3663b5cbf3916e84bf332400d24cdb18399f0877ca6b313ce6c08bfb43"},
+ {file = "aiohttp-3.10.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1988b370536eb14f0ce7f3a4a5b422ab64c4e255b3f5d7752c5f583dc8c967fc"},
+ {file = "aiohttp-3.10.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7ccf1f0a304352c891d124ac1a9dea59b14b2abed1704aaa7689fc90ef9c5be1"},
+ {file = "aiohttp-3.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3ea6ef2a83edad84bbdb5d96e22f587b67c68922cd7b6f9d8f24865e655bcf"},
+ {file = "aiohttp-3.10.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:89b47c125ab07f0831803b88aeb12b04c564d5f07a1c1a225d4eb4d2f26e8b5e"},
+ {file = "aiohttp-3.10.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:21778552ef3d44aac3278cc6f6d13a6423504fa5f09f2df34bfe489ed9ded7f5"},
+ {file = "aiohttp-3.10.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:bde0693073fd5e542e46ea100aa6c1a5d36282dbdbad85b1c3365d5421490a92"},
+ {file = "aiohttp-3.10.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:bf66149bb348d8e713f3a8e0b4f5b952094c2948c408e1cfef03b49e86745d60"},
+ {file = "aiohttp-3.10.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:587237571a85716d6f71f60d103416c9df7d5acb55d96d3d3ced65f39bff9c0c"},
+ {file = "aiohttp-3.10.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:bfe33cba6e127d0b5b417623c9aa621f0a69f304742acdca929a9fdab4593693"},
+ {file = "aiohttp-3.10.1-cp39-cp39-win32.whl", hash = "sha256:9fbff00646cf8211b330690eb2fd64b23e1ce5b63a342436c1d1d6951d53d8dd"},
+ {file = "aiohttp-3.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:5951c328f9ac42d7bce7a6ded535879bc9ae13032818d036749631fa27777905"},
+ {file = "aiohttp-3.10.1.tar.gz", hash = "sha256:8b0d058e4e425d3b45e8ec70d49b402f4d6b21041e674798b1f91ba027c73f28"},
]
[package.dependencies]
+aiohappyeyeballs = ">=2.3.0"
aiosignal = ">=1.1.2"
async-timeout = {version = ">=4.0,<5.0", markers = "python_version < \"3.11\""}
attrs = ">=17.3.0"
@@ -94,7 +106,7 @@ multidict = ">=4.5,<7.0"
yarl = ">=1.0,<2.0"
[package.extras]
-speedups = ["Brotli", "aiodns", "brotlicffi"]
+speedups = ["Brotli", "aiodns (>=3.2.0)", "brotlicffi"]
[[package]]
name = "aiohttp-retry"
@@ -145,16 +157,16 @@ tz = ["backports.zoneinfo"]
[[package]]
name = "alibabacloud-credentials"
-version = "0.3.4"
+version = "0.3.5"
description = "The alibabacloud credentials module of alibabaCloud Python SDK."
optional = false
python-versions = ">=3.6"
files = [
- {file = "alibabacloud_credentials-0.3.4.tar.gz", hash = "sha256:c15a34fe782c318d4cf24cb041a0385ac4ccd2548e524e5d7fe1cff56a9a6acc"},
+ {file = "alibabacloud_credentials-0.3.5.tar.gz", hash = "sha256:ad065ec95921eaf51939195485d0e5cc9e0ea050282059c7d8bf74bdb5496177"},
]
[package.dependencies]
-alibabacloud-tea = "*"
+alibabacloud-tea = ">=0.3.9"
[[package]]
name = "alibabacloud-endpoint-util"
@@ -171,16 +183,16 @@ alibabacloud-tea = ">=0.0.1"
[[package]]
name = "alibabacloud-gateway-spi"
-version = "0.0.1"
+version = "0.0.2"
description = "Alibaba Cloud Gateway SPI SDK Library for Python"
optional = false
python-versions = ">=3.6"
files = [
- {file = "alibabacloud_gateway_spi-0.0.1.tar.gz", hash = "sha256:1b259855708afc3c04d8711d8530c63f7645e1edc0cf97e2fd15461b08e11c30"},
+ {file = "alibabacloud_gateway_spi-0.0.2.tar.gz", hash = "sha256:f932c8ba67291531dfbee6ca521dcf3523eb4ff93512bf0aaf135f2d4fc4704d"},
]
[package.dependencies]
-alibabacloud_credentials = ">=0.2.0,<1.0.0"
+alibabacloud_credentials = ">=0.3.4,<1.0.0"
[[package]]
name = "alibabacloud-gpdb20160503"
@@ -294,19 +306,19 @@ alibabacloud-tea = ">=0.0.1"
[[package]]
name = "alibabacloud-tea-openapi"
-version = "0.3.10"
+version = "0.3.11"
description = "Alibaba Cloud openapi SDK Library for Python"
optional = false
python-versions = ">=3.6"
files = [
- {file = "alibabacloud_tea_openapi-0.3.10.tar.gz", hash = "sha256:46e9c54ea857346306cd5c628dc33479349b559179ed2fdb2251dbe6ec9a1cf1"},
+ {file = "alibabacloud_tea_openapi-0.3.11.tar.gz", hash = "sha256:3f5cace1b1aeb8a64587574097403cfd066b86ee4c3c9abde587f9abfcad38de"},
]
[package.dependencies]
alibabacloud_credentials = ">=0.3.1,<1.0.0"
alibabacloud_gateway_spi = ">=0.0.1,<1.0.0"
alibabacloud_openapi_util = ">=0.2.1,<1.0.0"
-alibabacloud_tea_util = ">=0.3.12,<1.0.0"
+alibabacloud_tea_util = ">=0.3.13,<1.0.0"
alibabacloud_tea_xml = ">=0.0.2,<1.0.0"
[[package]]
@@ -493,22 +505,22 @@ files = [
[[package]]
name = "attrs"
-version = "23.2.0"
+version = "24.2.0"
description = "Classes Without Boilerplate"
optional = false
python-versions = ">=3.7"
files = [
- {file = "attrs-23.2.0-py3-none-any.whl", hash = "sha256:99b87a485a5820b23b879f04c2305b44b951b502fd64be915879d77a7e8fc6f1"},
- {file = "attrs-23.2.0.tar.gz", hash = "sha256:935dc3b529c262f6cf76e50877d35a4bd3c1de194fd41f47a2b7ae8f19971f30"},
+ {file = "attrs-24.2.0-py3-none-any.whl", hash = "sha256:81921eb96de3191c8258c199618104dd27ac608d9366f5e35d011eae1867ede2"},
+ {file = "attrs-24.2.0.tar.gz", hash = "sha256:5cfb1b9148b5b086569baec03f20d7b6bf3bcacc9a42bebf87ffaaca362f6346"},
]
[package.extras]
-cov = ["attrs[tests]", "coverage[toml] (>=5.3)"]
-dev = ["attrs[tests]", "pre-commit"]
-docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"]
-tests = ["attrs[tests-no-zope]", "zope-interface"]
-tests-mypy = ["mypy (>=1.6)", "pytest-mypy-plugins"]
-tests-no-zope = ["attrs[tests-mypy]", "cloudpickle", "hypothesis", "pympler", "pytest (>=4.3.0)", "pytest-xdist[psutil]"]
+benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier (<24.7)"]
+tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"]
[[package]]
name = "authlib"
@@ -669,17 +681,17 @@ files = [
[[package]]
name = "boto3"
-version = "1.34.136"
+version = "1.34.148"
description = "The AWS SDK for Python"
optional = false
python-versions = ">=3.8"
files = [
- {file = "boto3-1.34.136-py3-none-any.whl", hash = "sha256:d41037e2c680ab8d6c61a0a4ee6bf1fdd9e857f43996672830a95d62d6f6fa79"},
- {file = "boto3-1.34.136.tar.gz", hash = "sha256:0314e6598f59ee0f34eb4e6d1a0f69fa65c146d2b88a6e837a527a9956ec2731"},
+ {file = "boto3-1.34.148-py3-none-any.whl", hash = "sha256:d63d36e5a34533ba69188d56f96da132730d5e9932c4e11c02d79319cd1afcec"},
+ {file = "boto3-1.34.148.tar.gz", hash = "sha256:2058397f0a92c301e3116e9e65fbbc70ea49270c250882d65043d19b7c6e2d17"},
]
[package.dependencies]
-botocore = ">=1.34.136,<1.35.0"
+botocore = ">=1.34.148,<1.35.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.10.0,<0.11.0"
@@ -688,13 +700,13 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
-version = "1.34.147"
+version = "1.34.155"
description = "Low-level, data-driven core of boto 3."
optional = false
python-versions = ">=3.8"
files = [
- {file = "botocore-1.34.147-py3-none-any.whl", hash = "sha256:be94a2f4874b1d1705cae2bd512c475047497379651678593acb6c61c50d91de"},
- {file = "botocore-1.34.147.tar.gz", hash = "sha256:2e8f000b77e4ca345146cb2edab6403769a517b564f627bb084ab335417f3dbe"},
+ {file = "botocore-1.34.155-py3-none-any.whl", hash = "sha256:f2696c11bb0cad627d42512937befd2e3f966aedd15de00d90ee13cf7a16b328"},
+ {file = "botocore-1.34.155.tar.gz", hash = "sha256:3aa88abfef23909f68d3e6679a3d4b4bb3c6288a6cfbf9e253aa68dac8edad64"},
]
[package.dependencies]
@@ -703,7 +715,7 @@ python-dateutil = ">=2.1,<3.0.0"
urllib3 = {version = ">=1.25.4,<2.2.0 || >2.2.0,<3", markers = "python_version >= \"3.10\""}
[package.extras]
-crt = ["awscrt (==0.20.11)"]
+crt = ["awscrt (==0.21.2)"]
[[package]]
name = "bottleneck"
@@ -1011,63 +1023,78 @@ files = [
[[package]]
name = "cffi"
-version = "1.16.0"
+version = "1.17.0"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
files = [
- {file = "cffi-1.16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6b3d6606d369fc1da4fd8c357d026317fbb9c9b75d36dc16e90e84c26854b088"},
- {file = "cffi-1.16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ac0f5edd2360eea2f1daa9e26a41db02dd4b0451b48f7c318e217ee092a213e9"},
- {file = "cffi-1.16.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7e61e3e4fa664a8588aa25c883eab612a188c725755afff6289454d6362b9673"},
- {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a72e8961a86d19bdb45851d8f1f08b041ea37d2bd8d4fd19903bc3083d80c896"},
- {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5b50bf3f55561dac5438f8e70bfcdfd74543fd60df5fa5f62d94e5867deca684"},
- {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7651c50c8c5ef7bdb41108b7b8c5a83013bfaa8a935590c5d74627c047a583c7"},
- {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4108df7fe9b707191e55f33efbcb2d81928e10cea45527879a4749cbe472614"},
- {file = "cffi-1.16.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:32c68ef735dbe5857c810328cb2481e24722a59a2003018885514d4c09af9743"},
- {file = "cffi-1.16.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:673739cb539f8cdaa07d92d02efa93c9ccf87e345b9a0b556e3ecc666718468d"},
- {file = "cffi-1.16.0-cp310-cp310-win32.whl", hash = "sha256:9f90389693731ff1f659e55c7d1640e2ec43ff725cc61b04b2f9c6d8d017df6a"},
- {file = "cffi-1.16.0-cp310-cp310-win_amd64.whl", hash = "sha256:e6024675e67af929088fda399b2094574609396b1decb609c55fa58b028a32a1"},
- {file = "cffi-1.16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b84834d0cf97e7d27dd5b7f3aca7b6e9263c56308ab9dc8aae9784abb774d404"},
- {file = "cffi-1.16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1b8ebc27c014c59692bb2664c7d13ce7a6e9a629be20e54e7271fa696ff2b417"},
- {file = "cffi-1.16.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ee07e47c12890ef248766a6e55bd38ebfb2bb8edd4142d56db91b21ea68b7627"},
- {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8a9d3ebe49f084ad71f9269834ceccbf398253c9fac910c4fd7053ff1386936"},
- {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e70f54f1796669ef691ca07d046cd81a29cb4deb1e5f942003f401c0c4a2695d"},
- {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5bf44d66cdf9e893637896c7faa22298baebcd18d1ddb6d2626a6e39793a1d56"},
- {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b78010e7b97fef4bee1e896df8a4bbb6712b7f05b7ef630f9d1da00f6444d2e"},
- {file = "cffi-1.16.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:c6a164aa47843fb1b01e941d385aab7215563bb8816d80ff3a363a9f8448a8dc"},
- {file = "cffi-1.16.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e09f3ff613345df5e8c3667da1d918f9149bd623cd9070c983c013792a9a62eb"},
- {file = "cffi-1.16.0-cp311-cp311-win32.whl", hash = "sha256:2c56b361916f390cd758a57f2e16233eb4f64bcbeee88a4881ea90fca14dc6ab"},
- {file = "cffi-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:db8e577c19c0fda0beb7e0d4e09e0ba74b1e4c092e0e40bfa12fe05b6f6d75ba"},
- {file = "cffi-1.16.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:fa3a0128b152627161ce47201262d3140edb5a5c3da88d73a1b790a959126956"},
- {file = "cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:68e7c44931cc171c54ccb702482e9fc723192e88d25a0e133edd7aff8fcd1f6e"},
- {file = "cffi-1.16.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd808f9c129ba2beda4cfc53bde801e5bcf9d6e0f22f095e45327c038bfe68e"},
- {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88e2b3c14bdb32e440be531ade29d3c50a1a59cd4e51b1dd8b0865c54ea5d2e2"},
- {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcc8eb6d5902bb1cf6dc4f187ee3ea80a1eba0a89aba40a5cb20a5087d961357"},
- {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b7be2d771cdba2942e13215c4e340bfd76398e9227ad10402a8767ab1865d2e6"},
- {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e715596e683d2ce000574bae5d07bd522c781a822866c20495e52520564f0969"},
- {file = "cffi-1.16.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:2d92b25dbf6cae33f65005baf472d2c245c050b1ce709cc4588cdcdd5495b520"},
- {file = "cffi-1.16.0-cp312-cp312-win32.whl", hash = "sha256:b2ca4e77f9f47c55c194982e10f058db063937845bb2b7a86c84a6cfe0aefa8b"},
- {file = "cffi-1.16.0-cp312-cp312-win_amd64.whl", hash = "sha256:68678abf380b42ce21a5f2abde8efee05c114c2fdb2e9eef2efdb0257fba1235"},
- {file = "cffi-1.16.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0c9ef6ff37e974b73c25eecc13952c55bceed9112be2d9d938ded8e856138bcc"},
- {file = "cffi-1.16.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a09582f178759ee8128d9270cd1344154fd473bb77d94ce0aeb2a93ebf0feaf0"},
- {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e760191dd42581e023a68b758769e2da259b5d52e3103c6060ddc02c9edb8d7b"},
- {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:80876338e19c951fdfed6198e70bc88f1c9758b94578d5a7c4c91a87af3cf31c"},
- {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a6a14b17d7e17fa0d207ac08642c8820f84f25ce17a442fd15e27ea18d67c59b"},
- {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6602bc8dc6f3a9e02b6c22c4fc1e47aa50f8f8e6d3f78a5e16ac33ef5fefa324"},
- {file = "cffi-1.16.0-cp38-cp38-win32.whl", hash = "sha256:131fd094d1065b19540c3d72594260f118b231090295d8c34e19a7bbcf2e860a"},
- {file = "cffi-1.16.0-cp38-cp38-win_amd64.whl", hash = "sha256:31d13b0f99e0836b7ff893d37af07366ebc90b678b6664c955b54561fc36ef36"},
- {file = "cffi-1.16.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:582215a0e9adbe0e379761260553ba11c58943e4bbe9c36430c4ca6ac74b15ed"},
- {file = "cffi-1.16.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b29ebffcf550f9da55bec9e02ad430c992a87e5f512cd63388abb76f1036d8d2"},
- {file = "cffi-1.16.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc9b18bf40cc75f66f40a7379f6a9513244fe33c0e8aa72e2d56b0196a7ef872"},
- {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cb4a35b3642fc5c005a6755a5d17c6c8b6bcb6981baf81cea8bfbc8903e8ba8"},
- {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b86851a328eedc692acf81fb05444bdf1891747c25af7529e39ddafaf68a4f3f"},
- {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c0f31130ebc2d37cdd8e44605fb5fa7ad59049298b3f745c74fa74c62fbfcfc4"},
- {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f8e709127c6c77446a8c0a8c8bf3c8ee706a06cd44b1e827c3e6a2ee6b8c098"},
- {file = "cffi-1.16.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:748dcd1e3d3d7cd5443ef03ce8685043294ad6bd7c02a38d1bd367cfd968e000"},
- {file = "cffi-1.16.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8895613bcc094d4a1b2dbe179d88d7fb4a15cee43c052e8885783fac397d91fe"},
- {file = "cffi-1.16.0-cp39-cp39-win32.whl", hash = "sha256:ed86a35631f7bfbb28e108dd96773b9d5a6ce4811cf6ea468bb6a359b256b1e4"},
- {file = "cffi-1.16.0-cp39-cp39-win_amd64.whl", hash = "sha256:3686dffb02459559c74dd3d81748269ffb0eb027c39a6fc99502de37d501faa8"},
- {file = "cffi-1.16.0.tar.gz", hash = "sha256:bcb3ef43e58665bbda2fb198698fcae6776483e0c4a631aa5647806c25e02cc0"},
+ {file = "cffi-1.17.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f9338cc05451f1942d0d8203ec2c346c830f8e86469903d5126c1f0a13a2bcbb"},
+ {file = "cffi-1.17.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a0ce71725cacc9ebf839630772b07eeec220cbb5f03be1399e0457a1464f8e1a"},
+ {file = "cffi-1.17.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c815270206f983309915a6844fe994b2fa47e5d05c4c4cef267c3b30e34dbe42"},
+ {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d6bdcd415ba87846fd317bee0774e412e8792832e7805938987e4ede1d13046d"},
+ {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8a98748ed1a1df4ee1d6f927e151ed6c1a09d5ec21684de879c7ea6aa96f58f2"},
+ {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0a048d4f6630113e54bb4b77e315e1ba32a5a31512c31a273807d0027a7e69ab"},
+ {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:24aa705a5f5bd3a8bcfa4d123f03413de5d86e497435693b638cbffb7d5d8a1b"},
+ {file = "cffi-1.17.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:856bf0924d24e7f93b8aee12a3a1095c34085600aa805693fb7f5d1962393206"},
+ {file = "cffi-1.17.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:4304d4416ff032ed50ad6bb87416d802e67139e31c0bde4628f36a47a3164bfa"},
+ {file = "cffi-1.17.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:331ad15c39c9fe9186ceaf87203a9ecf5ae0ba2538c9e898e3a6967e8ad3db6f"},
+ {file = "cffi-1.17.0-cp310-cp310-win32.whl", hash = "sha256:669b29a9eca6146465cc574659058ed949748f0809a2582d1f1a324eb91054dc"},
+ {file = "cffi-1.17.0-cp310-cp310-win_amd64.whl", hash = "sha256:48b389b1fd5144603d61d752afd7167dfd205973a43151ae5045b35793232aa2"},
+ {file = "cffi-1.17.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c5d97162c196ce54af6700949ddf9409e9833ef1003b4741c2b39ef46f1d9720"},
+ {file = "cffi-1.17.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5ba5c243f4004c750836f81606a9fcb7841f8874ad8f3bf204ff5e56332b72b9"},
+ {file = "cffi-1.17.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bb9333f58fc3a2296fb1d54576138d4cf5d496a2cc118422bd77835e6ae0b9cb"},
+ {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:435a22d00ec7d7ea533db494da8581b05977f9c37338c80bc86314bec2619424"},
+ {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d1df34588123fcc88c872f5acb6f74ae59e9d182a2707097f9e28275ec26a12d"},
+ {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:df8bb0010fdd0a743b7542589223a2816bdde4d94bb5ad67884348fa2c1c67e8"},
+ {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8b5b9712783415695663bd463990e2f00c6750562e6ad1d28e072a611c5f2a6"},
+ {file = "cffi-1.17.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ffef8fd58a36fb5f1196919638f73dd3ae0db1a878982b27a9a5a176ede4ba91"},
+ {file = "cffi-1.17.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4e67d26532bfd8b7f7c05d5a766d6f437b362c1bf203a3a5ce3593a645e870b8"},
+ {file = "cffi-1.17.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:45f7cd36186db767d803b1473b3c659d57a23b5fa491ad83c6d40f2af58e4dbb"},
+ {file = "cffi-1.17.0-cp311-cp311-win32.whl", hash = "sha256:a9015f5b8af1bb6837a3fcb0cdf3b874fe3385ff6274e8b7925d81ccaec3c5c9"},
+ {file = "cffi-1.17.0-cp311-cp311-win_amd64.whl", hash = "sha256:b50aaac7d05c2c26dfd50c3321199f019ba76bb650e346a6ef3616306eed67b0"},
+ {file = "cffi-1.17.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:aec510255ce690d240f7cb23d7114f6b351c733a74c279a84def763660a2c3bc"},
+ {file = "cffi-1.17.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2770bb0d5e3cc0e31e7318db06efcbcdb7b31bcb1a70086d3177692a02256f59"},
+ {file = "cffi-1.17.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:db9a30ec064129d605d0f1aedc93e00894b9334ec74ba9c6bdd08147434b33eb"},
+ {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a47eef975d2b8b721775a0fa286f50eab535b9d56c70a6e62842134cf7841195"},
+ {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f3e0992f23bbb0be00a921eae5363329253c3b86287db27092461c887b791e5e"},
+ {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6107e445faf057c118d5050560695e46d272e5301feffda3c41849641222a828"},
+ {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb862356ee9391dc5a0b3cbc00f416b48c1b9a52d252d898e5b7696a5f9fe150"},
+ {file = "cffi-1.17.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c1c13185b90bbd3f8b5963cd8ce7ad4ff441924c31e23c975cb150e27c2bf67a"},
+ {file = "cffi-1.17.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:17c6d6d3260c7f2d94f657e6872591fe8733872a86ed1345bda872cfc8c74885"},
+ {file = "cffi-1.17.0-cp312-cp312-win32.whl", hash = "sha256:c3b8bd3133cd50f6b637bb4322822c94c5ce4bf0d724ed5ae70afce62187c492"},
+ {file = "cffi-1.17.0-cp312-cp312-win_amd64.whl", hash = "sha256:dca802c8db0720ce1c49cce1149ff7b06e91ba15fa84b1d59144fef1a1bc7ac2"},
+ {file = "cffi-1.17.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:6ce01337d23884b21c03869d2f68c5523d43174d4fc405490eb0091057943118"},
+ {file = "cffi-1.17.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cab2eba3830bf4f6d91e2d6718e0e1c14a2f5ad1af68a89d24ace0c6b17cced7"},
+ {file = "cffi-1.17.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:14b9cbc8f7ac98a739558eb86fabc283d4d564dafed50216e7f7ee62d0d25377"},
+ {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b00e7bcd71caa0282cbe3c90966f738e2db91e64092a877c3ff7f19a1628fdcb"},
+ {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:41f4915e09218744d8bae14759f983e466ab69b178de38066f7579892ff2a555"},
+ {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e4760a68cab57bfaa628938e9c2971137e05ce48e762a9cb53b76c9b569f1204"},
+ {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:011aff3524d578a9412c8b3cfaa50f2c0bd78e03eb7af7aa5e0df59b158efb2f"},
+ {file = "cffi-1.17.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:a003ac9edc22d99ae1286b0875c460351f4e101f8c9d9d2576e78d7e048f64e0"},
+ {file = "cffi-1.17.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ef9528915df81b8f4c7612b19b8628214c65c9b7f74db2e34a646a0a2a0da2d4"},
+ {file = "cffi-1.17.0-cp313-cp313-win32.whl", hash = "sha256:70d2aa9fb00cf52034feac4b913181a6e10356019b18ef89bc7c12a283bf5f5a"},
+ {file = "cffi-1.17.0-cp313-cp313-win_amd64.whl", hash = "sha256:b7b6ea9e36d32582cda3465f54c4b454f62f23cb083ebc7a94e2ca6ef011c3a7"},
+ {file = "cffi-1.17.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:964823b2fc77b55355999ade496c54dde161c621cb1f6eac61dc30ed1b63cd4c"},
+ {file = "cffi-1.17.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:516a405f174fd3b88829eabfe4bb296ac602d6a0f68e0d64d5ac9456194a5b7e"},
+ {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dec6b307ce928e8e112a6bb9921a1cb00a0e14979bf28b98e084a4b8a742bd9b"},
+ {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e4094c7b464cf0a858e75cd14b03509e84789abf7b79f8537e6a72152109c76e"},
+ {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2404f3de742f47cb62d023f0ba7c5a916c9c653d5b368cc966382ae4e57da401"},
+ {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3aa9d43b02a0c681f0bfbc12d476d47b2b2b6a3f9287f11ee42989a268a1833c"},
+ {file = "cffi-1.17.0-cp38-cp38-win32.whl", hash = "sha256:0bb15e7acf8ab35ca8b24b90af52c8b391690ef5c4aec3d31f38f0d37d2cc499"},
+ {file = "cffi-1.17.0-cp38-cp38-win_amd64.whl", hash = "sha256:93a7350f6706b31f457c1457d3a3259ff9071a66f312ae64dc024f049055f72c"},
+ {file = "cffi-1.17.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1a2ddbac59dc3716bc79f27906c010406155031a1c801410f1bafff17ea304d2"},
+ {file = "cffi-1.17.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6327b572f5770293fc062a7ec04160e89741e8552bf1c358d1a23eba68166759"},
+ {file = "cffi-1.17.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dbc183e7bef690c9abe5ea67b7b60fdbca81aa8da43468287dae7b5c046107d4"},
+ {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5bdc0f1f610d067c70aa3737ed06e2726fd9d6f7bfee4a351f4c40b6831f4e82"},
+ {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6d872186c1617d143969defeadac5a904e6e374183e07977eedef9c07c8953bf"},
+ {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0d46ee4764b88b91f16661a8befc6bfb24806d885e27436fdc292ed7e6f6d058"},
+ {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f76a90c345796c01d85e6332e81cab6d70de83b829cf1d9762d0a3da59c7932"},
+ {file = "cffi-1.17.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e60821d312f99d3e1569202518dddf10ae547e799d75aef3bca3a2d9e8ee693"},
+ {file = "cffi-1.17.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:eb09b82377233b902d4c3fbeeb7ad731cdab579c6c6fda1f763cd779139e47c3"},
+ {file = "cffi-1.17.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:24658baf6224d8f280e827f0a50c46ad819ec8ba380a42448e24459daf809cf4"},
+ {file = "cffi-1.17.0-cp39-cp39-win32.whl", hash = "sha256:0fdacad9e0d9fc23e519efd5ea24a70348305e8d7d85ecbb1a5fa66dc834e7fb"},
+ {file = "cffi-1.17.0-cp39-cp39-win_amd64.whl", hash = "sha256:7cbc78dc018596315d4e7841c8c3a7ae31cc4d638c9b627f87d52e8abaaf2d29"},
+ {file = "cffi-1.17.0.tar.gz", hash = "sha256:f3157624b7558b914cb039fd1af735e5e8049a87c817cc215109ad1c8779df76"},
]
[package.dependencies]
@@ -1343,77 +1370,77 @@ testing = ["pytest (>=7.2.1)", "pytest-cov (>=4.0.0)", "tox (>=4.4.3)"]
[[package]]
name = "clickhouse-connect"
-version = "0.7.16"
+version = "0.7.18"
description = "ClickHouse Database Core Driver for Python, Pandas, and Superset"
optional = false
python-versions = "~=3.8"
files = [
- {file = "clickhouse-connect-0.7.16.tar.gz", hash = "sha256:253a2089efad5729903d00382f73fa8da2cbbfdb118db498cf708ee9f4a2134f"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:00413deb9e086aabf661d18ac3a3539f25eb773c3675f49353e0d7e6ef1205fc"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:faadaf206ea7753782db017daedbf592e4edc7c71cb985aad787eb9dc516bf21"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1db8f1168f33fda78adddb733913b211ddf648984d8fef8d934e30df876e5f23"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8fa630bf50fb064cc53b7ea5d862066476d3c6074003f6d39d2594fb1a7abf67"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2cba9547dad41b2d333458615208a3c7db6f56a63473ffea2c05c44225ffa020"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:480f7856fcf42a21f17886e0b42d70499067c865fc2a0ea7c0eb5c0bdca281a8"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:b65f3eb570cbcf9fa383b4e0925d1ceb3efd3deba42a435625cad75b3a9ff7f3"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:b78d3cc0fe42374bb9d5a05ba71578dc69f7e4b4c771e86dcf292ae0412265cc"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-win32.whl", hash = "sha256:1cb76b26fcde1ba6a8ae68e1db1f9e42d458879a0d4d2c9843cc998f42f445ac"},
- {file = "clickhouse_connect-0.7.16-cp310-cp310-win_amd64.whl", hash = "sha256:9298b344168271e952ea41021963ca1b81b9b3c38be8b036cb64a2556edbb4b7"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8ae39a765735cc6e786e5f9a0dba799e7f8ee0bbd5dfc5d5ff755dfa9dd13855"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f32546f65dd234a49310cda454713a5f7fbc8ba978744e070355c7ea8819a5a"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20865c81a5b378625a528ac8960e08cdca316147f87fad6deb9f16c0d5e5f62f"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:609c076261d779703bf29e7a27dafc8283153403ceab1ec23d50eb2acabc4b9d"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e07862e75ac7419c5671384055f11ca5e76dc2c0be4a6f3aed7bf419997184bc"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d5db7da6f20b9a49b288063de9b3224a56634f8cb94d19d435af518ed81872c3"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:955c567ede68a10325045bb2adf1314ff569dfb7e52f6074c18182f3803279f6"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:df517bfe23d85f5aeeb17b262c06d0a5c24e0baea09688a96d02dc8589ef8b07"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-win32.whl", hash = "sha256:7f2c6132fc90df6a8318abb9f257c2b777404908b7d168ac08235d516f65a663"},
- {file = "clickhouse_connect-0.7.16-cp311-cp311-win_amd64.whl", hash = "sha256:ca1dba53da86691a11671d846988dc4f6ad02a66f5a0df9a87a46dc4ec9bb0a1"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f8f7260073b6ee63e19d442ebb6954bc7741a5ce4ed563eb8074c8c6a0158eca"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:9b3dd93ada1099cb6df244d79973c811e90a4590685e78e60e8846914b3c261e"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d3c3458bce25fe9c10e1dbf82dbeeeb2f04e382130f9811cc3bedf44c2028ca"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcc302390b4ea975efd8d2ca53d295d40dc766179dd5e9fc158e808f01d9280d"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a94f6d095d7174c55825e0b5c04b77897a1b2a8a8bbb38f3f773fd3113a7be27"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:6b7e2572993ef2e1dee5012875a7a2d08cede319e32ccdd2db90ed26a0d0c037"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:e9c35ee425309ed8ef63bae31e1d3c5f35706fa27ae2836e61e7cb9bbe7f00cb"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:eb0471d5a32d07eaa37772871ee9e6b5eb37ab907c3c154833824ed68ee4795b"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-win32.whl", hash = "sha256:b531ee18b4ce16f1d2b8f6249859cbd600f7e0f312f80dda8deb969791a90f17"},
- {file = "clickhouse_connect-0.7.16-cp312-cp312-win_amd64.whl", hash = "sha256:38392308344770864843f7f8b914799684c13ce4b272d5a3a55e5512ff8a3ae0"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:052ca80d66e49c94d103c9842d2a5b0ebf4610981b79164660ef6b1bdc4b5e85"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b496059d145c68e956aa10cd04e5c7cb4e97312eb3f7829cec8f4f7024f8ced6"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:de1e423fc9c415b9fdcbb6f23eccae981e3f0f0cf142e518efec709bda7c1394"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:555c64719cbc72675d58ea6dfc144fa8064ea1d673a54afd2d54e34c58f17c6b"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a0c3c063ab23df8f71a36505880bf5de6c18aee246938d787447e52b4d9d5531"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:5ed62e08cfe445d0430b91c26fb276e2a5175e456e9786594fb6e67c9ebd8c6c"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:d9eb056bd14ca3c1d7e3edd7ca79ea970d45e5e536930dbb6179aeb965d5bc3d"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:54e0a03b685ee6c138954846dafb6ec0e0baf8257f2587c61e34c017f3dc9d63"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-win32.whl", hash = "sha256:d8402c3145387726bd19f916ca2890576be70c4493f030c068f6f03a75addff7"},
- {file = "clickhouse_connect-0.7.16-cp38-cp38-win_amd64.whl", hash = "sha256:70e376d2ebc0f092fae35f7b50ff7296ee8ffd2dda3536238f6c39a5c949d115"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:cee4f91ad22401c3b96f5df3f3149ef2894e7c2d00b5abd9da80119e7b6592f7"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a3009145f35e9ac2535dbd8fdbdc218abfe0971c9bc9b730eb5c3f6c40faeb5f"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7d0ef9f877ffbcb0f526ce9c35c657fc54930d043e45c077d9d886c0f1add727"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acc437b3ff2f7991b209b861a89c003ac1971c890775190178438780e967a9d3"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7ed836dcee4ac097bd83714abe0af987b1ef767675a555e7643d793164c3f1cc"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4c4e0d173239c0b4594c8703fae5c8ba3241c4e0763a8cf436b94564692671f9"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:a17a348dd8c00df343a01128497e8c3a6ae431f13c7a88e363ac12c035316ce0"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:805ae7ad39c043af13e2b5af45abb70330f0907749dc87ad4a2481a4ac209cc6"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-win32.whl", hash = "sha256:38fc6ca1bd73cf4dcebd22fbb8dceda267908ff674fc57fbc23c3b5df9c21ac1"},
- {file = "clickhouse_connect-0.7.16-cp39-cp39-win_amd64.whl", hash = "sha256:3dc67e99e40b5a8bc493a21016830b0f3800006a6038c1fd881f7cae6246cc44"},
- {file = "clickhouse_connect-0.7.16-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b7f526fef71bd5265f47915340a6369a5b5685278b72b5aff281cc521a8ec376"},
- {file = "clickhouse_connect-0.7.16-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e00f87ba68bbc63dd32d7a304fd629b759f24b09f88fbc2bac0a9ed1fe7b2938"},
- {file = "clickhouse_connect-0.7.16-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:09c84f3b64d6bebedcfbbd19e8369b3df2cb7d313afb2a0d64a3e151d344c1c1"},
- {file = "clickhouse_connect-0.7.16-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:18d104ab78edee26e8cef056e2db83f03e1da918df0946e1ef1ad9a27a024dd0"},
- {file = "clickhouse_connect-0.7.16-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cc1ad53e282ff5b4288fdfcf6df72cda542d9d997de5889d66a1f8e2b9f477f0"},
- {file = "clickhouse_connect-0.7.16-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:fddc99322054f5d3df8715ab3724bd36ac636f8ceaed4f5f3f60d377abd22d22"},
- {file = "clickhouse_connect-0.7.16-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:765a2de98197d1b4f6424611ceaca2ae896a1d7093b943403973888cb7c144e6"},
- {file = "clickhouse_connect-0.7.16-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1540e0a93e5f2147400f644606a399c91705066f05d5a91429616ee9812f4521"},
- {file = "clickhouse_connect-0.7.16-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba928c4178b0d4a513e1b0ad32a464ab56cb1bc27736a7f41b32e4eb70eb08d6"},
- {file = "clickhouse_connect-0.7.16-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:a17ffc22e905081f002173b30959089de6987fd40c87e7794da9d978d723e610"},
- {file = "clickhouse_connect-0.7.16-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:26df09787232b495285d8358db145b9770f472e2e30147912634c5b56392e73f"},
- {file = "clickhouse_connect-0.7.16-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2a3ce33241441dc7c718c19e31645323e6c5da793d46bbb670fd4e8557b8605"},
- {file = "clickhouse_connect-0.7.16-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29f9dc9cc1f4ec4a333bf119abb5cee13563e89bc990d4d77b8f43cf630e9fb1"},
- {file = "clickhouse_connect-0.7.16-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a16a7ada11996a6fa0959c83e2e46ff32773e57eca40eff86176fd62a30054ca"},
- {file = "clickhouse_connect-0.7.16-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:ead20e1d4f3c5493dd075b7dc81b5d21be4b876aca6952e1c155824876c621f3"},
+ {file = "clickhouse-connect-0.7.18.tar.gz", hash = "sha256:516aba1fdcf58973b0d0d90168a60c49f6892b6db1183b932f80ae057994eadb"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:43e712b8fada717160153022314473826adffde00e8cbe8068e0aa1c187c2395"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0a21244d24c9b2a7d1ea2cf23f254884113e0f6d9950340369ce154d7d377165"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:347b19f3674b57906dea94dd0e8b72aaedc822131cc2a2383526b19933ed7a33"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23c5aa1b144491211f662ed26f279845fb367c37d49b681b783ca4f8c51c7891"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e99b4271ed08cc59162a6025086f1786ded5b8a29f4c38e2d3b2a58af04f85f5"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:27d76d1dbe988350567dab7fbcc0a54cdd25abedc5585326c753974349818694"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:d2cd40b4e07df277192ab6bcb187b3f61e0074ad0e256908bf443b3080be4a6c"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8f4ae2c4fb66b2b49f2e7f893fe730712a61a068e79f7272e60d4dd7d64df260"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-win32.whl", hash = "sha256:ed871195b25a4e1acfd37f59527ceb872096f0cd65d76af8c91f581c033b1cc0"},
+ {file = "clickhouse_connect-0.7.18-cp310-cp310-win_amd64.whl", hash = "sha256:0c4989012e434b9c167bddf9298ca6eb076593e48a2cab7347cd70a446a7b5d3"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:52cfcd77fc63561e7b51940e32900c13731513d703d7fc54a3a6eb1fa4f7be4e"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:71d7bb9a24b0eacf8963044d6a1dd9e86dfcdd30afe1bd4a581c00910c83895a"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:395cfe09d1d39be4206fc1da96fe316f270077791f9758fcac44fd2765446dba"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac55b2b2eb068b02cbb1afbfc8b2255734e28a646d633c43a023a9b95e08023b"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4d59bb1df3814acb321f0fe87a4a6eea658463d5e59f6dc8ae10072df1205591"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:da5ea738641a7ad0ab7a8e1d8d6234639ea1e61c6eac970bbc6b94547d2c2fa7"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:72eb32a75026401777e34209694ffe64db0ce610475436647ed45589b4ab4efe"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:43bdd638b1ff27649d0ed9ed5000a8b8d754be891a8d279b27c72c03e3d12dcb"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-win32.whl", hash = "sha256:f45bdcba1dc84a1f60a8d827310f615ecbc322518c2d36bba7bf878631007152"},
+ {file = "clickhouse_connect-0.7.18-cp311-cp311-win_amd64.whl", hash = "sha256:6df629ab4b646a49a74e791e14a1b6a73ccbe6c4ee25f864522588d376b66279"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:32a35e1e63e4ae708432cbe29c8d116518d2d7b9ecb575b912444c3078b20e20"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:357529b8c08305ab895cdc898b60a3dc9b36637dfa4dbfedfc1d00548fc88edc"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2aa124d2bb65e29443779723e52398e8724e4bf56db94c9a93fd8208b9d6e2bf"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e3646254607e38294e20bf2e20b780b1c3141fb246366a1ad2021531f2c9c1b"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:433e50309af9d46d1b52e5b93ea105332565558be35296c7555c9c2753687586"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:251e67753909f76f8b136cad734501e0daf5977ed62747e18baa2b187f41c92c"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:a9980916495da3ed057e56ce2c922fc23de614ea5d74ed470b8450b58902ccee"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:555e00660c04a524ea00409f783265ccd0d0192552eb9d4dc10d2aeaf2fa6575"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-win32.whl", hash = "sha256:f4770c100f0608511f7e572b63a6b222fb780fc67341c11746d361c2b03d36d3"},
+ {file = "clickhouse_connect-0.7.18-cp312-cp312-win_amd64.whl", hash = "sha256:fd44a7885d992410668d083ba38d6a268a1567f49709300b4ff84eb6aef63b70"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9ac122dcabe1a9d3c14d331fade70a0adc78cf4006c8b91ee721942cdaa1190e"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1e89db8e8cc9187f2e9cd6aa32062f67b3b4de7b21b8703f103e89d659eda736"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c34bb25e5ab9a97a4154d43fdcd16751c9aa4a6e6f959016e4c5fe5b692728ed"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:929441a6689a78c63c6a05ee7eb39a183601d93714835ebd537c0572101f7ab1"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e8852df54b04361e57775d8ae571cd87e6983f7ed968890c62bbba6a2f2c88fd"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:56333eb772591162627455e2c21c8541ed628a9c6e7c115193ad00f24fc59440"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ac6633d2996100552d2ae47ac5e4eb551e11f69d05637ea84f1e13ac0f2bc21a"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:265085ab548fb49981fe2aef9f46652ee24d5583bf12e652abb13ee2d7e77581"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-win32.whl", hash = "sha256:5ee6c1f74df5fb19b341c389cfed7535fb627cbb9cb1a9bdcbda85045b86cd49"},
+ {file = "clickhouse_connect-0.7.18-cp38-cp38-win_amd64.whl", hash = "sha256:c7a28f810775ce68577181e752ecd2dc8caae77f288b6b9f6a7ce4d36657d4fb"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:67f9a3953693b609ab068071be5ac9521193f728b29057e913b386582f84b0c2"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:77e202b8606096769bf45e68b46e6bb8c78c2c451c29cb9b3a7bf505b4060d44"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8abcbd17f243ca8399a06fb08970d68e73d1ad671f84bb38518449248093f655"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:192605c2a9412e4c7d4baab85e432a58a0a5520615f05bc14f13c2836cfc6eeb"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c17108b190ab34645ee1981440ae129ecd7ca0cb6a93b4e5ce3ffc383355243f"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac1be43360a6e602784eb60547a03a6c2c574744cb8982ec15aac0e0e57709bd"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:cf403781d4ffd5a47aa7eff591940df182de4d9c423cfdc7eb6ade1a1b100e22"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:937c6481ec083e2a0bcf178ea363b72d437ab0c8fcbe65143db64b12c1e077c0"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-win32.whl", hash = "sha256:77635fea4b3fc4b1568a32674f04d35f4e648e3180528a9bb776e46e76090e4a"},
+ {file = "clickhouse_connect-0.7.18-cp39-cp39-win_amd64.whl", hash = "sha256:5ef60eb76be54b6d6bd8f189b076939e2cca16b50b92b763e7a9c7a62b488045"},
+ {file = "clickhouse_connect-0.7.18-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7bf76743d7b92b6cac6b4ef2e7a4c2d030ecf2fd542fcfccb374b2432b8d1027"},
+ {file = "clickhouse_connect-0.7.18-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65b344f174d63096eec098137b5d9c3bb545d67dd174966246c4aa80f9c0bc1e"},
+ {file = "clickhouse_connect-0.7.18-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:24dcc19338cd540e6a3e32e8a7c72c5fc4930c0dd5a760f76af9d384b3e57ddc"},
+ {file = "clickhouse_connect-0.7.18-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:31f5e42d5fd4eaab616926bae344c17202950d9d9c04716d46bccce6b31dbb73"},
+ {file = "clickhouse_connect-0.7.18-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a890421403c7a59ef85e3afc4ff0d641c5553c52fbb9d6ce30c0a0554649fac6"},
+ {file = "clickhouse_connect-0.7.18-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d61de71d2b82446dd66ade1b925270366c36a2b11779d5d1bcf71b1bfdd161e6"},
+ {file = "clickhouse_connect-0.7.18-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e81c4f2172e8d6f3dc4dd64ff2dc426920c0caeed969b4ec5bdd0b2fad1533e4"},
+ {file = "clickhouse_connect-0.7.18-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:092cb8e8acdcccce01d239760405fbd8c266052def49b13ad0a96814f5e521ca"},
+ {file = "clickhouse_connect-0.7.18-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a1ae8b1bab7f06815abf9d833a66849faa2b9dfadcc5728fd14c494e2879afa8"},
+ {file = "clickhouse_connect-0.7.18-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e08ebec4db83109024c97ca2d25740bf57915160d7676edd5c4390777c3e3ec0"},
+ {file = "clickhouse_connect-0.7.18-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:e5e42ec23b59597b512b994fec68ac1c2fa6def8594848cc3ae2459cf5e9d76a"},
+ {file = "clickhouse_connect-0.7.18-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f1aad4543a1ae4d40dc815ef85031a1809fe101687380d516383b168a7407ab2"},
+ {file = "clickhouse_connect-0.7.18-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:46cb4c604bd696535b1e091efb8047b833ff4220d31dbd95558c3587fda533a7"},
+ {file = "clickhouse_connect-0.7.18-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:05e1ef335b81bf6b5908767c3b55e842f1f8463742992653551796eeb8f2d7d6"},
+ {file = "clickhouse_connect-0.7.18-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:094e089de4a50a170f5fd1c0ebb2ea357e055266220bb11dfd7ddf2d4e9c9123"},
]
[package.dependencies]
@@ -1970,26 +1997,6 @@ files = [
{file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"},
]
-[[package]]
-name = "dnspython"
-version = "2.6.1"
-description = "DNS toolkit"
-optional = false
-python-versions = ">=3.8"
-files = [
- {file = "dnspython-2.6.1-py3-none-any.whl", hash = "sha256:5ef3b9680161f6fa89daf8ad451b5f1a33b18ae8a1c6778cdf4b43f08c0a6e50"},
- {file = "dnspython-2.6.1.tar.gz", hash = "sha256:e8f0f9c23a7b7cb99ded64e6c3a6f3e701d78f50c55e002b839dea7225cff7cc"},
-]
-
-[package.extras]
-dev = ["black (>=23.1.0)", "coverage (>=7.0)", "flake8 (>=7)", "mypy (>=1.8)", "pylint (>=3)", "pytest (>=7.4)", "pytest-cov (>=4.1.0)", "sphinx (>=7.2.0)", "twine (>=4.0.0)", "wheel (>=0.42.0)"]
-dnssec = ["cryptography (>=41)"]
-doh = ["h2 (>=4.1.0)", "httpcore (>=1.0.0)", "httpx (>=0.26.0)"]
-doq = ["aioquic (>=0.9.25)"]
-idna = ["idna (>=3.6)"]
-trio = ["trio (>=0.23)"]
-wmi = ["wmi (>=1.5.1)"]
-
[[package]]
name = "docstring-parser"
version = "0.16"
@@ -2076,38 +2083,23 @@ files = [
[[package]]
name = "duckduckgo-search"
-version = "6.2.1"
+version = "6.2.6"
description = "Search for words, documents, images, news, maps and text translation using the DuckDuckGo.com search engine."
optional = false
python-versions = ">=3.8"
files = [
- {file = "duckduckgo_search-6.2.1-py3-none-any.whl", hash = "sha256:1a03f799b85fdfa08d5e6478624683f373b9dc35e6f145544b9cab72a4f575fa"},
- {file = "duckduckgo_search-6.2.1.tar.gz", hash = "sha256:d664ec096193e3fb43bdfae4b0ad9c04e44094b58f41998adcdd20a86ee1ed74"},
+ {file = "duckduckgo_search-6.2.6-py3-none-any.whl", hash = "sha256:c8171bcd6ff4d051f78c70ea23bd34c0d8e779d72973829d3a6b40ccc05cd7c2"},
+ {file = "duckduckgo_search-6.2.6.tar.gz", hash = "sha256:96529ecfbd55afa28705b38413003cb3cfc620e55762d33184887545de27dc96"},
]
[package.dependencies]
click = ">=8.1.7"
-pyreqwest-impersonate = ">=0.5.0"
+primp = ">=0.5.5"
[package.extras]
-dev = ["mypy (>=1.10.1)", "pytest (>=8.2.2)", "pytest-asyncio (>=0.23.7)", "ruff (>=0.5.2)"]
+dev = ["mypy (>=1.11.0)", "pytest (>=8.3.1)", "pytest-asyncio (>=0.23.8)", "ruff (>=0.5.5)"]
lxml = ["lxml (>=5.2.2)"]
-[[package]]
-name = "email-validator"
-version = "2.2.0"
-description = "A robust email address syntax and deliverability validation library."
-optional = false
-python-versions = ">=3.8"
-files = [
- {file = "email_validator-2.2.0-py3-none-any.whl", hash = "sha256:561977c2d73ce3611850a06fa56b414621e0c8faa9d66f2611407d87465da631"},
- {file = "email_validator-2.2.0.tar.gz", hash = "sha256:cb690f344c617a714f22e66ae771445a1ceb46821152df8e165c5f9a364582b7"},
-]
-
-[package.dependencies]
-dnspython = ">=2.0.0"
-idna = ">=2.0.0"
-
[[package]]
name = "emoji"
version = "2.12.1"
@@ -2173,45 +2165,23 @@ test = ["pytest (>=6)"]
[[package]]
name = "fastapi"
-version = "0.111.1"
+version = "0.112.0"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = false
python-versions = ">=3.8"
files = [
- {file = "fastapi-0.111.1-py3-none-any.whl", hash = "sha256:4f51cfa25d72f9fbc3280832e84b32494cf186f50158d364a8765aabf22587bf"},
- {file = "fastapi-0.111.1.tar.gz", hash = "sha256:ddd1ac34cb1f76c2e2d7f8545a4bcb5463bce4834e81abf0b189e0c359ab2413"},
+ {file = "fastapi-0.112.0-py3-none-any.whl", hash = "sha256:3487ded9778006a45834b8c816ec4a48d522e2631ca9e75ec5a774f1b052f821"},
+ {file = "fastapi-0.112.0.tar.gz", hash = "sha256:d262bc56b7d101d1f4e8fc0ad2ac75bb9935fec504d2b7117686cec50710cf05"},
]
[package.dependencies]
-email_validator = ">=2.0.0"
-fastapi-cli = ">=0.0.2"
-httpx = ">=0.23.0"
-jinja2 = ">=2.11.2"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0 || >2.0.0,<2.0.1 || >2.0.1,<2.1.0 || >2.1.0,<3.0.0"
-python-multipart = ">=0.0.7"
starlette = ">=0.37.2,<0.38.0"
typing-extensions = ">=4.8.0"
-uvicorn = {version = ">=0.12.0", extras = ["standard"]}
[package.extras]
-all = ["email_validator (>=2.0.0)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=2.11.2)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.7)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
-
-[[package]]
-name = "fastapi-cli"
-version = "0.0.4"
-description = "Run and manage FastAPI apps from the command line with FastAPI CLI. 🚀"
-optional = false
-python-versions = ">=3.8"
-files = [
- {file = "fastapi_cli-0.0.4-py3-none-any.whl", hash = "sha256:a2552f3a7ae64058cdbb530be6fa6dbfc975dc165e4fa66d224c3d396e25e809"},
- {file = "fastapi_cli-0.0.4.tar.gz", hash = "sha256:e2e9ffaffc1f7767f488d6da34b6f5a377751c996f397902eb6abb99a67bde32"},
-]
-
-[package.dependencies]
-typer = ">=0.12.3"
-
-[package.extras]
-standard = ["fastapi", "uvicorn[standard] (>=0.15.0)"]
+all = ["email_validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=2.11.2)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.7)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
+standard = ["email_validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "httpx (>=0.23.0)", "jinja2 (>=2.11.2)", "python-multipart (>=0.0.7)", "uvicorn[standard] (>=0.12.0)"]
[[package]]
name = "fastavro"
@@ -2761,64 +2731,66 @@ test = ["cffi (>=1.12.2)", "coverage (>=5.0)", "dnspython (>=1.16.0,<2.0)", "idn
[[package]]
name = "gmpy2"
-version = "2.1.5"
-description = "gmpy2 interface to GMP/MPIR, MPFR, and MPC for Python 2.7 and 3.5+"
+version = "2.2.1"
+description = "gmpy2 interface to GMP, MPFR, and MPC for Python 3.7+"
optional = false
-python-versions = "*"
+python-versions = ">=3.7"
files = [
- {file = "gmpy2-2.1.5-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:d8e531a799f09cc66bd2de16b867cf19ce981bbc005bd026fa8d9af46cbdc08b"},
- {file = "gmpy2-2.1.5-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:eec3b3c9413dd1ea4413af57fc9c92ccbb4d5bb8336da5efbbda8f107fd90eec"},
- {file = "gmpy2-2.1.5-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:55dcf08d4278b439c1ba37d9b6893bb77bc34b55ccc9b1ad8645d4596a12700e"},
- {file = "gmpy2-2.1.5-cp27-cp27m-win_amd64.whl", hash = "sha256:8947f3b8a1c90f5bae26caf83b9ba2313e52cd06472f7c2be7a5b3a32bdc1bdd"},
- {file = "gmpy2-2.1.5-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:3459447d579dd0620a09c2aa4a9c1dbfc46cc8084b6928b901607e8565f04a83"},
- {file = "gmpy2-2.1.5-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:931adb3006afb55562094e9a866a1db584c11bc9b4a370d1f4719b551b5403fe"},
- {file = "gmpy2-2.1.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df404ae9a97b9f399d9ca6890b02bef175a373f87e317f93cbaae00f68774e11"},
- {file = "gmpy2-2.1.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:cd876ee5232b0d70dd0bae2b39f54a75f6cc9bbf1dd90b8f0fda8c267fa383a2"},
- {file = "gmpy2-2.1.5-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4877978256fbb6d6b51cc3892183327171c174fbf60671962ab7aa5e70af8eb3"},
- {file = "gmpy2-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:565d0444f0d174d84bcbcb0da8feede0ce09733dabd905b63343b94d666e46c0"},
- {file = "gmpy2-2.1.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:85614559144edad1223a46cae4a3e965818022cb2bb44438f3c42406395a9eb7"},
- {file = "gmpy2-2.1.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:29441b7d31ea60c93249667c6ef33f2560d34ce3cf284d7e4e32e91ed1f9ac1b"},
- {file = "gmpy2-2.1.5-cp310-cp310-win_amd64.whl", hash = "sha256:8946dc912c647f7cd29a587339c9e79860d9b34a3a59cbdc04d6d6fe20cfff39"},
- {file = "gmpy2-2.1.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:58097d7ef48f3eabc86e55ca078d3eee5fa3574d9d585f944ee7bc0f00900864"},
- {file = "gmpy2-2.1.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:fa312ec90e643c8ed2224e204f43239c2e27d14261b349c84912c8858a54c5d5"},
- {file = "gmpy2-2.1.5-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9ac72073e7938c2307e7e4645367709a32036787f5e176c4acf881c7d8efff28"},
- {file = "gmpy2-2.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3245fd34217649f6c48515ef42da67eb43794f24a20fc961dc2c0c99bb8ebb39"},
- {file = "gmpy2-2.1.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4e9d64c1e1e66a2137617c361714022da3de75787d51bd1aed205eb28ddb362c"},
- {file = "gmpy2-2.1.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:81004086f2543399b6b425989fc96cc02dd38ab74dcbfd3acb324af1a6770eaf"},
- {file = "gmpy2-2.1.5-cp311-cp311-win_amd64.whl", hash = "sha256:03beaccf3843c9e9d9cf70102a74cd1e617e792337b64ae73a417b80bf96b385"},
- {file = "gmpy2-2.1.5-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:131d441cc0e77620d88a900eaa6eee8648ba630621b8337b966cda76964e7662"},
- {file = "gmpy2-2.1.5-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:b6a04cfa85607b47e86eefe102b1124c6d0a8981f4197a3afd7071f0719ac9b6"},
- {file = "gmpy2-2.1.5-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:09800f5a7566093d74702ad31f775f176df539f1138f4475ba8edf11903a2b2b"},
- {file = "gmpy2-2.1.5-cp35-cp35m-win_amd64.whl", hash = "sha256:a3a61cd88aca0a891e26ada53f2bf3f4433d4fb1c771f12dec97e8edc17f9f7e"},
- {file = "gmpy2-2.1.5-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:411d1ea2f5a04d8857a7fe1e59d28d384f19232cb7519f29565c087bda364685"},
- {file = "gmpy2-2.1.5-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a5fa902b1c6911d41e6045c94eac57cf2ea76f71946ca65ab65ae8f5d20b2aae"},
- {file = "gmpy2-2.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51b95d8e2d6914552118d0316c8ce566441b709e001e66c5db16495be1a429ac"},
- {file = "gmpy2-2.1.5-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7bbe8d39d83e96b5f81b26e65f99a3e8794cf1edfd891e154a233757a26764fb"},
- {file = "gmpy2-2.1.5-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:e7f324dd859a1324bbc5d5375f431f1ac81c6487035a34cba12fbe8658a888f0"},
- {file = "gmpy2-2.1.5-cp36-cp36m-win_amd64.whl", hash = "sha256:c9e9909d12d06697867568007e9b945246f567116fa5b830513f72766ca8b0c7"},
- {file = "gmpy2-2.1.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4957d9164a8b2a93263e8a43f99c635a84c1a4044a256e1a496503dd624376a8"},
- {file = "gmpy2-2.1.5-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:efda6e0508d0c7fe79d0fc3fccd3bab90937dba05384224cbc08398856805ce6"},
- {file = "gmpy2-2.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8f59c69dd138d84d471530e0907c254429855a839b93b00c7e9fa7ec766feae"},
- {file = "gmpy2-2.1.5-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:943078c7abef7757758bb0f313b4346cf9b0c91f93039b5980d22f2ee0d53177"},
- {file = "gmpy2-2.1.5-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:c935af5fcd2fbd2ed89d0e0cf1c7fd11603101293dbddb46fd1325c56363573f"},
- {file = "gmpy2-2.1.5-cp37-cp37m-win_amd64.whl", hash = "sha256:18233c35d5bbddfe2ec8c269e216dc841ce24ba5f2b00e79e8278ba843eb22dc"},
- {file = "gmpy2-2.1.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6300a0e427bb8b12442db2629b7b271d4d0cd3dbffe2e3880c408932993d31ba"},
- {file = "gmpy2-2.1.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8d2299682455ee22830f7c0f5851a86ae121ccc5fca2f483be7229a91a2f3be5"},
- {file = "gmpy2-2.1.5-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f50d91d779fe24e7dd3feaa1c06e47e11452a73d0a8c67daeea055a6d58cf233"},
- {file = "gmpy2-2.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c398e6e5bb470f0529ca4e2490d5a396bc9c50c860818f297f47486e51e86673"},
- {file = "gmpy2-2.1.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:cfdb61f87edf9a7897e7c3e9204f141ddb1de68ecb7038edf0c676bdea815ef2"},
- {file = "gmpy2-2.1.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:356c986799a3b34bdcf845961976398556bcfe104e115379effefc50b2cce320"},
- {file = "gmpy2-2.1.5-cp38-cp38-win_amd64.whl", hash = "sha256:c40ed4d68e0b54efa53a9d9fe62662342dd85212f08382b852ca9effab2e7666"},
- {file = "gmpy2-2.1.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6131ccb4f34849b0fa54b9dd8261c00b16fcf4c3332696cb16469a21c217f884"},
- {file = "gmpy2-2.1.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:2fee8bb2934300173d8de0ce670bdfedbb5b09817db94c2467aafa18380a1286"},
- {file = "gmpy2-2.1.5-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c23c98db9cccb63872dd32bdd98275c9503809117d8a23ddd683d8baa3e3ee67"},
- {file = "gmpy2-2.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b2764dfc443c364b918506ecad8973a61b76ca0b5afdf460f940134166a2a3e7"},
- {file = "gmpy2-2.1.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4fccf90d28f934f76cc4252007d2e94cc38700ed016d3fd787974f79819381fd"},
- {file = "gmpy2-2.1.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0baf36b2724e154bf98ea17f4ff8234543dc7af7297ce3a0a7098bca0209b768"},
- {file = "gmpy2-2.1.5-cp39-cp39-win_amd64.whl", hash = "sha256:8739ca54323ff28bc317920ed96723a13558a3c442ef77ac325eb3cdd5d32d05"},
- {file = "gmpy2-2.1.5.tar.gz", hash = "sha256:bc297f1fd8c377ae67a4f493fc0f926e5d1b157e5c342e30a4d84dc7b9f95d96"},
+ {file = "gmpy2-2.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:431d599e1542b6e0b3618d3e296702c25215c97fb461d596e27adbe69d765dc6"},
+ {file = "gmpy2-2.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5e51848975837751d1038e82d006e8bb488b179f093ba7fc8a59e1d8a2c61663"},
+ {file = "gmpy2-2.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:89bdf26520b0bf39e148f97a7c9dd17e163637fdcd5fa3699fd70b5e9c246531"},
+ {file = "gmpy2-2.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a187cf303b94efb4c8915106406acac16e8dbaa3cdb6e856fa096673c3c02f1b"},
+ {file = "gmpy2-2.2.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:d26806e518dadd9ed6cf57fc5fb67e8e6ca533bd9a77fd079558ffadd57150c8"},
+ {file = "gmpy2-2.2.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:416d2f1c4a1af3c00946a8f85b4547ba2bede3903cae3095be12fbc0128f9f5f"},
+ {file = "gmpy2-2.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:b3cb0f02570f483d27581ea5659c43df0ff7759aaeb475219e0d9e10e8511a80"},
+ {file = "gmpy2-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:98e947491c67523d3147a500f377bb64d0b115e4ab8a12d628fb324bb0e142bf"},
+ {file = "gmpy2-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4ccd319a3a87529484167ae1391f937ac4a8724169fd5822bbb541d1eab612b0"},
+ {file = "gmpy2-2.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:827bcd433e5d62f1b732f45e6949419da4a53915d6c80a3c7a5a03d5a783a03a"},
+ {file = "gmpy2-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7131231fc96f57272066295c81cbf11b3233a9471659bca29ddc90a7bde9bfa"},
+ {file = "gmpy2-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:1cc6f2bb68ee00c20aae554e111dc781a76140e00c31e4eda5c8f2d4168ed06c"},
+ {file = "gmpy2-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ae388fe46e3d20af4675451a4b6c12fc1bb08e6e0e69ee47072638be21bf42d8"},
+ {file = "gmpy2-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:8b472ee3c123b77979374da2293ebf2c170b88212e173d64213104956d4678fb"},
+ {file = "gmpy2-2.2.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:90d03a1be1b1ad3944013fae5250316c3f4e6aec45ecdf189a5c7422d640004d"},
+ {file = "gmpy2-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:bd09dd43d199908c1d1d501c5de842b3bf754f99b94af5b5ef0e26e3b716d2d5"},
+ {file = "gmpy2-2.2.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3232859fda3e96fd1aecd6235ae20476ed4506562bcdef6796a629b78bb96acd"},
+ {file = "gmpy2-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:30fba6f7cf43fb7f8474216701b5aaddfa5e6a06d560e88a67f814062934e863"},
+ {file = "gmpy2-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:9b33cae533ede8173bc7d4bb855b388c5b636ca9f22a32c949f2eb7e0cc531b2"},
+ {file = "gmpy2-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:954e7e1936c26e370ca31bbd49729ebeeb2006a8f9866b1e778ebb89add2e941"},
+ {file = "gmpy2-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:c929870137b20d9c3f7dd97f43615b2d2c1a2470e50bafd9a5eea2e844f462e9"},
+ {file = "gmpy2-2.2.1-cp313-cp313-macosx_10_9_x86_64.whl", hash = "sha256:a3859ef1706bc631ee7fbdf3ae0367da1709fae1e2538b0e1bc6c53fa3ee7ef4"},
+ {file = "gmpy2-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:6468fc604d5a322fe037b8880848eef2fef7e9f843872645c4c11eef276896ad"},
+ {file = "gmpy2-2.2.1-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a845a7701217da4ff81a2e4ae8df479e904621b7953d3a6b4ca0ff139f1fa71f"},
+ {file = "gmpy2-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0b1e14ef1793a1e0176e7b54b29b44c1d93cf8699ca8e4a93ed53fdd16e2c52"},
+ {file = "gmpy2-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:13b0e00170c14ed4cd1e007cc6f1bcb3417b5677d2ef964d46959a1833aa84ab"},
+ {file = "gmpy2-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:831280e3943897ae6bf69ebd868dc6de2a46c078230b9f2a9f66b4ad793d0440"},
+ {file = "gmpy2-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:74235fcce8a1bee207bf8d43955cb04563f71ba8231a3bbafc6dd7869503d05c"},
+ {file = "gmpy2-2.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:67aa03a50ad85687193174875a72e145114946fc3aa64b1c9d4a724b70afc18d"},
+ {file = "gmpy2-2.2.1-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1854e35312088608880139d06326683a56d7547d68a5817f472ac9046920b7c8"},
+ {file = "gmpy2-2.2.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3c35081bc42741fe5d491cffcff2c71107970b85b6687e6b0001db5fcc70d644"},
+ {file = "gmpy2-2.2.1-cp37-cp37m-musllinux_1_2_i686.whl", hash = "sha256:152e8aaec5046fd4887e45719ab5ea5fac90df0077574c79fc124dc93fd237c0"},
+ {file = "gmpy2-2.2.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:31826f502cd575898ef1fd5959b48114b3e91540385491ab9303ffa04d88a6eb"},
+ {file = "gmpy2-2.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:98f5c85177225f91b93caf64e1876e081108c5dd1d53f0b79f917561935fb389"},
+ {file = "gmpy2-2.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:235f69d2e83d7418252871f1950bf8fb8e80bf2e572c30859c85d7ee14196f3d"},
+ {file = "gmpy2-2.2.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:5079db302762e2669e0d664ea8fb56f46509514dd0387d98951e399838d9bb07"},
+ {file = "gmpy2-2.2.1-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e387faa6e860424a934ac23152803202980bd0c30605d8bd180bb015d8b09f75"},
+ {file = "gmpy2-2.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:887471cf563c5fc96456c404c805fb4a09c7e834123d7725b22f5394a48cff46"},
+ {file = "gmpy2-2.2.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:1adf779213b9bbf4b0270d1dea1822e3865c433ae02d4b97d20db8be8532e2f8"},
+ {file = "gmpy2-2.2.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:2ef74ffffbb16a84243098b51672b584f83baaa53535209639174244863aea8c"},
+ {file = "gmpy2-2.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:6699b88068c2af9abaf28cd078c876892a917750d8bee6734d8dfa708312fdf3"},
+ {file = "gmpy2-2.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:623e0f701dc74690d15037951b550160d24d75bf66213fc6642a51ac6a2e055e"},
+ {file = "gmpy2-2.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:31b9bfde30478d3b9c85641b4b7146554af16d60320962d79c3e45d724d1281d"},
+ {file = "gmpy2-2.2.1-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:674da3d7aeb7dbde52abc0adc0a285bf1b2f3d142779dad15acdbdb819fe9bc2"},
+ {file = "gmpy2-2.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23505c2ab66734f8a1b1fc5c4c1f8bbbd489bb02eef5940bbd974de69f2ddc2d"},
+ {file = "gmpy2-2.2.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:99f515dbd242cb07bf06e71c93e69c99a703ad55a22f5deac198256fd1c305ed"},
+ {file = "gmpy2-2.2.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:1c2daa0bb603734e6bee6245e275e57ed305a08da50dc3ce7b48eedece61216c"},
+ {file = "gmpy2-2.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:fbe36fcc45a591d4ef30fe38ac8db0afa35edfafdf325dbe4fe9162ceb264c0d"},
+ {file = "gmpy2-2.2.1.tar.gz", hash = "sha256:e83e07567441b78cb87544910cb3cc4fe94e7da987e93ef7622e76fb96650432"},
]
+[package.extras]
+docs = ["sphinx (>=4)", "sphinx-rtd-theme (>=1)"]
+tests = ["cython", "hypothesis", "mpmath", "pytest", "setuptools"]
+
[[package]]
name = "google-ai-generativelanguage"
version = "0.6.1"
@@ -3021,13 +2993,13 @@ grpc = ["grpcio (>=1.38.0,<2.0dev)", "grpcio-status (>=1.38.0,<2.0.dev0)"]
[[package]]
name = "google-cloud-resource-manager"
-version = "1.12.4"
+version = "1.12.5"
description = "Google Cloud Resource Manager API client library"
optional = false
python-versions = ">=3.7"
files = [
- {file = "google-cloud-resource-manager-1.12.4.tar.gz", hash = "sha256:3eda914a925e92465ef80faaab7e0f7a9312d486dd4e123d2c76e04bac688ff0"},
- {file = "google_cloud_resource_manager-1.12.4-py2.py3-none-any.whl", hash = "sha256:0b6663585f7f862166c0fb4c55fdda721fce4dc2dc1d5b52d03ee4bf2653a85f"},
+ {file = "google_cloud_resource_manager-1.12.5-py2.py3-none-any.whl", hash = "sha256:2708a718b45c79464b7b21559c701b5c92e6b0b1ab2146d0a256277a623dc175"},
+ {file = "google_cloud_resource_manager-1.12.5.tar.gz", hash = "sha256:b7af4254401ed4efa3aba3a929cb3ddb803fa6baf91a78485e45583597de5891"},
]
[package.dependencies]
@@ -3287,133 +3259,137 @@ protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4
[[package]]
name = "grpcio"
-version = "1.58.0"
+version = "1.63.0"
description = "HTTP/2-based RPC framework"
optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
files = [
- {file = "grpcio-1.58.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:3e6bebf1dfdbeb22afd95650e4f019219fef3ab86d3fca8ebade52e4bc39389a"},
- {file = "grpcio-1.58.0-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:cde11577d5b6fd73a00e6bfa3cf5f428f3f33c2d2878982369b5372bbc4acc60"},
- {file = "grpcio-1.58.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:a2d67ff99e70e86b2be46c1017ae40b4840d09467d5455b2708de6d4c127e143"},
- {file = "grpcio-1.58.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1ed979b273a81de36fc9c6716d9fb09dd3443efa18dcc8652501df11da9583e9"},
- {file = "grpcio-1.58.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:458899d2ebd55d5ca2350fd3826dfd8fcb11fe0f79828ae75e2b1e6051d50a29"},
- {file = "grpcio-1.58.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bc7ffef430b80345729ff0a6825e9d96ac87efe39216e87ac58c6c4ef400de93"},
- {file = "grpcio-1.58.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5b23d75e5173faa3d1296a7bedffb25afd2fddb607ef292dfc651490c7b53c3d"},
- {file = "grpcio-1.58.0-cp310-cp310-win32.whl", hash = "sha256:fad9295fe02455d4f158ad72c90ef8b4bcaadfdb5efb5795f7ab0786ad67dd58"},
- {file = "grpcio-1.58.0-cp310-cp310-win_amd64.whl", hash = "sha256:bc325fed4d074367bebd465a20763586e5e1ed5b943e9d8bc7c162b1f44fd602"},
- {file = "grpcio-1.58.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:652978551af02373a5a313e07bfef368f406b5929cf2d50fa7e4027f913dbdb4"},
- {file = "grpcio-1.58.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:9f13a171281ebb4d7b1ba9f06574bce2455dcd3f2f6d1fbe0fd0d84615c74045"},
- {file = "grpcio-1.58.0-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:8774219e21b05f750eef8adc416e9431cf31b98f6ce9def288e4cea1548cbd22"},
- {file = "grpcio-1.58.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:09206106848462763f7f273ca93d2d2d4d26cab475089e0de830bb76be04e9e8"},
- {file = "grpcio-1.58.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:62831d5e251dd7561d9d9e83a0b8655084b2a1f8ea91e4bd6b3cedfefd32c9d2"},
- {file = "grpcio-1.58.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:212f38c6a156862098f6bdc9a79bf850760a751d259d8f8f249fc6d645105855"},
- {file = "grpcio-1.58.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:4b12754af201bb993e6e2efd7812085ddaaef21d0a6f0ff128b97de1ef55aa4a"},
- {file = "grpcio-1.58.0-cp311-cp311-win32.whl", hash = "sha256:3886b4d56bd4afeac518dbc05933926198aa967a7d1d237a318e6fbc47141577"},
- {file = "grpcio-1.58.0-cp311-cp311-win_amd64.whl", hash = "sha256:002f228d197fea12797a14e152447044e14fb4fdb2eb5d6cfa496f29ddbf79ef"},
- {file = "grpcio-1.58.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:b5e8db0aff0a4819946215f156bd722b6f6c8320eb8419567ffc74850c9fd205"},
- {file = "grpcio-1.58.0-cp37-cp37m-macosx_10_10_universal2.whl", hash = "sha256:201e550b7e2ede113b63e718e7ece93cef5b0fbf3c45e8fe4541a5a4305acd15"},
- {file = "grpcio-1.58.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:d79b660681eb9bc66cc7cbf78d1b1b9e335ee56f6ea1755d34a31108b80bd3c8"},
- {file = "grpcio-1.58.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2ef8d4a76d2c7d8065aba829f8d0bc0055495c998dce1964ca5b302d02514fb3"},
- {file = "grpcio-1.58.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6cba491c638c76d3dc6c191d9c75041ca5b8f5c6de4b8327ecdcab527f130bb4"},
- {file = "grpcio-1.58.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:6801ff6652ecd2aae08ef994a3e49ff53de29e69e9cd0fd604a79ae4e545a95c"},
- {file = "grpcio-1.58.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:24edec346e69e672daf12b2c88e95c6f737f3792d08866101d8c5f34370c54fd"},
- {file = "grpcio-1.58.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7e473a7abad9af48e3ab5f3b5d237d18208024d28ead65a459bd720401bd2f8f"},
- {file = "grpcio-1.58.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:4891bbb4bba58acd1d620759b3be11245bfe715eb67a4864c8937b855b7ed7fa"},
- {file = "grpcio-1.58.0-cp38-cp38-macosx_10_10_universal2.whl", hash = "sha256:e9f995a8a421405958ff30599b4d0eec244f28edc760de82f0412c71c61763d2"},
- {file = "grpcio-1.58.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:2f85f87e2f087d9f632c085b37440a3169fda9cdde80cb84057c2fc292f8cbdf"},
- {file = "grpcio-1.58.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb6b92036ff312d5b4182fa72e8735d17aceca74d0d908a7f08e375456f03e07"},
- {file = "grpcio-1.58.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d81c2b2b24c32139dd2536972f1060678c6b9fbd106842a9fcdecf07b233eccd"},
- {file = "grpcio-1.58.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:fbcecb6aedd5c1891db1d70efbfbdc126c986645b5dd616a045c07d6bd2dfa86"},
- {file = "grpcio-1.58.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:92ae871a902cf19833328bd6498ec007b265aabf2fda845ab5bd10abcaf4c8c6"},
- {file = "grpcio-1.58.0-cp38-cp38-win32.whl", hash = "sha256:dc72e04620d49d3007771c0e0348deb23ca341c0245d610605dddb4ac65a37cb"},
- {file = "grpcio-1.58.0-cp38-cp38-win_amd64.whl", hash = "sha256:1c1c5238c6072470c7f1614bf7c774ffde6b346a100521de9ce791d1e4453afe"},
- {file = "grpcio-1.58.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:fe643af248442221db027da43ed43e53b73e11f40c9043738de9a2b4b6ca7697"},
- {file = "grpcio-1.58.0-cp39-cp39-macosx_10_10_universal2.whl", hash = "sha256:128eb1f8e70676d05b1b0c8e6600320fc222b3f8c985a92224248b1367122188"},
- {file = "grpcio-1.58.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:039003a5e0ae7d41c86c768ef8b3ee2c558aa0a23cf04bf3c23567f37befa092"},
- {file = "grpcio-1.58.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f061722cad3f9aabb3fbb27f3484ec9d4667b7328d1a7800c3c691a98f16bb0"},
- {file = "grpcio-1.58.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba0af11938acf8cd4cf815c46156bcde36fa5850518120920d52620cc3ec1830"},
- {file = "grpcio-1.58.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d4cef77ad2fed42b1ba9143465856d7e737279854e444925d5ba45fc1f3ba727"},
- {file = "grpcio-1.58.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:24765a627eb4d9288ace32d5104161c3654128fe27f2808ecd6e9b0cfa7fc8b9"},
- {file = "grpcio-1.58.0-cp39-cp39-win32.whl", hash = "sha256:f0241f7eb0d2303a545136c59bc565a35c4fc3b924ccbd69cb482f4828d6f31c"},
- {file = "grpcio-1.58.0-cp39-cp39-win_amd64.whl", hash = "sha256:dcfba7befe3a55dab6fe1eb7fc9359dc0c7f7272b30a70ae0af5d5b063842f28"},
- {file = "grpcio-1.58.0.tar.gz", hash = "sha256:532410c51ccd851b706d1fbc00a87be0f5312bd6f8e5dbf89d4e99c7f79d7499"},
+ {file = "grpcio-1.63.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:2e93aca840c29d4ab5db93f94ed0a0ca899e241f2e8aec6334ab3575dc46125c"},
+ {file = "grpcio-1.63.0-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:91b73d3f1340fefa1e1716c8c1ec9930c676d6b10a3513ab6c26004cb02d8b3f"},
+ {file = "grpcio-1.63.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:b3afbd9d6827fa6f475a4f91db55e441113f6d3eb9b7ebb8fb806e5bb6d6bd0d"},
+ {file = "grpcio-1.63.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f3f6883ce54a7a5f47db43289a0a4c776487912de1a0e2cc83fdaec9685cc9f"},
+ {file = "grpcio-1.63.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cf8dae9cc0412cb86c8de5a8f3be395c5119a370f3ce2e69c8b7d46bb9872c8d"},
+ {file = "grpcio-1.63.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:08e1559fd3b3b4468486b26b0af64a3904a8dbc78d8d936af9c1cf9636eb3e8b"},
+ {file = "grpcio-1.63.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5c039ef01516039fa39da8a8a43a95b64e288f79f42a17e6c2904a02a319b357"},
+ {file = "grpcio-1.63.0-cp310-cp310-win32.whl", hash = "sha256:ad2ac8903b2eae071055a927ef74121ed52d69468e91d9bcbd028bd0e554be6d"},
+ {file = "grpcio-1.63.0-cp310-cp310-win_amd64.whl", hash = "sha256:b2e44f59316716532a993ca2966636df6fbe7be4ab6f099de6815570ebe4383a"},
+ {file = "grpcio-1.63.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:f28f8b2db7b86c77916829d64ab21ff49a9d8289ea1564a2b2a3a8ed9ffcccd3"},
+ {file = "grpcio-1.63.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:65bf975639a1f93bee63ca60d2e4951f1b543f498d581869922910a476ead2f5"},
+ {file = "grpcio-1.63.0-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:b5194775fec7dc3dbd6a935102bb156cd2c35efe1685b0a46c67b927c74f0cfb"},
+ {file = "grpcio-1.63.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e4cbb2100ee46d024c45920d16e888ee5d3cf47c66e316210bc236d5bebc42b3"},
+ {file = "grpcio-1.63.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1ff737cf29b5b801619f10e59b581869e32f400159e8b12d7a97e7e3bdeee6a2"},
+ {file = "grpcio-1.63.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cd1e68776262dd44dedd7381b1a0ad09d9930ffb405f737d64f505eb7f77d6c7"},
+ {file = "grpcio-1.63.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:93f45f27f516548e23e4ec3fbab21b060416007dbe768a111fc4611464cc773f"},
+ {file = "grpcio-1.63.0-cp311-cp311-win32.whl", hash = "sha256:878b1d88d0137df60e6b09b74cdb73db123f9579232c8456f53e9abc4f62eb3c"},
+ {file = "grpcio-1.63.0-cp311-cp311-win_amd64.whl", hash = "sha256:756fed02dacd24e8f488f295a913f250b56b98fb793f41d5b2de6c44fb762434"},
+ {file = "grpcio-1.63.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:93a46794cc96c3a674cdfb59ef9ce84d46185fe9421baf2268ccb556f8f81f57"},
+ {file = "grpcio-1.63.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:a7b19dfc74d0be7032ca1eda0ed545e582ee46cd65c162f9e9fc6b26ef827dc6"},
+ {file = "grpcio-1.63.0-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:8064d986d3a64ba21e498b9a376cbc5d6ab2e8ab0e288d39f266f0fca169b90d"},
+ {file = "grpcio-1.63.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:219bb1848cd2c90348c79ed0a6b0ea51866bc7e72fa6e205e459fedab5770172"},
+ {file = "grpcio-1.63.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2d60cd1d58817bc5985fae6168d8b5655c4981d448d0f5b6194bbcc038090d2"},
+ {file = "grpcio-1.63.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:9e350cb096e5c67832e9b6e018cf8a0d2a53b2a958f6251615173165269a91b0"},
+ {file = "grpcio-1.63.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:56cdf96ff82e3cc90dbe8bac260352993f23e8e256e063c327b6cf9c88daf7a9"},
+ {file = "grpcio-1.63.0-cp312-cp312-win32.whl", hash = "sha256:3a6d1f9ea965e750db7b4ee6f9fdef5fdf135abe8a249e75d84b0a3e0c668a1b"},
+ {file = "grpcio-1.63.0-cp312-cp312-win_amd64.whl", hash = "sha256:d2497769895bb03efe3187fb1888fc20e98a5f18b3d14b606167dacda5789434"},
+ {file = "grpcio-1.63.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:fdf348ae69c6ff484402cfdb14e18c1b0054ac2420079d575c53a60b9b2853ae"},
+ {file = "grpcio-1.63.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a3abfe0b0f6798dedd2e9e92e881d9acd0fdb62ae27dcbbfa7654a57e24060c0"},
+ {file = "grpcio-1.63.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:6ef0ad92873672a2a3767cb827b64741c363ebaa27e7f21659e4e31f4d750280"},
+ {file = "grpcio-1.63.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b416252ac5588d9dfb8a30a191451adbf534e9ce5f56bb02cd193f12d8845b7f"},
+ {file = "grpcio-1.63.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e3b77eaefc74d7eb861d3ffbdf91b50a1bb1639514ebe764c47773b833fa2d91"},
+ {file = "grpcio-1.63.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b005292369d9c1f80bf70c1db1c17c6c342da7576f1c689e8eee4fb0c256af85"},
+ {file = "grpcio-1.63.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cdcda1156dcc41e042d1e899ba1f5c2e9f3cd7625b3d6ebfa619806a4c1aadda"},
+ {file = "grpcio-1.63.0-cp38-cp38-win32.whl", hash = "sha256:01799e8649f9e94ba7db1aeb3452188048b0019dc37696b0f5ce212c87c560c3"},
+ {file = "grpcio-1.63.0-cp38-cp38-win_amd64.whl", hash = "sha256:6a1a3642d76f887aa4009d92f71eb37809abceb3b7b5a1eec9c554a246f20e3a"},
+ {file = "grpcio-1.63.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:75f701ff645858a2b16bc8c9fc68af215a8bb2d5a9b647448129de6e85d52bce"},
+ {file = "grpcio-1.63.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:cacdef0348a08e475a721967f48206a2254a1b26ee7637638d9e081761a5ba86"},
+ {file = "grpcio-1.63.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0697563d1d84d6985e40ec5ec596ff41b52abb3fd91ec240e8cb44a63b895094"},
+ {file = "grpcio-1.63.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6426e1fb92d006e47476d42b8f240c1d916a6d4423c5258ccc5b105e43438f61"},
+ {file = "grpcio-1.63.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e48cee31bc5f5a31fb2f3b573764bd563aaa5472342860edcc7039525b53e46a"},
+ {file = "grpcio-1.63.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:50344663068041b34a992c19c600236e7abb42d6ec32567916b87b4c8b8833b3"},
+ {file = "grpcio-1.63.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:259e11932230d70ef24a21b9fb5bb947eb4703f57865a404054400ee92f42f5d"},
+ {file = "grpcio-1.63.0-cp39-cp39-win32.whl", hash = "sha256:a44624aad77bf8ca198c55af811fd28f2b3eaf0a50ec5b57b06c034416ef2d0a"},
+ {file = "grpcio-1.63.0-cp39-cp39-win_amd64.whl", hash = "sha256:166e5c460e5d7d4656ff9e63b13e1f6029b122104c1633d5f37eaea348d7356d"},
+ {file = "grpcio-1.63.0.tar.gz", hash = "sha256:f3023e14805c61bc439fb40ca545ac3d5740ce66120a678a3c6c2c55b70343d1"},
]
[package.extras]
-protobuf = ["grpcio-tools (>=1.58.0)"]
+protobuf = ["grpcio-tools (>=1.63.0)"]
[[package]]
name = "grpcio-status"
-version = "1.58.0"
+version = "1.62.3"
description = "Status proto mapping for gRPC"
optional = false
python-versions = ">=3.6"
files = [
- {file = "grpcio-status-1.58.0.tar.gz", hash = "sha256:0b42e70c0405a66a82d9e9867fa255fe59e618964a6099b20568c31dd9099766"},
- {file = "grpcio_status-1.58.0-py3-none-any.whl", hash = "sha256:36d46072b71a00147709ebce49344ac59b4b8960942acf0f813a8a7d6c1c28e0"},
+ {file = "grpcio-status-1.62.3.tar.gz", hash = "sha256:289bdd7b2459794a12cf95dc0cb727bd4a1742c37bd823f760236c937e53a485"},
+ {file = "grpcio_status-1.62.3-py3-none-any.whl", hash = "sha256:f9049b762ba8de6b1086789d8315846e094edac2c50beaf462338b301a8fd4b8"},
]
[package.dependencies]
googleapis-common-protos = ">=1.5.5"
-grpcio = ">=1.58.0"
+grpcio = ">=1.62.3"
protobuf = ">=4.21.6"
[[package]]
name = "grpcio-tools"
-version = "1.58.0"
+version = "1.62.3"
description = "Protobuf code generator for gRPC"
optional = false
python-versions = ">=3.7"
files = [
- {file = "grpcio-tools-1.58.0.tar.gz", hash = "sha256:6f4d80ceb591e31ca4dceec747dbe56132e1392a0a9bb1c8fe001d1b5cac898a"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:60c874908f3b40f32f1bb0221f7b3ab65ecb53a4d0a9f0a394f031f1b292c177"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:1852e798f31e5437ca7b37abc910e028b34732fb19364862cedb87b1dab66fad"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:149fb48f53cb691a6328f68bed8e4036c730f7106b7f98e92c2c0403f0b9e93c"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba3d383e5ca93826038b70f326fce8e8d12dd9b2f64d363a3d612f7475f12dd2"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6997511e9d2979f7a2389479682dbb06823f21a904e8fb0a5c6baaf1b4b4a863"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:8de0b701da479643f71fad71fe66885cddd89441ae16e2c724939b47742dc72e"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:43cc23908b63fcaefe690b10f68a2d8652c994b5b36ab77d2271d9608c895320"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-win32.whl", hash = "sha256:2c2221123d010dc6231799e63a37f2f4786bf614ef65b23009c387cd20d8b193"},
- {file = "grpcio_tools-1.58.0-cp310-cp310-win_amd64.whl", hash = "sha256:df2788736bdf58abe7b0e4d6b1ff806f7686c98c5ad900da312252e3322d91c4"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:b6ea5578712cdb29b0ff60bfc6405bf0e8d681b9c71d106dd1cda54fe7fe4e55"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:c29880f491581c83181c0a84a4d11402af2b13166a5266f64e246adf1da7aa66"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:32d51e933c3565414dd0835f930bb28a1cdeba435d9d2c87fa3cf8b1d284db3c"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ad9d77f25514584b1ddc981d70c9e50dfcfc388aa5ba943eee67520c5267ed9"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4882382631e6352819059278a5c878ce0b067008dd490911d16d5616e8a36d85"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:d84091a189d848d94645b7c48b61734c12ec03b0d46e5fc0049343a26989ac5c"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:85ac28a9621e9b92a3fc416288c4ce45542db0b4c31b3e23031dd8e0a0ec5590"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-win32.whl", hash = "sha256:7371d8ea80234b29affec145e25569523f549520ed7e53b2aa92bed412cdecfd"},
- {file = "grpcio_tools-1.58.0-cp311-cp311-win_amd64.whl", hash = "sha256:6997df6e7c5cf4d3ddc764240c1ff6a04b45d70ec28913b38fbc6396ef743e12"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:ac65b8d6e3acaf88b815edf9af88ff844b6600ff3d2591c05ba4f655b45d5fb4"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-macosx_10_10_universal2.whl", hash = "sha256:88e8191d0dd789bebf42533808728f5ce75d2c51e2a72bdf20abe5b5e3fbec42"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:a3dbece2a121761499a659b799979d4b738586d1065439053de553773eee11ca"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1086fe240c4c879b9721952b47d46996deb283c2d9355a8dc24a804811aacf70"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a7ae3dca059d5b358dd03fb63277428fa7d771605d4074a019138dd38d70719a"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:3f8904ac7fc3da2e874f00b3a986e8b7e004f499344a8e7eb213c26dfb025041"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:aadbd8393ae332e49731adb31e741f2e689989150569b7acc939f5ea43124e2d"},
- {file = "grpcio_tools-1.58.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1cb6e24194786687d4f23c64de1f0ce553af51de22746911bc37340f85f9783e"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:6ec43909095c630df3e479e77469bdad367067431f4af602f6ccb978a3b78afd"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-macosx_10_10_universal2.whl", hash = "sha256:4be49ed320b0ebcbc21d19ef555fbf229c1c452105522b728e1171ee2052078e"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:28eefebddec3d3adf19baca78f8b82a2287d358e1b1575ae018cdca8eacc6269"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2ef8c696e9d78676cc3f583a92bbbf2c84e94e350f7ad22f150a52559f4599d1"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9aeb5949e46558d21c51fd3ec3eeecc59c94dbca76c67c0a80d3da6b7437930c"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:6f7144aad9396d35fb1b80429600a970b559c2ad4d07020eeb180fe83cea2bee"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:4ee26e9253a721fff355737649678535f76cf5d642aa3ac0cd937832559b90af"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-win32.whl", hash = "sha256:343f572312039059a8797d6e29a7fc62196e73131ab01755660a9d48202267c1"},
- {file = "grpcio_tools-1.58.0-cp38-cp38-win_amd64.whl", hash = "sha256:cd7acfbb43b7338a78cf4a67528d05530d574d92b7c829d185b78dfc451d158f"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:46628247fbce86d18232eead24bd22ed0826c79f3fe2fc2fbdbde45971361049"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-macosx_10_10_universal2.whl", hash = "sha256:51587842a54e025a3d0d37afcf4ef2b7ac1def9a5d17448665cb424b53d6c287"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:a062ae3072a2a39a3c057f4d68b57b021f1dd2956cd09aab39709f6af494e1de"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eec3c93a08df11c80ef1c29a616bcbb0d83dbc6ea41b48306fcacc720416dfa7"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b63f823ac991ff77104da614d2a2485a59d37d57830eb2e387a6e2a3edc7fa2b"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:579c11a9f198847ed48dbc4f211c67fe96a73320b87c81f01b044b72e24a7d77"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:6ca2fc1dd8049d417a5034d944c9df05cee76f855b3e431627ab4292e7c01c47"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-win32.whl", hash = "sha256:453023120114c35d3d9d6717ea0820e5d5c140f51f9d0b621de4397ff854471b"},
- {file = "grpcio_tools-1.58.0-cp39-cp39-win_amd64.whl", hash = "sha256:b6c896f1df99c35cf062d4803c15663ff00a33ff09add28baa6e475cf6b5e258"},
+ {file = "grpcio-tools-1.62.3.tar.gz", hash = "sha256:7c7136015c3d62c3eef493efabaf9e3380e3e66d24ee8e94c01cb71377f57833"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:2f968b049c2849540751ec2100ab05e8086c24bead769ca734fdab58698408c1"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:0a8c0c4724ae9c2181b7dbc9b186df46e4f62cb18dc184e46d06c0ebeccf569e"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5782883a27d3fae8c425b29a9d3dcf5f47d992848a1b76970da3b5a28d424b26"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3d812daffd0c2d2794756bd45a353f89e55dc8f91eb2fc840c51b9f6be62667"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:b47d0dda1bdb0a0ba7a9a6de88e5a1ed61f07fad613964879954961e36d49193"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:ca246dffeca0498be9b4e1ee169b62e64694b0f92e6d0be2573e65522f39eea9"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-win32.whl", hash = "sha256:6a56d344b0bab30bf342a67e33d386b0b3c4e65868ffe93c341c51e1a8853ca5"},
+ {file = "grpcio_tools-1.62.3-cp310-cp310-win_amd64.whl", hash = "sha256:710fecf6a171dcbfa263a0a3e7070e0df65ba73158d4c539cec50978f11dad5d"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:703f46e0012af83a36082b5f30341113474ed0d91e36640da713355cd0ea5d23"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:7cc83023acd8bc72cf74c2edbe85b52098501d5b74d8377bfa06f3e929803492"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7ff7d58a45b75df67d25f8f144936a3e44aabd91afec833ee06826bd02b7fbe7"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7f2483ea232bd72d98a6dc6d7aefd97e5bc80b15cd909b9e356d6f3e326b6e43"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:962c84b4da0f3b14b3cdb10bc3837ebc5f136b67d919aea8d7bb3fd3df39528a"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8ad0473af5544f89fc5a1ece8676dd03bdf160fb3230f967e05d0f4bf89620e3"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-win32.whl", hash = "sha256:db3bc9fa39afc5e4e2767da4459df82b095ef0cab2f257707be06c44a1c2c3e5"},
+ {file = "grpcio_tools-1.62.3-cp311-cp311-win_amd64.whl", hash = "sha256:e0898d412a434e768a0c7e365acabe13ff1558b767e400936e26b5b6ed1ee51f"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-macosx_10_10_universal2.whl", hash = "sha256:d102b9b21c4e1e40af9a2ab3c6d41afba6bd29c0aa50ca013bf85c99cdc44ac5"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:0a52cc9444df978438b8d2332c0ca99000521895229934a59f94f37ed896b133"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:141d028bf5762d4a97f981c501da873589df3f7e02f4c1260e1921e565b376fa"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47a5c093ab256dec5714a7a345f8cc89315cb57c298b276fa244f37a0ba507f0"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:f6831fdec2b853c9daa3358535c55eed3694325889aa714070528cf8f92d7d6d"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e02d7c1a02e3814c94ba0cfe43d93e872c758bd8fd5c2797f894d0c49b4a1dfc"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-win32.whl", hash = "sha256:b881fd9505a84457e9f7e99362eeedd86497b659030cf57c6f0070df6d9c2b9b"},
+ {file = "grpcio_tools-1.62.3-cp312-cp312-win_amd64.whl", hash = "sha256:11c625eebefd1fd40a228fc8bae385e448c7e32a6ae134e43cf13bbc23f902b7"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-macosx_10_10_universal2.whl", hash = "sha256:ec6fbded0c61afe6f84e3c2a43e6d656791d95747d6d28b73eff1af64108c434"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:bfda6ee8990997a9df95c5606f3096dae65f09af7ca03a1e9ca28f088caca5cf"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b77f9f9cee87cd798f0fe26b7024344d1b03a7cd2d2cba7035f8433b13986325"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e02d3b96f2d0e4bab9ceaa30f37d4f75571e40c6272e95364bff3125a64d184"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1da38070738da53556a4b35ab67c1b9884a5dd48fa2f243db35dc14079ea3d0c"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ace43b26d88a58dcff16c20d23ff72b04d0a415f64d2820f4ff06b1166f50557"},
+ {file = "grpcio_tools-1.62.3-cp37-cp37m-win_amd64.whl", hash = "sha256:350a80485e302daaa95d335a931f97b693e170e02d43767ab06552c708808950"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-macosx_10_10_universal2.whl", hash = "sha256:c3a1ac9d394f8e229eb28eec2e04b9a6f5433fa19c9d32f1cb6066e3c5114a1d"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:11f363570dea661dde99e04a51bd108a5807b5df32a6f8bdf4860e34e94a4dbf"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc9ad9950119d8ae27634e68b7663cc8d340ae535a0f80d85a55e56a6973ab1f"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c5d22b252dcef11dd1e0fbbe5bbfb9b4ae048e8880d33338215e8ccbdb03edc"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:27cd9ef5c5d68d5ed104b6dcb96fe9c66b82050e546c9e255716903c3d8f0373"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:f4b1615adf67bd8bb71f3464146a6f9949972d06d21a4f5e87e73f6464d97f57"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-win32.whl", hash = "sha256:e18e15287c31baf574fcdf8251fb7f997d64e96c6ecf467906e576da0a079af6"},
+ {file = "grpcio_tools-1.62.3-cp38-cp38-win_amd64.whl", hash = "sha256:6c3064610826f50bd69410c63101954676edc703e03f9e8f978a135f1aaf97c1"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-macosx_10_10_universal2.whl", hash = "sha256:8e62cc7164b0b7c5128e637e394eb2ef3db0e61fc798e80c301de3b2379203ed"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:c8ad5cce554e2fcaf8842dee5d9462583b601a3a78f8b76a153c38c963f58c10"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ec279dcf3518201fc592c65002754f58a6b542798cd7f3ecd4af086422f33f29"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c989246c2aebc13253f08be32538a4039a64e12d9c18f6d662d7aee641dc8b5"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:ca4f5eeadbb57cf03317d6a2857823239a63a59cc935f5bd6cf6e8b7af7a7ecc"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0cb3a3436ac119cbd37a7d3331d9bdf85dad21a6ac233a3411dff716dcbf401e"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-win32.whl", hash = "sha256:3eae6ea76d62fcac091e1f15c2dcedf1dc3f114f8df1a972a8a0745e89f4cf61"},
+ {file = "grpcio_tools-1.62.3-cp39-cp39-win_amd64.whl", hash = "sha256:eec73a005443061f4759b71a056f745e3b000dc0dc125c9f20560232dfbcbd14"},
]
[package.dependencies]
-grpcio = ">=1.58.0"
+grpcio = ">=1.62.3"
protobuf = ">=4.21.6,<5.0dev"
setuptools = "*"
@@ -3778,22 +3754,22 @@ files = [
[[package]]
name = "importlib-metadata"
-version = "7.1.0"
+version = "8.0.0"
description = "Read metadata from Python packages"
optional = false
python-versions = ">=3.8"
files = [
- {file = "importlib_metadata-7.1.0-py3-none-any.whl", hash = "sha256:30962b96c0c223483ed6cc7280e7f0199feb01a0e40cfae4d4450fc6fab1f570"},
- {file = "importlib_metadata-7.1.0.tar.gz", hash = "sha256:b78938b926ee8d5f020fc4772d487045805a55ddbad2ecf21c6d60938dc7fcd2"},
+ {file = "importlib_metadata-8.0.0-py3-none-any.whl", hash = "sha256:15584cf2b1bf449d98ff8a6ff1abef57bf20f3ac6454f431736cd3e660921b2f"},
+ {file = "importlib_metadata-8.0.0.tar.gz", hash = "sha256:188bd24e4c346d3f0a933f275c2fec67050326a856b9a359881d7c2a697e8812"},
]
[package.dependencies]
zipp = ">=0.5"
[package.extras]
-docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
+doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
perf = ["ipython"]
-testing = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-perf (>=0.9.2)", "pytest-ruff (>=0.2.1)"]
+test = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-perf (>=0.9.2)", "pytest-ruff (>=0.2.1)"]
[[package]]
name = "importlib-resources"
@@ -4049,28 +4025,28 @@ files = [
[[package]]
name = "kombu"
-version = "5.3.7"
+version = "5.4.0"
description = "Messaging library for Python."
optional = false
python-versions = ">=3.8"
files = [
- {file = "kombu-5.3.7-py3-none-any.whl", hash = "sha256:5634c511926309c7f9789f1433e9ed402616b56836ef9878f01bd59267b4c7a9"},
- {file = "kombu-5.3.7.tar.gz", hash = "sha256:011c4cd9a355c14a1de8d35d257314a1d2456d52b7140388561acac3cf1a97bf"},
+ {file = "kombu-5.4.0-py3-none-any.whl", hash = "sha256:c8dd99820467610b4febbc7a9e8a0d3d7da2d35116b67184418b51cc520ea6b6"},
+ {file = "kombu-5.4.0.tar.gz", hash = "sha256:ad200a8dbdaaa2bbc5f26d2ee7d707d9a1fded353a0f4bd751ce8c7d9f449c60"},
]
[package.dependencies]
amqp = ">=5.1.1,<6.0.0"
-vine = "*"
+vine = "5.1.0"
[package.extras]
azureservicebus = ["azure-servicebus (>=7.10.0)"]
azurestoragequeues = ["azure-identity (>=1.12.0)", "azure-storage-queue (>=12.6.0)"]
confluentkafka = ["confluent-kafka (>=2.2.0)"]
-consul = ["python-consul2"]
+consul = ["python-consul2 (==0.1.5)"]
librabbitmq = ["librabbitmq (>=2.0.0)"]
mongodb = ["pymongo (>=4.1.1)"]
-msgpack = ["msgpack"]
-pyro = ["pyro4"]
+msgpack = ["msgpack (==1.0.8)"]
+pyro = ["pyro4 (==4.82)"]
qpid = ["qpid-python (>=0.26)", "qpid-tools (>=0.26)"]
redis = ["redis (>=4.5.2,!=4.5.5,!=5.0.2)"]
slmq = ["softlayer-messaging (>=1.0.3)"]
@@ -4121,13 +4097,13 @@ six = "*"
[[package]]
name = "langfuse"
-version = "2.39.3"
+version = "2.42.1"
description = "A client library for accessing langfuse"
optional = false
python-versions = "<4.0,>=3.8.1"
files = [
- {file = "langfuse-2.39.3-py3-none-any.whl", hash = "sha256:24b12cbb23f866b22706c1ea9631781f99fe37b0b15889d241198c4d1c07516b"},
- {file = "langfuse-2.39.3.tar.gz", hash = "sha256:4d2df8f9344572370703db103ddf97176df518699593254e6d6c2b8ca3bf2f12"},
+ {file = "langfuse-2.42.1-py3-none-any.whl", hash = "sha256:8895d9645aea91815db51565f90e110a76d5e157a7b12eaf1cd6959e7aaa2263"},
+ {file = "langfuse-2.42.1.tar.gz", hash = "sha256:f89faf1c14308d488c90f8b7d0368fff3d259f80ffe34d169b9cfc3f0dbfab82"},
]
[package.dependencies]
@@ -4146,13 +4122,13 @@ openai = ["openai (>=0.27.8)"]
[[package]]
name = "langsmith"
-version = "0.1.93"
+version = "0.1.98"
description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform."
optional = false
python-versions = "<4.0,>=3.8.1"
files = [
- {file = "langsmith-0.1.93-py3-none-any.whl", hash = "sha256:811210b9d5f108f36431bd7b997eb9476a9ecf5a2abd7ddbb606c1cdcf0f43ce"},
- {file = "langsmith-0.1.93.tar.gz", hash = "sha256:285b6ad3a54f50fa8eb97b5f600acc57d0e37e139dd8cf2111a117d0435ba9b4"},
+ {file = "langsmith-0.1.98-py3-none-any.whl", hash = "sha256:f79e8a128652bbcee4606d10acb6236973b5cd7dde76e3741186d3b97b5698e9"},
+ {file = "langsmith-0.1.98.tar.gz", hash = "sha256:e07678219a0502e8f26d35294e72127a39d25e32fafd091af5a7bb661e9a6bd1"},
]
[package.dependencies]
@@ -4195,96 +4171,161 @@ files = [
[[package]]
name = "lxml"
-version = "5.1.0"
+version = "5.2.2"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
optional = false
python-versions = ">=3.6"
files = [
- {file = "lxml-5.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:704f5572ff473a5f897745abebc6df40f22d4133c1e0a1f124e4f2bd3330ff7e"},
- {file = "lxml-5.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9d3c0f8567ffe7502d969c2c1b809892dc793b5d0665f602aad19895f8d508da"},
- {file = "lxml-5.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5fcfbebdb0c5d8d18b84118842f31965d59ee3e66996ac842e21f957eb76138c"},
- {file = "lxml-5.1.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2f37c6d7106a9d6f0708d4e164b707037b7380fcd0b04c5bd9cae1fb46a856fb"},
- {file = "lxml-5.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2befa20a13f1a75c751f47e00929fb3433d67eb9923c2c0b364de449121f447c"},
- {file = "lxml-5.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22b7ee4c35f374e2c20337a95502057964d7e35b996b1c667b5c65c567d2252a"},
- {file = "lxml-5.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:bf8443781533b8d37b295016a4b53c1494fa9a03573c09ca5104550c138d5c05"},
- {file = "lxml-5.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:82bddf0e72cb2af3cbba7cec1d2fd11fda0de6be8f4492223d4a268713ef2147"},
- {file = "lxml-5.1.0-cp310-cp310-win32.whl", hash = "sha256:b66aa6357b265670bb574f050ffceefb98549c721cf28351b748be1ef9577d93"},
- {file = "lxml-5.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:4946e7f59b7b6a9e27bef34422f645e9a368cb2be11bf1ef3cafc39a1f6ba68d"},
- {file = "lxml-5.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:14deca1460b4b0f6b01f1ddc9557704e8b365f55c63070463f6c18619ebf964f"},
- {file = "lxml-5.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ed8c3d2cd329bf779b7ed38db176738f3f8be637bb395ce9629fc76f78afe3d4"},
- {file = "lxml-5.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:436a943c2900bb98123b06437cdd30580a61340fbdb7b28aaf345a459c19046a"},
- {file = "lxml-5.1.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:acb6b2f96f60f70e7f34efe0c3ea34ca63f19ca63ce90019c6cbca6b676e81fa"},
- {file = "lxml-5.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:af8920ce4a55ff41167ddbc20077f5698c2e710ad3353d32a07d3264f3a2021e"},
- {file = "lxml-5.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7cfced4a069003d8913408e10ca8ed092c49a7f6cefee9bb74b6b3e860683b45"},
- {file = "lxml-5.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9e5ac3437746189a9b4121db2a7b86056ac8786b12e88838696899328fc44bb2"},
- {file = "lxml-5.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f4c9bda132ad108b387c33fabfea47866af87f4ea6ffb79418004f0521e63204"},
- {file = "lxml-5.1.0-cp311-cp311-win32.whl", hash = "sha256:bc64d1b1dab08f679fb89c368f4c05693f58a9faf744c4d390d7ed1d8223869b"},
- {file = "lxml-5.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:a5ab722ae5a873d8dcee1f5f45ddd93c34210aed44ff2dc643b5025981908cda"},
- {file = "lxml-5.1.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:9aa543980ab1fbf1720969af1d99095a548ea42e00361e727c58a40832439114"},
- {file = "lxml-5.1.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:6f11b77ec0979f7e4dc5ae081325a2946f1fe424148d3945f943ceaede98adb8"},
- {file = "lxml-5.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a36c506e5f8aeb40680491d39ed94670487ce6614b9d27cabe45d94cd5d63e1e"},
- {file = "lxml-5.1.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f643ffd2669ffd4b5a3e9b41c909b72b2a1d5e4915da90a77e119b8d48ce867a"},
- {file = "lxml-5.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:16dd953fb719f0ffc5bc067428fc9e88f599e15723a85618c45847c96f11f431"},
- {file = "lxml-5.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16018f7099245157564d7148165132c70adb272fb5a17c048ba70d9cc542a1a1"},
- {file = "lxml-5.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:82cd34f1081ae4ea2ede3d52f71b7be313756e99b4b5f829f89b12da552d3aa3"},
- {file = "lxml-5.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:19a1bc898ae9f06bccb7c3e1dfd73897ecbbd2c96afe9095a6026016e5ca97b8"},
- {file = "lxml-5.1.0-cp312-cp312-win32.whl", hash = "sha256:13521a321a25c641b9ea127ef478b580b5ec82aa2e9fc076c86169d161798b01"},
- {file = "lxml-5.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:1ad17c20e3666c035db502c78b86e58ff6b5991906e55bdbef94977700c72623"},
- {file = "lxml-5.1.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:24ef5a4631c0b6cceaf2dbca21687e29725b7c4e171f33a8f8ce23c12558ded1"},
- {file = "lxml-5.1.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8d2900b7f5318bc7ad8631d3d40190b95ef2aa8cc59473b73b294e4a55e9f30f"},
- {file = "lxml-5.1.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:601f4a75797d7a770daed8b42b97cd1bb1ba18bd51a9382077a6a247a12aa38d"},
- {file = "lxml-5.1.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b4b68c961b5cc402cbd99cca5eb2547e46ce77260eb705f4d117fd9c3f932b95"},
- {file = "lxml-5.1.0-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:afd825e30f8d1f521713a5669b63657bcfe5980a916c95855060048b88e1adb7"},
- {file = "lxml-5.1.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:262bc5f512a66b527d026518507e78c2f9c2bd9eb5c8aeeb9f0eb43fcb69dc67"},
- {file = "lxml-5.1.0-cp36-cp36m-win32.whl", hash = "sha256:e856c1c7255c739434489ec9c8aa9cdf5179785d10ff20add308b5d673bed5cd"},
- {file = "lxml-5.1.0-cp36-cp36m-win_amd64.whl", hash = "sha256:c7257171bb8d4432fe9d6fdde4d55fdbe663a63636a17f7f9aaba9bcb3153ad7"},
- {file = "lxml-5.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b9e240ae0ba96477682aa87899d94ddec1cc7926f9df29b1dd57b39e797d5ab5"},
- {file = "lxml-5.1.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a96f02ba1bcd330807fc060ed91d1f7a20853da6dd449e5da4b09bfcc08fdcf5"},
- {file = "lxml-5.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e3898ae2b58eeafedfe99e542a17859017d72d7f6a63de0f04f99c2cb125936"},
- {file = "lxml-5.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:61c5a7edbd7c695e54fca029ceb351fc45cd8860119a0f83e48be44e1c464862"},
- {file = "lxml-5.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:3aeca824b38ca78d9ee2ab82bd9883083d0492d9d17df065ba3b94e88e4d7ee6"},
- {file = "lxml-5.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8f52fe6859b9db71ee609b0c0a70fea5f1e71c3462ecf144ca800d3f434f0764"},
- {file = "lxml-5.1.0-cp37-cp37m-win32.whl", hash = "sha256:d42e3a3fc18acc88b838efded0e6ec3edf3e328a58c68fbd36a7263a874906c8"},
- {file = "lxml-5.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:eac68f96539b32fce2c9b47eb7c25bb2582bdaf1bbb360d25f564ee9e04c542b"},
- {file = "lxml-5.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:ae15347a88cf8af0949a9872b57a320d2605ae069bcdf047677318bc0bba45b1"},
- {file = "lxml-5.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c26aab6ea9c54d3bed716b8851c8bfc40cb249b8e9880e250d1eddde9f709bf5"},
- {file = "lxml-5.1.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:342e95bddec3a698ac24378d61996b3ee5ba9acfeb253986002ac53c9a5f6f84"},
- {file = "lxml-5.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:725e171e0b99a66ec8605ac77fa12239dbe061482ac854d25720e2294652eeaa"},
- {file = "lxml-5.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d184e0d5c918cff04cdde9dbdf9600e960161d773666958c9d7b565ccc60c45"},
- {file = "lxml-5.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:98f3f020a2b736566c707c8e034945c02aa94e124c24f77ca097c446f81b01f1"},
- {file = "lxml-5.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6d48fc57e7c1e3df57be5ae8614bab6d4e7b60f65c5457915c26892c41afc59e"},
- {file = "lxml-5.1.0-cp38-cp38-win32.whl", hash = "sha256:7ec465e6549ed97e9f1e5ed51c657c9ede767bc1c11552f7f4d022c4df4a977a"},
- {file = "lxml-5.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:b21b4031b53d25b0858d4e124f2f9131ffc1530431c6d1321805c90da78388d1"},
- {file = "lxml-5.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:52427a7eadc98f9e62cb1368a5079ae826f94f05755d2d567d93ee1bc3ceb354"},
- {file = "lxml-5.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a2a2c724d97c1eb8cf966b16ca2915566a4904b9aad2ed9a09c748ffe14f969"},
- {file = "lxml-5.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:843b9c835580d52828d8f69ea4302537337a21e6b4f1ec711a52241ba4a824f3"},
- {file = "lxml-5.1.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9b99f564659cfa704a2dd82d0684207b1aadf7d02d33e54845f9fc78e06b7581"},
- {file = "lxml-5.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f8b0c78e7aac24979ef09b7f50da871c2de2def043d468c4b41f512d831e912"},
- {file = "lxml-5.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9bcf86dfc8ff3e992fed847c077bd875d9e0ba2fa25d859c3a0f0f76f07f0c8d"},
- {file = "lxml-5.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:49a9b4af45e8b925e1cd6f3b15bbba2c81e7dba6dce170c677c9cda547411e14"},
- {file = "lxml-5.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:280f3edf15c2a967d923bcfb1f8f15337ad36f93525828b40a0f9d6c2ad24890"},
- {file = "lxml-5.1.0-cp39-cp39-win32.whl", hash = "sha256:ed7326563024b6e91fef6b6c7a1a2ff0a71b97793ac33dbbcf38f6005e51ff6e"},
- {file = "lxml-5.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:8d7b4beebb178e9183138f552238f7e6613162a42164233e2bda00cb3afac58f"},
- {file = "lxml-5.1.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9bd0ae7cc2b85320abd5e0abad5ccee5564ed5f0cc90245d2f9a8ef330a8deae"},
- {file = "lxml-5.1.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8c1d679df4361408b628f42b26a5d62bd3e9ba7f0c0e7969f925021554755aa"},
- {file = "lxml-5.1.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:2ad3a8ce9e8a767131061a22cd28fdffa3cd2dc193f399ff7b81777f3520e372"},
- {file = "lxml-5.1.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:304128394c9c22b6569eba2a6d98392b56fbdfbad58f83ea702530be80d0f9df"},
- {file = "lxml-5.1.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d74fcaf87132ffc0447b3c685a9f862ffb5b43e70ea6beec2fb8057d5d2a1fea"},
- {file = "lxml-5.1.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:8cf5877f7ed384dabfdcc37922c3191bf27e55b498fecece9fd5c2c7aaa34c33"},
- {file = "lxml-5.1.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:877efb968c3d7eb2dad540b6cabf2f1d3c0fbf4b2d309a3c141f79c7e0061324"},
- {file = "lxml-5.1.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f14a4fb1c1c402a22e6a341a24c1341b4a3def81b41cd354386dcb795f83897"},
- {file = "lxml-5.1.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:25663d6e99659544ee8fe1b89b1a8c0aaa5e34b103fab124b17fa958c4a324a6"},
- {file = "lxml-5.1.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:8b9f19df998761babaa7f09e6bc169294eefafd6149aaa272081cbddc7ba4ca3"},
- {file = "lxml-5.1.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e53d7e6a98b64fe54775d23a7c669763451340c3d44ad5e3a3b48a1efbdc96f"},
- {file = "lxml-5.1.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:c3cd1fc1dc7c376c54440aeaaa0dcc803d2126732ff5c6b68ccd619f2e64be4f"},
- {file = "lxml-5.1.0.tar.gz", hash = "sha256:3eea6ed6e6c918e468e693c41ef07f3c3acc310b70ddd9cc72d9ef84bc9564ca"},
+ {file = "lxml-5.2.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:364d03207f3e603922d0d3932ef363d55bbf48e3647395765f9bfcbdf6d23632"},
+ {file = "lxml-5.2.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:50127c186f191b8917ea2fb8b206fbebe87fd414a6084d15568c27d0a21d60db"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:74e4f025ef3db1c6da4460dd27c118d8cd136d0391da4e387a15e48e5c975147"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:981a06a3076997adf7c743dcd0d7a0415582661e2517c7d961493572e909aa1d"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aef5474d913d3b05e613906ba4090433c515e13ea49c837aca18bde190853dff"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1e275ea572389e41e8b039ac076a46cb87ee6b8542df3fff26f5baab43713bca"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5b65529bb2f21ac7861a0e94fdbf5dc0daab41497d18223b46ee8515e5ad297"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:bcc98f911f10278d1daf14b87d65325851a1d29153caaf146877ec37031d5f36"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:b47633251727c8fe279f34025844b3b3a3e40cd1b198356d003aa146258d13a2"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:fbc9d316552f9ef7bba39f4edfad4a734d3d6f93341232a9dddadec4f15d425f"},
+ {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:13e69be35391ce72712184f69000cda04fc89689429179bc4c0ae5f0b7a8c21b"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:3b6a30a9ab040b3f545b697cb3adbf3696c05a3a68aad172e3fd7ca73ab3c835"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:a233bb68625a85126ac9f1fc66d24337d6e8a0f9207b688eec2e7c880f012ec0"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:dfa7c241073d8f2b8e8dbc7803c434f57dbb83ae2a3d7892dd068d99e96efe2c"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1a7aca7964ac4bb07680d5c9d63b9d7028cace3e2d43175cb50bba8c5ad33316"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ae4073a60ab98529ab8a72ebf429f2a8cc612619a8c04e08bed27450d52103c0"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:ffb2be176fed4457e445fe540617f0252a72a8bc56208fd65a690fdb1f57660b"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:e290d79a4107d7d794634ce3e985b9ae4f920380a813717adf61804904dc4393"},
+ {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:96e85aa09274955bb6bd483eaf5b12abadade01010478154b0ec70284c1b1526"},
+ {file = "lxml-5.2.2-cp310-cp310-win32.whl", hash = "sha256:f956196ef61369f1685d14dad80611488d8dc1ef00be57c0c5a03064005b0f30"},
+ {file = "lxml-5.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:875a3f90d7eb5c5d77e529080d95140eacb3c6d13ad5b616ee8095447b1d22e7"},
+ {file = "lxml-5.2.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:45f9494613160d0405682f9eee781c7e6d1bf45f819654eb249f8f46a2c22545"},
+ {file = "lxml-5.2.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b0b3f2df149efb242cee2ffdeb6674b7f30d23c9a7af26595099afaf46ef4e88"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d28cb356f119a437cc58a13f8135ab8a4c8ece18159eb9194b0d269ec4e28083"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:657a972f46bbefdbba2d4f14413c0d079f9ae243bd68193cb5061b9732fa54c1"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b74b9ea10063efb77a965a8d5f4182806fbf59ed068b3c3fd6f30d2ac7bee734"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:07542787f86112d46d07d4f3c4e7c760282011b354d012dc4141cc12a68cef5f"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:303f540ad2dddd35b92415b74b900c749ec2010e703ab3bfd6660979d01fd4ed"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:2eb2227ce1ff998faf0cd7fe85bbf086aa41dfc5af3b1d80867ecfe75fb68df3"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:1d8a701774dfc42a2f0b8ccdfe7dbc140500d1049e0632a611985d943fcf12df"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:56793b7a1a091a7c286b5f4aa1fe4ae5d1446fe742d00cdf2ffb1077865db10d"},
+ {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:eb00b549b13bd6d884c863554566095bf6fa9c3cecb2e7b399c4bc7904cb33b5"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1a2569a1f15ae6c8c64108a2cd2b4a858fc1e13d25846be0666fc144715e32ab"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:8cf85a6e40ff1f37fe0f25719aadf443686b1ac7652593dc53c7ef9b8492b115"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:d237ba6664b8e60fd90b8549a149a74fcc675272e0e95539a00522e4ca688b04"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0b3f5016e00ae7630a4b83d0868fca1e3d494c78a75b1c7252606a3a1c5fc2ad"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:23441e2b5339bc54dc949e9e675fa35efe858108404ef9aa92f0456929ef6fe8"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:2fb0ba3e8566548d6c8e7dd82a8229ff47bd8fb8c2da237607ac8e5a1b8312e5"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:79d1fb9252e7e2cfe4de6e9a6610c7cbb99b9708e2c3e29057f487de5a9eaefa"},
+ {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6dcc3d17eac1df7859ae01202e9bb11ffa8c98949dcbeb1069c8b9a75917e01b"},
+ {file = "lxml-5.2.2-cp311-cp311-win32.whl", hash = "sha256:4c30a2f83677876465f44c018830f608fa3c6a8a466eb223535035fbc16f3438"},
+ {file = "lxml-5.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:49095a38eb333aaf44c06052fd2ec3b8f23e19747ca7ec6f6c954ffea6dbf7be"},
+ {file = "lxml-5.2.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:7429e7faa1a60cad26ae4227f4dd0459efde239e494c7312624ce228e04f6391"},
+ {file = "lxml-5.2.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:50ccb5d355961c0f12f6cf24b7187dbabd5433f29e15147a67995474f27d1776"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc911208b18842a3a57266d8e51fc3cfaccee90a5351b92079beed912a7914c2"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:33ce9e786753743159799fdf8e92a5da351158c4bfb6f2db0bf31e7892a1feb5"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ec87c44f619380878bd49ca109669c9f221d9ae6883a5bcb3616785fa8f94c97"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08ea0f606808354eb8f2dfaac095963cb25d9d28e27edcc375d7b30ab01abbf6"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75a9632f1d4f698b2e6e2e1ada40e71f369b15d69baddb8968dcc8e683839b18"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:74da9f97daec6928567b48c90ea2c82a106b2d500f397eeb8941e47d30b1ca85"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:0969e92af09c5687d769731e3f39ed62427cc72176cebb54b7a9d52cc4fa3b73"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:9164361769b6ca7769079f4d426a41df6164879f7f3568be9086e15baca61466"},
+ {file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:d26a618ae1766279f2660aca0081b2220aca6bd1aa06b2cf73f07383faf48927"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab67ed772c584b7ef2379797bf14b82df9aa5f7438c5b9a09624dd834c1c1aaf"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:3d1e35572a56941b32c239774d7e9ad724074d37f90c7a7d499ab98761bd80cf"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:8268cbcd48c5375f46e000adb1390572c98879eb4f77910c6053d25cc3ac2c67"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e282aedd63c639c07c3857097fc0e236f984ceb4089a8b284da1c526491e3f3d"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6dfdc2bfe69e9adf0df4915949c22a25b39d175d599bf98e7ddf620a13678585"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:4aefd911793b5d2d7a921233a54c90329bf3d4a6817dc465f12ffdfe4fc7b8fe"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:8b8df03a9e995b6211dafa63b32f9d405881518ff1ddd775db4e7b98fb545e1c"},
+ {file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f11ae142f3a322d44513de1018b50f474f8f736bc3cd91d969f464b5bfef8836"},
+ {file = "lxml-5.2.2-cp312-cp312-win32.whl", hash = "sha256:16a8326e51fcdffc886294c1e70b11ddccec836516a343f9ed0f82aac043c24a"},
+ {file = "lxml-5.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:bbc4b80af581e18568ff07f6395c02114d05f4865c2812a1f02f2eaecf0bfd48"},
+ {file = "lxml-5.2.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e3d9d13603410b72787579769469af730c38f2f25505573a5888a94b62b920f8"},
+ {file = "lxml-5.2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38b67afb0a06b8575948641c1d6d68e41b83a3abeae2ca9eed2ac59892b36706"},
+ {file = "lxml-5.2.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c689d0d5381f56de7bd6966a4541bff6e08bf8d3871bbd89a0c6ab18aa699573"},
+ {file = "lxml-5.2.2-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:cf2a978c795b54c539f47964ec05e35c05bd045db5ca1e8366988c7f2fe6b3ce"},
+ {file = "lxml-5.2.2-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:739e36ef7412b2bd940f75b278749106e6d025e40027c0b94a17ef7968d55d56"},
+ {file = "lxml-5.2.2-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d8bbcd21769594dbba9c37d3c819e2d5847656ca99c747ddb31ac1701d0c0ed9"},
+ {file = "lxml-5.2.2-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:2304d3c93f2258ccf2cf7a6ba8c761d76ef84948d87bf9664e14d203da2cd264"},
+ {file = "lxml-5.2.2-cp36-cp36m-win32.whl", hash = "sha256:02437fb7308386867c8b7b0e5bc4cd4b04548b1c5d089ffb8e7b31009b961dc3"},
+ {file = "lxml-5.2.2-cp36-cp36m-win_amd64.whl", hash = "sha256:edcfa83e03370032a489430215c1e7783128808fd3e2e0a3225deee278585196"},
+ {file = "lxml-5.2.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:28bf95177400066596cdbcfc933312493799382879da504633d16cf60bba735b"},
+ {file = "lxml-5.2.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a745cc98d504d5bd2c19b10c79c61c7c3df9222629f1b6210c0368177589fb8"},
+ {file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1b590b39ef90c6b22ec0be925b211298e810b4856909c8ca60d27ffbca6c12e6"},
+ {file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b336b0416828022bfd5a2e3083e7f5ba54b96242159f83c7e3eebaec752f1716"},
+ {file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:c2faf60c583af0d135e853c86ac2735ce178f0e338a3c7f9ae8f622fd2eb788c"},
+ {file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:4bc6cb140a7a0ad1f7bc37e018d0ed690b7b6520ade518285dc3171f7a117905"},
+ {file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7ff762670cada8e05b32bf1e4dc50b140790909caa8303cfddc4d702b71ea184"},
+ {file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:57f0a0bbc9868e10ebe874e9f129d2917750adf008fe7b9c1598c0fbbfdde6a6"},
+ {file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:a6d2092797b388342c1bc932077ad232f914351932353e2e8706851c870bca1f"},
+ {file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:60499fe961b21264e17a471ec296dcbf4365fbea611bf9e303ab69db7159ce61"},
+ {file = "lxml-5.2.2-cp37-cp37m-win32.whl", hash = "sha256:d9b342c76003c6b9336a80efcc766748a333573abf9350f4094ee46b006ec18f"},
+ {file = "lxml-5.2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:b16db2770517b8799c79aa80f4053cd6f8b716f21f8aca962725a9565ce3ee40"},
+ {file = "lxml-5.2.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7ed07b3062b055d7a7f9d6557a251cc655eed0b3152b76de619516621c56f5d3"},
+ {file = "lxml-5.2.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f60fdd125d85bf9c279ffb8e94c78c51b3b6a37711464e1f5f31078b45002421"},
+ {file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a7e24cb69ee5f32e003f50e016d5fde438010c1022c96738b04fc2423e61706"},
+ {file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23cfafd56887eaed93d07bc4547abd5e09d837a002b791e9767765492a75883f"},
+ {file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:19b4e485cd07b7d83e3fe3b72132e7df70bfac22b14fe4bf7a23822c3a35bff5"},
+ {file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:7ce7ad8abebe737ad6143d9d3bf94b88b93365ea30a5b81f6877ec9c0dee0a48"},
+ {file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e49b052b768bb74f58c7dda4e0bdf7b79d43a9204ca584ffe1fb48a6f3c84c66"},
+ {file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d14a0d029a4e176795cef99c056d58067c06195e0c7e2dbb293bf95c08f772a3"},
+ {file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:be49ad33819d7dcc28a309b86d4ed98e1a65f3075c6acd3cd4fe32103235222b"},
+ {file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a6d17e0370d2516d5bb9062c7b4cb731cff921fc875644c3d751ad857ba9c5b1"},
+ {file = "lxml-5.2.2-cp38-cp38-win32.whl", hash = "sha256:5b8c041b6265e08eac8a724b74b655404070b636a8dd6d7a13c3adc07882ef30"},
+ {file = "lxml-5.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:f61efaf4bed1cc0860e567d2ecb2363974d414f7f1f124b1df368bbf183453a6"},
+ {file = "lxml-5.2.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:fb91819461b1b56d06fa4bcf86617fac795f6a99d12239fb0c68dbeba41a0a30"},
+ {file = "lxml-5.2.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d4ed0c7cbecde7194cd3228c044e86bf73e30a23505af852857c09c24e77ec5d"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54401c77a63cc7d6dc4b4e173bb484f28a5607f3df71484709fe037c92d4f0ed"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:625e3ef310e7fa3a761d48ca7ea1f9d8718a32b1542e727d584d82f4453d5eeb"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:519895c99c815a1a24a926d5b60627ce5ea48e9f639a5cd328bda0515ea0f10c"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c7079d5eb1c1315a858bbf180000757db8ad904a89476653232db835c3114001"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:343ab62e9ca78094f2306aefed67dcfad61c4683f87eee48ff2fd74902447726"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:cd9e78285da6c9ba2d5c769628f43ef66d96ac3085e59b10ad4f3707980710d3"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:546cf886f6242dff9ec206331209db9c8e1643ae642dea5fdbecae2453cb50fd"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:02f6a8eb6512fdc2fd4ca10a49c341c4e109aa6e9448cc4859af5b949622715a"},
+ {file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:339ee4a4704bc724757cd5dd9dc8cf4d00980f5d3e6e06d5847c1b594ace68ab"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0a028b61a2e357ace98b1615fc03f76eb517cc028993964fe08ad514b1e8892d"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:f90e552ecbad426eab352e7b2933091f2be77115bb16f09f78404861c8322981"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:d83e2d94b69bf31ead2fa45f0acdef0757fa0458a129734f59f67f3d2eb7ef32"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a02d3c48f9bb1e10c7788d92c0c7db6f2002d024ab6e74d6f45ae33e3d0288a3"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6d68ce8e7b2075390e8ac1e1d3a99e8b6372c694bbe612632606d1d546794207"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:453d037e09a5176d92ec0fd282e934ed26d806331a8b70ab431a81e2fbabf56d"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:3b019d4ee84b683342af793b56bb35034bd749e4cbdd3d33f7d1107790f8c472"},
+ {file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:cb3942960f0beb9f46e2a71a3aca220d1ca32feb5a398656be934320804c0df9"},
+ {file = "lxml-5.2.2-cp39-cp39-win32.whl", hash = "sha256:ac6540c9fff6e3813d29d0403ee7a81897f1d8ecc09a8ff84d2eea70ede1cdbf"},
+ {file = "lxml-5.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:610b5c77428a50269f38a534057444c249976433f40f53e3b47e68349cca1425"},
+ {file = "lxml-5.2.2-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b537bd04d7ccd7c6350cdaaaad911f6312cbd61e6e6045542f781c7f8b2e99d2"},
+ {file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4820c02195d6dfb7b8508ff276752f6b2ff8b64ae5d13ebe02e7667e035000b9"},
+ {file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a09f6184f17a80897172863a655467da2b11151ec98ba8d7af89f17bf63dae"},
+ {file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:76acba4c66c47d27c8365e7c10b3d8016a7da83d3191d053a58382311a8bf4e1"},
+ {file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b128092c927eaf485928cec0c28f6b8bead277e28acf56800e972aa2c2abd7a2"},
+ {file = "lxml-5.2.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ae791f6bd43305aade8c0e22f816b34f3b72b6c820477aab4d18473a37e8090b"},
+ {file = "lxml-5.2.2-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a2f6a1bc2460e643785a2cde17293bd7a8f990884b822f7bca47bee0a82fc66b"},
+ {file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e8d351ff44c1638cb6e980623d517abd9f580d2e53bfcd18d8941c052a5a009"},
+ {file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bec4bd9133420c5c52d562469c754f27c5c9e36ee06abc169612c959bd7dbb07"},
+ {file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:55ce6b6d803890bd3cc89975fca9de1dff39729b43b73cb15ddd933b8bc20484"},
+ {file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ab6a358d1286498d80fe67bd3d69fcbc7d1359b45b41e74c4a26964ca99c3f8"},
+ {file = "lxml-5.2.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:06668e39e1f3c065349c51ac27ae430719d7806c026fec462e5693b08b95696b"},
+ {file = "lxml-5.2.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9cd5323344d8ebb9fb5e96da5de5ad4ebab993bbf51674259dbe9d7a18049525"},
+ {file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89feb82ca055af0fe797a2323ec9043b26bc371365847dbe83c7fd2e2f181c34"},
+ {file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e481bba1e11ba585fb06db666bfc23dbe181dbafc7b25776156120bf12e0d5a6"},
+ {file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:9d6c6ea6a11ca0ff9cd0390b885984ed31157c168565702959c25e2191674a14"},
+ {file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:3d98de734abee23e61f6b8c2e08a88453ada7d6486dc7cdc82922a03968928db"},
+ {file = "lxml-5.2.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:69ab77a1373f1e7563e0fb5a29a8440367dec051da6c7405333699d07444f511"},
+ {file = "lxml-5.2.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:34e17913c431f5ae01d8658dbf792fdc457073dcdfbb31dc0cc6ab256e664a8d"},
+ {file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:05f8757b03208c3f50097761be2dea0aba02e94f0dc7023ed73a7bb14ff11eb0"},
+ {file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a520b4f9974b0a0a6ed73c2154de57cdfd0c8800f4f15ab2b73238ffed0b36e"},
+ {file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5e097646944b66207023bc3c634827de858aebc226d5d4d6d16f0b77566ea182"},
+ {file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b5e4ef22ff25bfd4ede5f8fb30f7b24446345f3e79d9b7455aef2836437bc38a"},
+ {file = "lxml-5.2.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:ff69a9a0b4b17d78170c73abe2ab12084bdf1691550c5629ad1fe7849433f324"},
+ {file = "lxml-5.2.2.tar.gz", hash = "sha256:bb2dc4898180bea79863d5487e5f9c7c34297414bad54bcd0f0852aee9cfdb87"},
]
[package.extras]
cssselect = ["cssselect (>=0.7)"]
+html-clean = ["lxml-html-clean"]
html5 = ["html5lib"]
htmlsoup = ["BeautifulSoup4"]
-source = ["Cython (>=3.0.7)"]
+source = ["Cython (>=3.0.10)"]
[[package]]
name = "lz4"
@@ -5222,42 +5263,42 @@ kerberos = ["requests-kerberos"]
[[package]]
name = "opentelemetry-api"
-version = "1.25.0"
+version = "1.26.0"
description = "OpenTelemetry Python API"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_api-1.25.0-py3-none-any.whl", hash = "sha256:757fa1aa020a0f8fa139f8959e53dec2051cc26b832e76fa839a6d76ecefd737"},
- {file = "opentelemetry_api-1.25.0.tar.gz", hash = "sha256:77c4985f62f2614e42ce77ee4c9da5fa5f0bc1e1821085e9a47533a9323ae869"},
+ {file = "opentelemetry_api-1.26.0-py3-none-any.whl", hash = "sha256:7d7ea33adf2ceda2dd680b18b1677e4152000b37ca76e679da71ff103b943064"},
+ {file = "opentelemetry_api-1.26.0.tar.gz", hash = "sha256:2bd639e4bed5b18486fef0b5a520aaffde5a18fc225e808a1ac4df363f43a1ce"},
]
[package.dependencies]
deprecated = ">=1.2.6"
-importlib-metadata = ">=6.0,<=7.1"
+importlib-metadata = ">=6.0,<=8.0.0"
[[package]]
name = "opentelemetry-exporter-otlp-proto-common"
-version = "1.25.0"
+version = "1.26.0"
description = "OpenTelemetry Protobuf encoding"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_exporter_otlp_proto_common-1.25.0-py3-none-any.whl", hash = "sha256:15637b7d580c2675f70246563363775b4e6de947871e01d0f4e3881d1848d693"},
- {file = "opentelemetry_exporter_otlp_proto_common-1.25.0.tar.gz", hash = "sha256:c93f4e30da4eee02bacd1e004eb82ce4da143a2f8e15b987a9f603e0a85407d3"},
+ {file = "opentelemetry_exporter_otlp_proto_common-1.26.0-py3-none-any.whl", hash = "sha256:ee4d8f8891a1b9c372abf8d109409e5b81947cf66423fd998e56880057afbc71"},
+ {file = "opentelemetry_exporter_otlp_proto_common-1.26.0.tar.gz", hash = "sha256:bdbe50e2e22a1c71acaa0c8ba6efaadd58882e5a5978737a44a4c4b10d304c92"},
]
[package.dependencies]
-opentelemetry-proto = "1.25.0"
+opentelemetry-proto = "1.26.0"
[[package]]
name = "opentelemetry-exporter-otlp-proto-grpc"
-version = "1.25.0"
+version = "1.26.0"
description = "OpenTelemetry Collector Protobuf over gRPC Exporter"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_exporter_otlp_proto_grpc-1.25.0-py3-none-any.whl", hash = "sha256:3131028f0c0a155a64c430ca600fd658e8e37043cb13209f0109db5c1a3e4eb4"},
- {file = "opentelemetry_exporter_otlp_proto_grpc-1.25.0.tar.gz", hash = "sha256:c0b1661415acec5af87625587efa1ccab68b873745ca0ee96b69bb1042087eac"},
+ {file = "opentelemetry_exporter_otlp_proto_grpc-1.26.0-py3-none-any.whl", hash = "sha256:e2be5eff72ebcb010675b818e8d7c2e7d61ec451755b8de67a140bc49b9b0280"},
+ {file = "opentelemetry_exporter_otlp_proto_grpc-1.26.0.tar.gz", hash = "sha256:a65b67a9a6b06ba1ec406114568e21afe88c1cdb29c464f2507d529eb906d8ae"},
]
[package.dependencies]
@@ -5265,19 +5306,19 @@ deprecated = ">=1.2.6"
googleapis-common-protos = ">=1.52,<2.0"
grpcio = ">=1.0.0,<2.0.0"
opentelemetry-api = ">=1.15,<2.0"
-opentelemetry-exporter-otlp-proto-common = "1.25.0"
-opentelemetry-proto = "1.25.0"
-opentelemetry-sdk = ">=1.25.0,<1.26.0"
+opentelemetry-exporter-otlp-proto-common = "1.26.0"
+opentelemetry-proto = "1.26.0"
+opentelemetry-sdk = ">=1.26.0,<1.27.0"
[[package]]
name = "opentelemetry-instrumentation"
-version = "0.46b0"
+version = "0.47b0"
description = "Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_instrumentation-0.46b0-py3-none-any.whl", hash = "sha256:89cd721b9c18c014ca848ccd11181e6b3fd3f6c7669e35d59c48dc527408c18b"},
- {file = "opentelemetry_instrumentation-0.46b0.tar.gz", hash = "sha256:974e0888fb2a1e01c38fbacc9483d024bb1132aad92d6d24e2e5543887a7adda"},
+ {file = "opentelemetry_instrumentation-0.47b0-py3-none-any.whl", hash = "sha256:88974ee52b1db08fc298334b51c19d47e53099c33740e48c4f084bd1afd052d5"},
+ {file = "opentelemetry_instrumentation-0.47b0.tar.gz", hash = "sha256:96f9885e450c35e3f16a4f33145f2ebf620aea910c9fd74a392bbc0f807a350f"},
]
[package.dependencies]
@@ -5287,55 +5328,55 @@ wrapt = ">=1.0.0,<2.0.0"
[[package]]
name = "opentelemetry-instrumentation-asgi"
-version = "0.46b0"
+version = "0.47b0"
description = "ASGI instrumentation for OpenTelemetry"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_instrumentation_asgi-0.46b0-py3-none-any.whl", hash = "sha256:f13c55c852689573057837a9500aeeffc010c4ba59933c322e8f866573374759"},
- {file = "opentelemetry_instrumentation_asgi-0.46b0.tar.gz", hash = "sha256:02559f30cf4b7e2a737ab17eb52aa0779bcf4cc06573064f3e2cb4dcc7d3040a"},
+ {file = "opentelemetry_instrumentation_asgi-0.47b0-py3-none-any.whl", hash = "sha256:b798dc4957b3edc9dfecb47a4c05809036a4b762234c5071212fda39ead80ade"},
+ {file = "opentelemetry_instrumentation_asgi-0.47b0.tar.gz", hash = "sha256:e78b7822c1bca0511e5e9610ec484b8994a81670375e570c76f06f69af7c506a"},
]
[package.dependencies]
asgiref = ">=3.0,<4.0"
opentelemetry-api = ">=1.12,<2.0"
-opentelemetry-instrumentation = "0.46b0"
-opentelemetry-semantic-conventions = "0.46b0"
-opentelemetry-util-http = "0.46b0"
+opentelemetry-instrumentation = "0.47b0"
+opentelemetry-semantic-conventions = "0.47b0"
+opentelemetry-util-http = "0.47b0"
[package.extras]
instruments = ["asgiref (>=3.0,<4.0)"]
[[package]]
name = "opentelemetry-instrumentation-fastapi"
-version = "0.46b0"
+version = "0.47b0"
description = "OpenTelemetry FastAPI Instrumentation"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_instrumentation_fastapi-0.46b0-py3-none-any.whl", hash = "sha256:e0f5d150c6c36833dd011f0e6ef5ede6d7406c1aed0c7c98b2d3b38a018d1b33"},
- {file = "opentelemetry_instrumentation_fastapi-0.46b0.tar.gz", hash = "sha256:928a883a36fc89f9702f15edce43d1a7104da93d740281e32d50ffd03dbb4365"},
+ {file = "opentelemetry_instrumentation_fastapi-0.47b0-py3-none-any.whl", hash = "sha256:5ac28dd401160b02e4f544a85a9e4f61a8cbe5b077ea0379d411615376a2bd21"},
+ {file = "opentelemetry_instrumentation_fastapi-0.47b0.tar.gz", hash = "sha256:0c7c10b5d971e99a420678ffd16c5b1ea4f0db3b31b62faf305fbb03b4ebee36"},
]
[package.dependencies]
opentelemetry-api = ">=1.12,<2.0"
-opentelemetry-instrumentation = "0.46b0"
-opentelemetry-instrumentation-asgi = "0.46b0"
-opentelemetry-semantic-conventions = "0.46b0"
-opentelemetry-util-http = "0.46b0"
+opentelemetry-instrumentation = "0.47b0"
+opentelemetry-instrumentation-asgi = "0.47b0"
+opentelemetry-semantic-conventions = "0.47b0"
+opentelemetry-util-http = "0.47b0"
[package.extras]
-instruments = ["fastapi (>=0.58,<1.0)"]
+instruments = ["fastapi (>=0.58,<1.0)", "fastapi-slim (>=0.111.0,<0.112.0)"]
[[package]]
name = "opentelemetry-proto"
-version = "1.25.0"
+version = "1.26.0"
description = "OpenTelemetry Python Proto"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_proto-1.25.0-py3-none-any.whl", hash = "sha256:f07e3341c78d835d9b86665903b199893befa5e98866f63d22b00d0b7ca4972f"},
- {file = "opentelemetry_proto-1.25.0.tar.gz", hash = "sha256:35b6ef9dc4a9f7853ecc5006738ad40443701e52c26099e197895cbda8b815a3"},
+ {file = "opentelemetry_proto-1.26.0-py3-none-any.whl", hash = "sha256:6c4d7b4d4d9c88543bcf8c28ae3f8f0448a753dc291c18c5390444c90b76a725"},
+ {file = "opentelemetry_proto-1.26.0.tar.gz", hash = "sha256:c5c18796c0cab3751fc3b98dee53855835e90c0422924b484432ac852d93dc1e"},
]
[package.dependencies]
@@ -5343,43 +5384,44 @@ protobuf = ">=3.19,<5.0"
[[package]]
name = "opentelemetry-sdk"
-version = "1.25.0"
+version = "1.26.0"
description = "OpenTelemetry Python SDK"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_sdk-1.25.0-py3-none-any.whl", hash = "sha256:d97ff7ec4b351692e9d5a15af570c693b8715ad78b8aafbec5c7100fe966b4c9"},
- {file = "opentelemetry_sdk-1.25.0.tar.gz", hash = "sha256:ce7fc319c57707ef5bf8b74fb9f8ebdb8bfafbe11898410e0d2a761d08a98ec7"},
+ {file = "opentelemetry_sdk-1.26.0-py3-none-any.whl", hash = "sha256:feb5056a84a88670c041ea0ded9921fca559efec03905dddeb3885525e0af897"},
+ {file = "opentelemetry_sdk-1.26.0.tar.gz", hash = "sha256:c90d2868f8805619535c05562d699e2f4fb1f00dbd55a86dcefca4da6fa02f85"},
]
[package.dependencies]
-opentelemetry-api = "1.25.0"
-opentelemetry-semantic-conventions = "0.46b0"
+opentelemetry-api = "1.26.0"
+opentelemetry-semantic-conventions = "0.47b0"
typing-extensions = ">=3.7.4"
[[package]]
name = "opentelemetry-semantic-conventions"
-version = "0.46b0"
+version = "0.47b0"
description = "OpenTelemetry Semantic Conventions"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_semantic_conventions-0.46b0-py3-none-any.whl", hash = "sha256:6daef4ef9fa51d51855d9f8e0ccd3a1bd59e0e545abe99ac6203804e36ab3e07"},
- {file = "opentelemetry_semantic_conventions-0.46b0.tar.gz", hash = "sha256:fbc982ecbb6a6e90869b15c1673be90bd18c8a56ff1cffc0864e38e2edffaefa"},
+ {file = "opentelemetry_semantic_conventions-0.47b0-py3-none-any.whl", hash = "sha256:4ff9d595b85a59c1c1413f02bba320ce7ea6bf9e2ead2b0913c4395c7bbc1063"},
+ {file = "opentelemetry_semantic_conventions-0.47b0.tar.gz", hash = "sha256:a8d57999bbe3495ffd4d510de26a97dadc1dace53e0275001b2c1b2f67992a7e"},
]
[package.dependencies]
-opentelemetry-api = "1.25.0"
+deprecated = ">=1.2.6"
+opentelemetry-api = "1.26.0"
[[package]]
name = "opentelemetry-util-http"
-version = "0.46b0"
+version = "0.47b0"
description = "Web util for OpenTelemetry"
optional = false
python-versions = ">=3.8"
files = [
- {file = "opentelemetry_util_http-0.46b0-py3-none-any.whl", hash = "sha256:8dc1949ce63caef08db84ae977fdc1848fe6dc38e6bbaad0ae3e6ecd0d451629"},
- {file = "opentelemetry_util_http-0.46b0.tar.gz", hash = "sha256:03b6e222642f9c7eae58d9132343e045b50aca9761fcb53709bd2b663571fdf6"},
+ {file = "opentelemetry_util_http-0.47b0-py3-none-any.whl", hash = "sha256:3d3215e09c4a723b12da6d0233a31395aeb2bb33a64d7b15a1500690ba250f19"},
+ {file = "opentelemetry_util_http-0.47b0.tar.gz", hash = "sha256:352a07664c18eef827eb8ddcbd64c64a7284a39dd1655e2f16f577eb046ccb32"},
]
[[package]]
@@ -5619,23 +5661,25 @@ files = [
[[package]]
name = "pgvecto-rs"
-version = "0.1.4"
+version = "0.2.1"
description = "Python binding for pgvecto.rs"
optional = false
-python-versions = ">=3.8"
+python-versions = "<3.13,>=3.8"
files = [
- {file = "pgvecto_rs-0.1.4-py3-none-any.whl", hash = "sha256:9b08a9e612f0cd65d1cc6e17a35b9bb5956187e0e3981bf6e997ff9e615c6116"},
- {file = "pgvecto_rs-0.1.4.tar.gz", hash = "sha256:078b96cff1f3d417169ad46cacef7fc4d644978bbd6725a5c24c0675de5030ab"},
+ {file = "pgvecto_rs-0.2.1-py3-none-any.whl", hash = "sha256:b3ee2c465219469ad537b3efea2916477c6c576b3d6fd4298980d0733d12bb27"},
+ {file = "pgvecto_rs-0.2.1.tar.gz", hash = "sha256:07046eaad2c4f75745f76de9ba483541909f1c595aced8d3434224a4f933daca"},
]
[package.dependencies]
numpy = ">=1.23"
+SQLAlchemy = {version = ">=2.0.23", optional = true, markers = "extra == \"sqlalchemy\""}
toml = ">=0.10"
[package.extras]
+django = ["Django (>=4.2)"]
psycopg3 = ["psycopg[binary] (>=3.1.12)"]
sdk = ["openai (>=1.2.2)", "pgvecto_rs[sqlalchemy]"]
-sqlalchemy = ["SQLAlchemy (>=2.0.23)", "pgvecto_rs[psycopg3]"]
+sqlalchemy = ["SQLAlchemy (>=2.0.23)"]
[[package]]
name = "pgvector"
@@ -5846,6 +5890,26 @@ dev = ["black", "flake8", "flake8-print", "isort", "pre-commit"]
sentry = ["django", "sentry-sdk"]
test = ["coverage", "flake8", "freezegun (==0.3.15)", "mock (>=2.0.0)", "pylint", "pytest", "pytest-timeout"]
+[[package]]
+name = "primp"
+version = "0.5.5"
+description = "HTTP client that can impersonate web browsers, mimicking their headers and `TLS/JA3/JA4/HTTP2` fingerprints"
+optional = false
+python-versions = ">=3.8"
+files = [
+ {file = "primp-0.5.5-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:cff9792e8422424528c23574b5364882d68134ee2743f4a2ae6a765746fb3028"},
+ {file = "primp-0.5.5-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:78e13fc5d4d90d44a005dbd5dda116981828c803c86cf85816b3bb5363b045c8"},
+ {file = "primp-0.5.5-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3714abfda79d3f5c90a5363db58994afbdbacc4b94fe14e9e5f8ab97e7b82577"},
+ {file = "primp-0.5.5-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e54765900ee40eceb6bde43676d7e0b2e16ca1f77c0753981fe5e40afc0c2010"},
+ {file = "primp-0.5.5-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:66c7eecc5a55225c42cfb99af857df04f994f3dd0d327c016d3af5414c1a2242"},
+ {file = "primp-0.5.5-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:df262271cc1a41f4bf80d68396e967a27d7d3d3de355a3d016f953130e7a20be"},
+ {file = "primp-0.5.5-cp38-abi3-win_amd64.whl", hash = "sha256:8b424118d6bab6f9d4980d0f35d5ccc1213ab9f1042497c6ee11730f2f94a876"},
+ {file = "primp-0.5.5.tar.gz", hash = "sha256:8623e8a25fd686785296b12175f4173250a08db1de9ee4063282e262b94bf3f2"},
+]
+
+[package.extras]
+dev = ["pytest (>=8.1.1)"]
+
[[package]]
name = "prompt-toolkit"
version = "3.0.47"
@@ -5879,22 +5943,22 @@ testing = ["google-api-core (>=1.31.5)"]
[[package]]
name = "protobuf"
-version = "4.25.3"
+version = "4.25.4"
description = ""
optional = false
python-versions = ">=3.8"
files = [
- {file = "protobuf-4.25.3-cp310-abi3-win32.whl", hash = "sha256:d4198877797a83cbfe9bffa3803602bbe1625dc30d8a097365dbc762e5790faa"},
- {file = "protobuf-4.25.3-cp310-abi3-win_amd64.whl", hash = "sha256:209ba4cc916bab46f64e56b85b090607a676f66b473e6b762e6f1d9d591eb2e8"},
- {file = "protobuf-4.25.3-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:f1279ab38ecbfae7e456a108c5c0681e4956d5b1090027c1de0f934dfdb4b35c"},
- {file = "protobuf-4.25.3-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:e7cb0ae90dd83727f0c0718634ed56837bfeeee29a5f82a7514c03ee1364c019"},
- {file = "protobuf-4.25.3-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:7c8daa26095f82482307bc717364e7c13f4f1c99659be82890dcfc215194554d"},
- {file = "protobuf-4.25.3-cp38-cp38-win32.whl", hash = "sha256:f4f118245c4a087776e0a8408be33cf09f6c547442c00395fbfb116fac2f8ac2"},
- {file = "protobuf-4.25.3-cp38-cp38-win_amd64.whl", hash = "sha256:c053062984e61144385022e53678fbded7aea14ebb3e0305ae3592fb219ccfa4"},
- {file = "protobuf-4.25.3-cp39-cp39-win32.whl", hash = "sha256:19b270aeaa0099f16d3ca02628546b8baefe2955bbe23224aaf856134eccf1e4"},
- {file = "protobuf-4.25.3-cp39-cp39-win_amd64.whl", hash = "sha256:e3c97a1555fd6388f857770ff8b9703083de6bf1f9274a002a332d65fbb56c8c"},
- {file = "protobuf-4.25.3-py3-none-any.whl", hash = "sha256:f0700d54bcf45424477e46a9f0944155b46fb0639d69728739c0e47bab83f2b9"},
- {file = "protobuf-4.25.3.tar.gz", hash = "sha256:25b5d0b42fd000320bd7830b349e3b696435f3b329810427a6bcce6a5492cc5c"},
+ {file = "protobuf-4.25.4-cp310-abi3-win32.whl", hash = "sha256:db9fd45183e1a67722cafa5c1da3e85c6492a5383f127c86c4c4aa4845867dc4"},
+ {file = "protobuf-4.25.4-cp310-abi3-win_amd64.whl", hash = "sha256:ba3d8504116a921af46499471c63a85260c1a5fc23333154a427a310e015d26d"},
+ {file = "protobuf-4.25.4-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:eecd41bfc0e4b1bd3fa7909ed93dd14dd5567b98c941d6c1ad08fdcab3d6884b"},
+ {file = "protobuf-4.25.4-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:4c8a70fdcb995dcf6c8966cfa3a29101916f7225e9afe3ced4395359955d3835"},
+ {file = "protobuf-4.25.4-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:3319e073562e2515c6ddc643eb92ce20809f5d8f10fead3332f71c63be6a7040"},
+ {file = "protobuf-4.25.4-cp38-cp38-win32.whl", hash = "sha256:7e372cbbda66a63ebca18f8ffaa6948455dfecc4e9c1029312f6c2edcd86c4e1"},
+ {file = "protobuf-4.25.4-cp38-cp38-win_amd64.whl", hash = "sha256:051e97ce9fa6067a4546e75cb14f90cf0232dcb3e3d508c448b8d0e4265b61c1"},
+ {file = "protobuf-4.25.4-cp39-cp39-win32.whl", hash = "sha256:90bf6fd378494eb698805bbbe7afe6c5d12c8e17fca817a646cd6a1818c696ca"},
+ {file = "protobuf-4.25.4-cp39-cp39-win_amd64.whl", hash = "sha256:ac79a48d6b99dfed2729ccccee547b34a1d3d63289c71cef056653a846a2240f"},
+ {file = "protobuf-4.25.4-py3-none-any.whl", hash = "sha256:bfbebc1c8e4793cfd58589acfb8a1026be0003e852b9da7db5a4285bde996978"},
+ {file = "protobuf-4.25.4.tar.gz", hash = "sha256:0dc4a62cc4052a036ee2204d26fe4d835c62827c855c8a03f29fe6da146b380d"},
]
[[package]]
@@ -6131,10 +6195,7 @@ files = [
[package.dependencies]
annotated-types = ">=0.4.0"
pydantic-core = "2.20.1"
-typing-extensions = [
- {version = ">=4.6.1", markers = "python_version < \"3.13\""},
- {version = ">=4.12.2", markers = "python_version >= \"3.13\""},
-]
+typing-extensions = {version = ">=4.6.1", markers = "python_version < \"3.13\""}
[package.extras]
email = ["email-validator (>=2.0.0)"]
@@ -6281,17 +6342,6 @@ python-dotenv = ">=0.21.0"
toml = ["tomli (>=2.0.1)"]
yaml = ["pyyaml (>=6.0.1)"]
-[[package]]
-name = "pydub"
-version = "0.25.1"
-description = "Manipulate audio with an simple and easy high level interface"
-optional = false
-python-versions = "*"
-files = [
- {file = "pydub-0.25.1-py2.py3-none-any.whl", hash = "sha256:65617e33033874b59d87db603aa1ed450633288aefead953b30bded59cb599a6"},
- {file = "pydub-0.25.1.tar.gz", hash = "sha256:980a33ce9949cab2a569606b65674d748ecbca4f0796887fd6f46173a7b0d30f"},
-]
-
[[package]]
name = "pygments"
version = "2.18.0"
@@ -6455,26 +6505,6 @@ files = [
{file = "pyreadline3-3.4.1.tar.gz", hash = "sha256:6f3d1f7b8a31ba32b73917cefc1f28cc660562f39aea8646d30bd6eff21f7bae"},
]
-[[package]]
-name = "pyreqwest-impersonate"
-version = "0.5.3"
-description = "HTTP client that can impersonate web browsers, mimicking their headers and `TLS/JA3/JA4/HTTP2` fingerprints"
-optional = false
-python-versions = ">=3.8"
-files = [
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:f15922496f728769fb9e1b116d5d9d7ba5525d0f2f7a76a41a1daef8b2e0c6c3"},
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:77533133ae73020e59bc56d776eea3fe3af4ac41d763a89f39c495436da0f4cf"},
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:436055fa3eeb3e01e2e8efd42a9f6c4ab62fd643eddc7c66d0e671b71605f273"},
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e9d2e981a525fb72c1521f454e5581d2c7a3b1fcf1c97c0acfcb7a923d8cf3e"},
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:a6bf986d4a165f6976b3e862111e2a46091883cb55e9e6325150f5aea2644229"},
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b7397f6dad3d5ae158e0b272cb3eafe8382e71775d829b286ae9c21cb5a879ff"},
- {file = "pyreqwest_impersonate-0.5.3-cp38-abi3-win_amd64.whl", hash = "sha256:6026e4751b5912aec1e45238c07daf1e2c9126b3b32b33396b72885021e8990c"},
- {file = "pyreqwest_impersonate-0.5.3.tar.gz", hash = "sha256:f21c10609958ff5be18df0c329eed42d2b3ba8a339b65dc5f96ab74537231692"},
-]
-
-[package.extras]
-dev = ["pytest (>=8.1.1)"]
-
[[package]]
name = "pytest"
version = "8.1.2"
@@ -6729,20 +6759,6 @@ files = [
{file = "python_magic-0.4.27-py2.py3-none-any.whl", hash = "sha256:c212960ad306f700aa0d01e5d7a325d20548ff97eb9920dcd29513174f0294d3"},
]
-[[package]]
-name = "python-multipart"
-version = "0.0.9"
-description = "A streaming multipart parser for Python"
-optional = false
-python-versions = ">=3.8"
-files = [
- {file = "python_multipart-0.0.9-py3-none-any.whl", hash = "sha256:97ca7b8ea7b05f977dc3849c3ba99d51689822fab725c3703af7c866a0c2b215"},
- {file = "python_multipart-0.0.9.tar.gz", hash = "sha256:03f54688c663f1b7977105f021043b0793151e4cb1c1a9d4a11fc13d622c4026"},
-]
-
-[package.extras]
-dev = ["atomicwrites (==1.4.1)", "attrs (==23.2.0)", "coverage (==7.4.1)", "hatch", "invoke (==2.2.0)", "more-itertools (==10.2.0)", "pbr (==6.0.0)", "pluggy (==1.4.0)", "py (==1.11.0)", "pytest (==8.0.0)", "pytest-cov (==4.1.0)", "pytest-timeout (==2.2.0)", "pyyaml (==6.0.1)", "ruff (==0.2.1)"]
-
[[package]]
name = "python-pptx"
version = "0.6.23"
@@ -6806,62 +6822,64 @@ files = [
[[package]]
name = "pyyaml"
-version = "6.0.1"
+version = "6.0.2"
description = "YAML parser and emitter for Python"
optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.8"
files = [
- {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"},
- {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"},
- {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
- {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
- {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
- {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
- {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
- {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
- {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
- {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"},
- {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
- {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
- {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
- {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
- {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
- {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
- {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
- {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
- {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef"},
- {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
- {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
- {file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
- {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
- {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
- {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
- {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
- {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"},
- {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"},
- {file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"},
- {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"},
- {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"},
- {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"},
- {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"},
- {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"},
- {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"},
- {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"},
- {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
- {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
- {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
- {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
- {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
- {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
- {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
- {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"},
- {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
- {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
- {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
- {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
- {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
- {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
- {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
+ {file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"},
+ {file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"},
+ {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237"},
+ {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b"},
+ {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed"},
+ {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180"},
+ {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68"},
+ {file = "PyYAML-6.0.2-cp310-cp310-win32.whl", hash = "sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99"},
+ {file = "PyYAML-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e"},
+ {file = "PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774"},
+ {file = "PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee"},
+ {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c"},
+ {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317"},
+ {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85"},
+ {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4"},
+ {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e"},
+ {file = "PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5"},
+ {file = "PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44"},
+ {file = "PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab"},
+ {file = "PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725"},
+ {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5"},
+ {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425"},
+ {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476"},
+ {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48"},
+ {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b"},
+ {file = "PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4"},
+ {file = "PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8"},
+ {file = "PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba"},
+ {file = "PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1"},
+ {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133"},
+ {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484"},
+ {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5"},
+ {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc"},
+ {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652"},
+ {file = "PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183"},
+ {file = "PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563"},
+ {file = "PyYAML-6.0.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a"},
+ {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5"},
+ {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d"},
+ {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083"},
+ {file = "PyYAML-6.0.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706"},
+ {file = "PyYAML-6.0.2-cp38-cp38-win32.whl", hash = "sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a"},
+ {file = "PyYAML-6.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff"},
+ {file = "PyYAML-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d"},
+ {file = "PyYAML-6.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f"},
+ {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290"},
+ {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12"},
+ {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19"},
+ {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e"},
+ {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725"},
+ {file = "PyYAML-6.0.2-cp39-cp39-win32.whl", hash = "sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631"},
+ {file = "PyYAML-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8"},
+ {file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"},
]
[[package]]
@@ -6932,104 +6950,119 @@ dev = ["pytest"]
[[package]]
name = "rapidfuzz"
-version = "3.9.4"
+version = "3.9.6"
description = "rapid fuzzy string matching"
optional = false
python-versions = ">=3.8"
files = [
- {file = "rapidfuzz-3.9.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c9b9793c19bdf38656c8eaefbcf4549d798572dadd70581379e666035c9df781"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:015b5080b999404fe06ec2cb4f40b0be62f0710c926ab41e82dfbc28e80675b4"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:acc5ceca9c1e1663f3e6c23fb89a311f69b7615a40ddd7645e3435bf3082688a"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1424e238bc3f20e1759db1e0afb48a988a9ece183724bef91ea2a291c0b92a95"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ed01378f605aa1f449bee82cd9c83772883120d6483e90aa6c5a4ce95dc5c3aa"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:eb26d412271e5a76cdee1c2d6bf9881310665d3fe43b882d0ed24edfcb891a84"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f37e9e1f17be193c41a31c864ad4cd3ebd2b40780db11cd5c04abf2bcf4201b"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d070ec5cf96b927c4dc5133c598c7ff6db3b833b363b2919b13417f1002560bc"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:10e61bb7bc807968cef09a0e32ce253711a2d450a4dce7841d21d45330ffdb24"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:31a2fc60bb2c7face4140010a7aeeafed18b4f9cdfa495cc644a68a8c60d1ff7"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:fbebf1791a71a2e89f5c12b78abddc018354d5859e305ec3372fdae14f80a826"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:aee9fc9e3bb488d040afc590c0a7904597bf4ccd50d1491c3f4a5e7e67e6cd2c"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-win32.whl", hash = "sha256:005a02688a51c7d2451a2d41c79d737aa326ff54167211b78a383fc2aace2c2c"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-win_amd64.whl", hash = "sha256:3a2e75e41ee3274754d3b2163cc6c82cd95b892a85ab031f57112e09da36455f"},
- {file = "rapidfuzz-3.9.4-cp310-cp310-win_arm64.whl", hash = "sha256:2c99d355f37f2b289e978e761f2f8efeedc2b14f4751d9ff7ee344a9a5ca98d9"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:07141aa6099e39d48637ce72a25b893fc1e433c50b3e837c75d8edf99e0c63e1"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:db1664eaff5d7d0f2542dd9c25d272478deaf2c8412e4ad93770e2e2d828e175"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc01a223f6605737bec3202e94dcb1a449b6c76d46082cfc4aa980f2a60fd40e"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1869c42e73e2a8910b479be204fa736418741b63ea2325f9cc583c30f2ded41a"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:62ea7007941fb2795fff305ac858f3521ec694c829d5126e8f52a3e92ae75526"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:698e992436bf7f0afc750690c301215a36ff952a6dcd62882ec13b9a1ebf7a39"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b76f611935f15a209d3730c360c56b6df8911a9e81e6a38022efbfb96e433bab"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:129627d730db2e11f76169344a032f4e3883d34f20829419916df31d6d1338b1"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:90a82143c14e9a14b723a118c9ef8d1bbc0c5a16b1ac622a1e6c916caff44dd8"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:ded58612fe3b0e0d06e935eaeaf5a9fd27da8ba9ed3e2596307f40351923bf72"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:f16f5d1c4f02fab18366f2d703391fcdbd87c944ea10736ca1dc3d70d8bd2d8b"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:26aa7eece23e0df55fb75fbc2a8fb678322e07c77d1fd0e9540496e6e2b5f03e"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-win32.whl", hash = "sha256:f187a9c3b940ce1ee324710626daf72c05599946bd6748abe9e289f1daa9a077"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-win_amd64.whl", hash = "sha256:d8e9130fe5d7c9182990b366ad78fd632f744097e753e08ace573877d67c32f8"},
- {file = "rapidfuzz-3.9.4-cp311-cp311-win_arm64.whl", hash = "sha256:40419e98b10cd6a00ce26e4837a67362f658fc3cd7a71bd8bd25c99f7ee8fea5"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b5d5072b548db1b313a07d62d88fe0b037bd2783c16607c647e01b070f6cf9e5"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:cf5bcf22e1f0fd273354462631d443ef78d677f7d2fc292de2aec72ae1473e66"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0c8fc973adde8ed52810f590410e03fb6f0b541bbaeb04c38d77e63442b2df4c"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f2464bb120f135293e9a712e342c43695d3d83168907df05f8c4ead1612310c7"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8d9d58689aca22057cf1a5851677b8a3ccc9b535ca008c7ed06dc6e1899f7844"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:167e745f98baa0f3034c13583e6302fb69249a01239f1483d68c27abb841e0a1"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db0bf0663b4b6da1507869722420ea9356b6195aa907228d6201303e69837af9"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:cd6ac61b74fdb9e23f04d5f068e6cf554f47e77228ca28aa2347a6ca8903972f"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:60ff67c690acecf381759c16cb06c878328fe2361ddf77b25d0e434ea48a29da"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:cb934363380c60f3a57d14af94325125cd8cded9822611a9f78220444034e36e"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:fe833493fb5cc5682c823ea3e2f7066b07612ee8f61ecdf03e1268f262106cdd"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2797fb847d89e04040d281cb1902cbeffbc4b5131a5c53fc0db490fd76b2a547"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-win32.whl", hash = "sha256:52e3d89377744dae68ed7c84ad0ddd3f5e891c82d48d26423b9e066fc835cc7c"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-win_amd64.whl", hash = "sha256:c76da20481c906e08400ee9be230f9e611d5931a33707d9df40337c2655c84b5"},
- {file = "rapidfuzz-3.9.4-cp312-cp312-win_arm64.whl", hash = "sha256:f2d2846f3980445864c7e8b8818a29707fcaff2f0261159ef6b7bd27ba139296"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:355fc4a268ffa07bab88d9adee173783ec8d20136059e028d2a9135c623c44e6"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4d81a78f90269190b568a8353d4ea86015289c36d7e525cd4d43176c88eff429"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e618625ffc4660b26dc8e56225f8b966d5842fa190e70c60db6cd393e25b86e"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b712336ad6f2bacdbc9f1452556e8942269ef71f60a9e6883ef1726b52d9228a"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84fc1ee19fdad05770c897e793836c002344524301501d71ef2e832847425707"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1950f8597890c0c707cb7e0416c62a1cf03dcdb0384bc0b2dbda7e05efe738ec"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a6c35f272ec9c430568dc8c1c30cb873f6bc96be2c79795e0bce6db4e0e101d"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:1df0f9e9239132a231c86ae4f545ec2b55409fa44470692fcfb36b1bd00157ad"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:d2c51955329bfccf99ae26f63d5928bf5be9fcfcd9f458f6847fd4b7e2b8986c"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:3c522f462d9fc504f2ea8d82e44aa580e60566acc754422c829ad75c752fbf8d"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:d8a52fc50ded60d81117d7647f262c529659fb21d23e14ebfd0b35efa4f1b83d"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:04dbdfb0f0bfd3f99cf1e9e24fadc6ded2736d7933f32f1151b0f2abb38f9a25"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-win32.whl", hash = "sha256:4968c8bd1df84b42f382549e6226710ad3476f976389839168db3e68fd373298"},
- {file = "rapidfuzz-3.9.4-cp38-cp38-win_amd64.whl", hash = "sha256:3fe4545f89f8d6c27b6bbbabfe40839624873c08bd6700f63ac36970a179f8f5"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9f256c8fb8f3125574c8c0c919ab0a1f75d7cba4d053dda2e762dcc36357969d"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f5fdc09cf6e9d8eac3ce48a4615b3a3ee332ea84ac9657dbbefef913b13e632f"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d395d46b80063d3b5d13c0af43d2c2cedf3ab48c6a0c2aeec715aa5455b0c632"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7fa714fb96ce9e70c37e64c83b62fe8307030081a0bfae74a76fac7ba0f91715"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1bc1a0f29f9119be7a8d3c720f1d2068317ae532e39e4f7f948607c3a6de8396"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6022674aa1747d6300f699cd7c54d7dae89bfe1f84556de699c4ac5df0838082"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcb72e5f9762fd469701a7e12e94b924af9004954f8c739f925cb19c00862e38"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:ad04ae301129f0eb5b350a333accd375ce155a0c1cec85ab0ec01f770214e2e4"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:f46a22506f17c0433e349f2d1dc11907c393d9b3601b91d4e334fa9a439a6a4d"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:01b42a8728c36011718da409aa86b84984396bf0ca3bfb6e62624f2014f6022c"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:e590d5d5443cf56f83a51d3c4867bd1f6be8ef8cfcc44279522bcef3845b2a51"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:4c72078b5fdce34ba5753f9299ae304e282420e6455e043ad08e4488ca13a2b0"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-win32.whl", hash = "sha256:f75639277304e9b75e6a7b3c07042d2264e16740a11e449645689ed28e9c2124"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-win_amd64.whl", hash = "sha256:e81e27e8c32a1e1278a4bb1ce31401bfaa8c2cc697a053b985a6f8d013df83ec"},
- {file = "rapidfuzz-3.9.4-cp39-cp39-win_arm64.whl", hash = "sha256:15bc397ee9a3ed1210b629b9f5f1da809244adc51ce620c504138c6e7095b7bd"},
- {file = "rapidfuzz-3.9.4-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:20488ade4e1ddba3cfad04f400da7a9c1b91eff5b7bd3d1c50b385d78b587f4f"},
- {file = "rapidfuzz-3.9.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:e61b03509b1a6eb31bc5582694f6df837d340535da7eba7bedb8ae42a2fcd0b9"},
- {file = "rapidfuzz-3.9.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:098d231d4e51644d421a641f4a5f2f151f856f53c252b03516e01389b2bfef99"},
- {file = "rapidfuzz-3.9.4-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:17ab8b7d10fde8dd763ad428aa961c0f30a1b44426e675186af8903b5d134fb0"},
- {file = "rapidfuzz-3.9.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e272df61bee0a056a3daf99f9b1bd82cf73ace7d668894788139c868fdf37d6f"},
- {file = "rapidfuzz-3.9.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d6481e099ff8c4edda85b8b9b5174c200540fd23c8f38120016c765a86fa01f5"},
- {file = "rapidfuzz-3.9.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ad61676e9bdae677d577fe80ec1c2cea1d150c86be647e652551dcfe505b1113"},
- {file = "rapidfuzz-3.9.4-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:af65020c0dd48d0d8ae405e7e69b9d8ae306eb9b6249ca8bf511a13f465fad85"},
- {file = "rapidfuzz-3.9.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d38b4e026fcd580e0bda6c0ae941e0e9a52c6bc66cdce0b8b0da61e1959f5f8"},
- {file = "rapidfuzz-3.9.4-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f74ed072c2b9dc6743fb19994319d443a4330b0e64aeba0aa9105406c7c5b9c2"},
- {file = "rapidfuzz-3.9.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aee5f6b8321f90615c184bd8a4c676e9becda69b8e4e451a90923db719d6857c"},
- {file = "rapidfuzz-3.9.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:3a555e3c841d6efa350f862204bb0a3fea0c006b8acc9b152b374fa36518a1c6"},
- {file = "rapidfuzz-3.9.4-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0772150d37bf018110351c01d032bf9ab25127b966a29830faa8ad69b7e2f651"},
- {file = "rapidfuzz-3.9.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:addcdd3c3deef1bd54075bd7aba0a6ea9f1d01764a08620074b7a7b1e5447cb9"},
- {file = "rapidfuzz-3.9.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3fe86b82b776554add8f900b6af202b74eb5efe8f25acdb8680a5c977608727f"},
- {file = "rapidfuzz-3.9.4-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0fc91ac59f4414d8542454dfd6287a154b8e6f1256718c898f695bdbb993467"},
- {file = "rapidfuzz-3.9.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a944e546a296a5fdcaabb537b01459f1b14d66f74e584cb2a91448bffadc3c1"},
- {file = "rapidfuzz-3.9.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4fb96ba96d58c668a17a06b5b5e8340fedc26188e87b0d229d38104556f30cd8"},
- {file = "rapidfuzz-3.9.4.tar.gz", hash = "sha256:366bf8947b84e37f2f4cf31aaf5f37c39f620d8c0eddb8b633e6ba0129ca4a0a"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a7ed0d0b9c85720f0ae33ac5efc8dc3f60c1489dad5c29d735fbdf2f66f0431f"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f3deff6ab7017ed21b9aec5874a07ad13e6b2a688af055837f88b743c7bfd947"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c3f9fc060160507b2704f7d1491bd58453d69689b580cbc85289335b14fe8ca"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c4e86c2b3827fa6169ad6e7d4b790ce02a20acefb8b78d92fa4249589bbc7a2c"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f982e1aafb4bd8207a5e073b1efef9e68a984e91330e1bbf364f9ed157ed83f0"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9196a51d0ec5eaaaf5bca54a85b7b1e666fc944c332f68e6427503af9fb8c49e"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fb5a514064e02585b1cc09da2fe406a6dc1a7e5f3e92dd4f27c53e5f1465ec81"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:e3a4244f65dbc3580b1275480118c3763f9dc29fc3dd96610560cb5e140a4d4a"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:f6ebb910a702e41641e1e1dada3843bc11ba9107a33c98daef6945a885a40a07"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:624fbe96115fb39addafa288d583b5493bc76dab1d34d0ebba9987d6871afdf9"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:1c59f1c1507b7a557cf3c410c76e91f097460da7d97e51c985343798e9df7a3c"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f6f0256cb27b6a0fb2e1918477d1b56473cd04acfa245376a342e7c15806a396"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-win32.whl", hash = "sha256:24d473d00d23a30a85802b502b417a7f5126019c3beec91a6739fe7b95388b24"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-win_amd64.whl", hash = "sha256:248f6d2612e661e2b5f9a22bbd5862a1600e720da7bb6ad8a55bb1548cdfa423"},
+ {file = "rapidfuzz-3.9.6-cp310-cp310-win_arm64.whl", hash = "sha256:e03fdf0e74f346ed7e798135df5f2a0fb8d6b96582b00ebef202dcf2171e1d1d"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:52e4675f642fbc85632f691b67115a243cd4d2a47bdcc4a3d9a79e784518ff97"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1f93a2f13038700bd245b927c46a2017db3dcd4d4ff94687d74b5123689b873b"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42b70500bca460264b8141d8040caee22e9cf0418c5388104ff0c73fb69ee28f"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a1e037fb89f714a220f68f902fc6300ab7a33349f3ce8ffae668c3b3a40b0b06"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6792f66d59b86ccfad5e247f2912e255c85c575789acdbad8e7f561412ffed8a"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:68d9cffe710b67f1969cf996983608cee4490521d96ea91d16bd7ea5dc80ea98"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63daaeeea76da17fa0bbe7fb05cba8ed8064bb1a0edf8360636557f8b6511961"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d214e063bffa13e3b771520b74f674b22d309b5720d4df9918ff3e0c0f037720"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:ed443a2062460f44c0346cb9d269b586496b808c2419bbd6057f54061c9b9c75"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:5b0c9b227ee0076fb2d58301c505bb837a290ae99ee628beacdb719f0626d749"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:82c9722b7dfaa71e8b61f8c89fed0482567fb69178e139fe4151fc71ed7df782"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c18897c95c0a288347e29537b63608a8f63a5c3cb6da258ac46fcf89155e723e"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-win32.whl", hash = "sha256:3e910cf08944da381159587709daaad9e59d8ff7bca1f788d15928f3c3d49c2a"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-win_amd64.whl", hash = "sha256:59c4a61fab676d37329fc3a671618a461bfeef53a4d0b8b12e3bc24a14e166f8"},
+ {file = "rapidfuzz-3.9.6-cp311-cp311-win_arm64.whl", hash = "sha256:8b4afea244102332973377fddbe54ce844d0916e1c67a5123432291717f32ffa"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:70591b28b218fff351b88cdd7f2359a01a71f9f7f5a2e465ce3715ed4b3c422b"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ee2d8355c7343c631a03e57540ea06e8717c19ecf5ff64ea07e0498f7f161457"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:708fb675de0f47b9635d1cc6fbbf80d52cb710d0a1abbfae5c84c46e3abbddc3"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d66c247c2d3bb7a9b60567c395a15a929d0ebcc5f4ceedb55bfa202c38c6e0c"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:15146301b32e6e3d2b7e8146db1a26747919d8b13690c7f83a4cb5dc111b3a08"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a7a03da59b6c7c97e657dd5cd4bcaab5fe4a2affd8193958d6f4d938bee36679"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0d2c2fe19e392dbc22695b6c3b2510527e2b774647e79936bbde49db7742d6f1"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:91aaee4c94cb45930684f583ffc4e7c01a52b46610971cede33586cf8a04a12e"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:3f5702828c10768f9281180a7ff8597da1e5002803e1304e9519dd0f06d79a85"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:ccd1763b608fb4629a0b08f00b3c099d6395e67c14e619f6341b2c8429c2f310"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:cc7a0d4b2cb166bc46d02c8c9f7551cde8e2f3c9789df3827309433ee9771163"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7496f53d40560a58964207b52586783633f371683834a8f719d6d965d223a2eb"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-win32.whl", hash = "sha256:5eb1a9272ca71bc72be5415c2fa8448a6302ea4578e181bb7da9db855b367df0"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-win_amd64.whl", hash = "sha256:0d21fc3c0ca507a1180152a6dbd129ebaef48facde3f943db5c1055b6e6be56a"},
+ {file = "rapidfuzz-3.9.6-cp312-cp312-win_arm64.whl", hash = "sha256:43bb27a57c29dc5fa754496ba6a1a508480d21ae99ac0d19597646c16407e9f3"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:83a5ac6547a9d6eedaa212975cb8f2ce2aa07e6e30833b40e54a52b9f9999aa4"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:10f06139142ecde67078ebc9a745965446132b998f9feebffd71acdf218acfcc"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:74720c3f24597f76c7c3e2c4abdff55f1664f4766ff5b28aeaa689f8ffba5fab"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ce2bce52b5c150878e558a0418c2b637fb3dbb6eb38e4eb27d24aa839920483e"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1611199f178793ca9a060c99b284e11f6d7d124998191f1cace9a0245334d219"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0308b2ad161daf502908a6e21a57c78ded0258eba9a8f5e2545e2dafca312507"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3eda91832201b86e3b70835f91522587725bec329ec68f2f7faf5124091e5ca7"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ece873c093aedd87fc07c2a7e333d52e458dc177016afa1edaf157e82b6914d8"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d97d3c9d209d5c30172baea5966f2129e8a198fec4a1aeb2f92abb6e82a2edb1"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:6c4550d0db4931f5ebe9f0678916d1b06f06f5a99ba0b8a48b9457fd8959a7d4"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:b6b8dd4af6324fc325d9483bec75ecf9be33e590928c9202d408e4eafff6a0a6"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:16122ae448bc89e2bea9d81ce6cb0f751e4e07da39bd1e70b95cae2493857853"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-win32.whl", hash = "sha256:71cc168c305a4445109cd0d4925406f6e66bcb48fde99a1835387c58af4ecfe9"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-win_amd64.whl", hash = "sha256:59ee78f2ecd53fef8454909cda7400fe2cfcd820f62b8a5d4dfe930102268054"},
+ {file = "rapidfuzz-3.9.6-cp313-cp313-win_arm64.whl", hash = "sha256:58b4ce83f223605c358ae37e7a2d19a41b96aa65b1fede99cc664c9053af89ac"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9f469dbc9c4aeaac7dd005992af74b7dff94aa56a3ea063ce64e4b3e6736dd2f"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a9ed7ad9adb68d0fe63a156fe752bbf5f1403ed66961551e749641af2874da92"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:39ffe48ffbeedf78d120ddfb9d583f2ca906712159a4e9c3c743c9f33e7b1775"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8502ccdea9084d54b6f737d96a3b60a84e3afed9d016686dc979b49cdac71613"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6a4bec4956e06b170ca896ba055d08d4c457dac745548172443982956a80e118"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2c0488b1c273be39e109ff885ccac0448b2fa74dea4c4dc676bcf756c15f16d6"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0542c036cb6acf24edd2c9e0411a67d7ba71e29e4d3001a082466b86fc34ff30"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:0a96b52c9f26857bf009e270dcd829381e7a634f7ddd585fa29b87d4c82146d9"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:6edd3cd7c4aa8c68c716d349f531bd5011f2ca49ddade216bb4429460151559f"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:50b2fb55d7ed58c66d49c9f954acd8fc4a3f0e9fd0ff708299bd8abb68238d0e"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:32848dfe54391636b84cda1823fd23e5a6b1dbb8be0e9a1d80e4ee9903820994"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:29146cb7a1bf69c87e928b31bffa54f066cb65639d073b36e1425f98cccdebc6"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-win32.whl", hash = "sha256:aed13e5edacb0ecadcc304cc66e93e7e77ff24f059c9792ee602c0381808e10c"},
+ {file = "rapidfuzz-3.9.6-cp38-cp38-win_amd64.whl", hash = "sha256:af440e36b828922256d0b4d79443bf2cbe5515fc4b0e9e96017ec789b36bb9fc"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:efa674b407424553024522159296690d99d6e6b1192cafe99ca84592faff16b4"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0b40ff76ee19b03ebf10a0a87938f86814996a822786c41c3312d251b7927849"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:16a6c7997cb5927ced6f617122eb116ba514ec6b6f60f4803e7925ef55158891"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3f42504bdc8d770987fc3d99964766d42b2a03e4d5b0f891decdd256236bae0"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9462aa2be9f60b540c19a083471fdf28e7cf6434f068b631525b5e6251b35e"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1629698e68f47609a73bf9e73a6da3a4cac20bc710529215cbdf111ab603665b"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68bc7621843d8e9a7fd1b1a32729465bf94b47b6fb307d906da168413331f8d6"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:c6254c50f15bc2fcc33cb93a95a81b702d9e6590f432a7f7822b8c7aba9ae288"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:7e535a114fa575bc143e175e4ca386a467ec8c42909eff500f5f0f13dc84e3e0"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:d50acc0e9d67e4ba7a004a14c42d1b1e8b6ca1c515692746f4f8e7948c673167"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:fa742ec60bec53c5a211632cf1d31b9eb5a3c80f1371a46a23ac25a1fa2ab209"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c256fa95d29cbe5aa717db790b231a9a5b49e5983d50dc9df29d364a1db5e35b"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-win32.whl", hash = "sha256:89acbf728b764421036c173a10ada436ecca22999851cdc01d0aa904c70d362d"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-win_amd64.whl", hash = "sha256:c608fcba8b14d86c04cb56b203fed31a96e8a1ebb4ce99e7b70313c5bf8cf497"},
+ {file = "rapidfuzz-3.9.6-cp39-cp39-win_arm64.whl", hash = "sha256:d41c00ded0e22e9dba88ff23ebe0dc9d2a5f21ba2f88e185ea7374461e61daa9"},
+ {file = "rapidfuzz-3.9.6-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:a65c2f63218ea2dedd56fc56361035e189ca123bd9c9ce63a9bef6f99540d681"},
+ {file = "rapidfuzz-3.9.6-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:680dc78a5f889d3b89f74824b89fe357f49f88ad10d2c121e9c3ad37bac1e4eb"},
+ {file = "rapidfuzz-3.9.6-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8ca862927a0b05bd825e46ddf82d0724ea44b07d898ef639386530bf9b40f15"},
+ {file = "rapidfuzz-3.9.6-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2116fa1fbff21fa52cd46f3cfcb1e193ba1d65d81f8b6e123193451cd3d6c15e"},
+ {file = "rapidfuzz-3.9.6-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4dcb7d9afd740370a897c15da61d3d57a8d54738d7c764a99cedb5f746d6a003"},
+ {file = "rapidfuzz-3.9.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:1a5bd6401bb489e14cbb5981c378d53ede850b7cc84b2464cad606149cc4e17d"},
+ {file = "rapidfuzz-3.9.6-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:29fda70b9d03e29df6fc45cc27cbcc235534b1b0b2900e0a3ae0b43022aaeef5"},
+ {file = "rapidfuzz-3.9.6-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:88144f5f52ae977df9352029488326afadd7a7f42c6779d486d1f82d43b2b1f2"},
+ {file = "rapidfuzz-3.9.6-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:715aeaabafba2709b9dd91acb2a44bad59d60b4616ef90c08f4d4402a3bbca60"},
+ {file = "rapidfuzz-3.9.6-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:af26ebd3714224fbf9bebbc27bdbac14f334c15f5d7043699cd694635050d6ca"},
+ {file = "rapidfuzz-3.9.6-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101bd2df438861a005ed47c032631b7857dfcdb17b82beeeb410307983aac61d"},
+ {file = "rapidfuzz-3.9.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:2185e8e29809b97ad22a7f99281d1669a89bdf5fa1ef4ef1feca36924e675367"},
+ {file = "rapidfuzz-3.9.6-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:9e53c72d08f0e9c6e4a369e52df5971f311305b4487690c62e8dd0846770260c"},
+ {file = "rapidfuzz-3.9.6-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:a0cb157162f0cdd62e538c7bd298ff669847fc43a96422811d5ab933f4c16c3a"},
+ {file = "rapidfuzz-3.9.6-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4bb5ff2bd48132ed5e7fbb8f619885facb2e023759f2519a448b2c18afe07e5d"},
+ {file = "rapidfuzz-3.9.6-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6dc37f601865e8407e3a8037ffbc3afe0b0f837b2146f7632bd29d087385babe"},
+ {file = "rapidfuzz-3.9.6-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a657eee4b94668faf1fa2703bdd803654303f7e468eb9ba10a664d867ed9e779"},
+ {file = "rapidfuzz-3.9.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:51be6ab5b1d5bb32abd39718f2a5e3835502e026a8272d139ead295c224a6f5e"},
+ {file = "rapidfuzz-3.9.6.tar.gz", hash = "sha256:5cf2a7d621e4515fee84722e93563bf77ff2cbe832a77a48b81f88f9e23b9e8d"},
]
[package.extras]
@@ -7059,109 +7092,109 @@ test = ["coveralls", "pycodestyle", "pyflakes", "pylint", "pytest", "pytest-benc
[[package]]
name = "redis"
-version = "5.0.7"
+version = "5.0.8"
description = "Python client for Redis database and key-value store"
optional = false
python-versions = ">=3.7"
files = [
- {file = "redis-5.0.7-py3-none-any.whl", hash = "sha256:0e479e24da960c690be5d9b96d21f7b918a98c0cf49af3b6fafaa0753f93a0db"},
- {file = "redis-5.0.7.tar.gz", hash = "sha256:8f611490b93c8109b50adc317b31bfd84fff31def3475b92e7e80bf39f48175b"},
+ {file = "redis-5.0.8-py3-none-any.whl", hash = "sha256:56134ee08ea909106090934adc36f65c9bcbbaecea5b21ba704ba6fb561f8eb4"},
+ {file = "redis-5.0.8.tar.gz", hash = "sha256:0c5b10d387568dfe0698c6fad6615750c24170e548ca2deac10c649d463e9870"},
]
[package.dependencies]
async-timeout = {version = ">=4.0.3", markers = "python_full_version < \"3.11.3\""}
-hiredis = {version = ">=1.0.0", optional = true, markers = "extra == \"hiredis\""}
+hiredis = {version = ">1.0.0", optional = true, markers = "extra == \"hiredis\""}
[package.extras]
-hiredis = ["hiredis (>=1.0.0)"]
+hiredis = ["hiredis (>1.0.0)"]
ocsp = ["cryptography (>=36.0.1)", "pyopenssl (==20.0.1)", "requests (>=2.26.0)"]
[[package]]
name = "regex"
-version = "2024.5.15"
+version = "2024.7.24"
description = "Alternative regular expression module, to replace re."
optional = false
python-versions = ">=3.8"
files = [
- {file = "regex-2024.5.15-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a81e3cfbae20378d75185171587cbf756015ccb14840702944f014e0d93ea09f"},
- {file = "regex-2024.5.15-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7b59138b219ffa8979013be7bc85bb60c6f7b7575df3d56dc1e403a438c7a3f6"},
- {file = "regex-2024.5.15-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a0bd000c6e266927cb7a1bc39d55be95c4b4f65c5be53e659537537e019232b1"},
- {file = "regex-2024.5.15-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5eaa7ddaf517aa095fa8da0b5015c44d03da83f5bd49c87961e3c997daed0de7"},
- {file = "regex-2024.5.15-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ba68168daedb2c0bab7fd7e00ced5ba90aebf91024dea3c88ad5063c2a562cca"},
- {file = "regex-2024.5.15-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6e8d717bca3a6e2064fc3a08df5cbe366369f4b052dcd21b7416e6d71620dca1"},
- {file = "regex-2024.5.15-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1337b7dbef9b2f71121cdbf1e97e40de33ff114801263b275aafd75303bd62b5"},
- {file = "regex-2024.5.15-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9ebd0a36102fcad2f03696e8af4ae682793a5d30b46c647eaf280d6cfb32796"},
- {file = "regex-2024.5.15-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9efa1a32ad3a3ea112224897cdaeb6aa00381627f567179c0314f7b65d354c62"},
- {file = "regex-2024.5.15-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:1595f2d10dff3d805e054ebdc41c124753631b6a471b976963c7b28543cf13b0"},
- {file = "regex-2024.5.15-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:b802512f3e1f480f41ab5f2cfc0e2f761f08a1f41092d6718868082fc0d27143"},
- {file = "regex-2024.5.15-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:a0981022dccabca811e8171f913de05720590c915b033b7e601f35ce4ea7019f"},
- {file = "regex-2024.5.15-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:19068a6a79cf99a19ccefa44610491e9ca02c2be3305c7760d3831d38a467a6f"},
- {file = "regex-2024.5.15-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:1b5269484f6126eee5e687785e83c6b60aad7663dafe842b34691157e5083e53"},
- {file = "regex-2024.5.15-cp310-cp310-win32.whl", hash = "sha256:ada150c5adfa8fbcbf321c30c751dc67d2f12f15bd183ffe4ec7cde351d945b3"},
- {file = "regex-2024.5.15-cp310-cp310-win_amd64.whl", hash = "sha256:ac394ff680fc46b97487941f5e6ae49a9f30ea41c6c6804832063f14b2a5a145"},
- {file = "regex-2024.5.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f5b1dff3ad008dccf18e652283f5e5339d70bf8ba7c98bf848ac33db10f7bc7a"},
- {file = "regex-2024.5.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c6a2b494a76983df8e3d3feea9b9ffdd558b247e60b92f877f93a1ff43d26656"},
- {file = "regex-2024.5.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a32b96f15c8ab2e7d27655969a23895eb799de3665fa94349f3b2fbfd547236f"},
- {file = "regex-2024.5.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:10002e86e6068d9e1c91eae8295ef690f02f913c57db120b58fdd35a6bb1af35"},
- {file = "regex-2024.5.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ec54d5afa89c19c6dd8541a133be51ee1017a38b412b1321ccb8d6ddbeb4cf7d"},
- {file = "regex-2024.5.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:10e4ce0dca9ae7a66e6089bb29355d4432caed736acae36fef0fdd7879f0b0cb"},
- {file = "regex-2024.5.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e507ff1e74373c4d3038195fdd2af30d297b4f0950eeda6f515ae3d84a1770f"},
- {file = "regex-2024.5.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d1f059a4d795e646e1c37665b9d06062c62d0e8cc3c511fe01315973a6542e40"},
- {file = "regex-2024.5.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0721931ad5fe0dda45d07f9820b90b2148ccdd8e45bb9e9b42a146cb4f695649"},
- {file = "regex-2024.5.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:833616ddc75ad595dee848ad984d067f2f31be645d603e4d158bba656bbf516c"},
- {file = "regex-2024.5.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:287eb7f54fc81546346207c533ad3c2c51a8d61075127d7f6d79aaf96cdee890"},
- {file = "regex-2024.5.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:19dfb1c504781a136a80ecd1fff9f16dddf5bb43cec6871778c8a907a085bb3d"},
- {file = "regex-2024.5.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:119af6e56dce35e8dfb5222573b50c89e5508d94d55713c75126b753f834de68"},
- {file = "regex-2024.5.15-cp311-cp311-win32.whl", hash = "sha256:1c1c174d6ec38d6c8a7504087358ce9213d4332f6293a94fbf5249992ba54efa"},
- {file = "regex-2024.5.15-cp311-cp311-win_amd64.whl", hash = "sha256:9e717956dcfd656f5055cc70996ee2cc82ac5149517fc8e1b60261b907740201"},
- {file = "regex-2024.5.15-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:632b01153e5248c134007209b5c6348a544ce96c46005d8456de1d552455b014"},
- {file = "regex-2024.5.15-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:e64198f6b856d48192bf921421fdd8ad8eb35e179086e99e99f711957ffedd6e"},
- {file = "regex-2024.5.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:68811ab14087b2f6e0fc0c2bae9ad689ea3584cad6917fc57be6a48bbd012c49"},
- {file = "regex-2024.5.15-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ec0c2fea1e886a19c3bee0cd19d862b3aa75dcdfb42ebe8ed30708df64687a"},
- {file = "regex-2024.5.15-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d0c0c0003c10f54a591d220997dd27d953cd9ccc1a7294b40a4be5312be8797b"},
- {file = "regex-2024.5.15-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2431b9e263af1953c55abbd3e2efca67ca80a3de8a0437cb58e2421f8184717a"},
- {file = "regex-2024.5.15-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a605586358893b483976cffc1723fb0f83e526e8f14c6e6614e75919d9862cf"},
- {file = "regex-2024.5.15-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:391d7f7f1e409d192dba8bcd42d3e4cf9e598f3979cdaed6ab11288da88cb9f2"},
- {file = "regex-2024.5.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:9ff11639a8d98969c863d4617595eb5425fd12f7c5ef6621a4b74b71ed8726d5"},
- {file = "regex-2024.5.15-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4eee78a04e6c67e8391edd4dad3279828dd66ac4b79570ec998e2155d2e59fd5"},
- {file = "regex-2024.5.15-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:8fe45aa3f4aa57faabbc9cb46a93363edd6197cbc43523daea044e9ff2fea83e"},
- {file = "regex-2024.5.15-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:d0a3d8d6acf0c78a1fff0e210d224b821081330b8524e3e2bc5a68ef6ab5803d"},
- {file = "regex-2024.5.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c486b4106066d502495b3025a0a7251bf37ea9540433940a23419461ab9f2a80"},
- {file = "regex-2024.5.15-cp312-cp312-win32.whl", hash = "sha256:c49e15eac7c149f3670b3e27f1f28a2c1ddeccd3a2812cba953e01be2ab9b5fe"},
- {file = "regex-2024.5.15-cp312-cp312-win_amd64.whl", hash = "sha256:673b5a6da4557b975c6c90198588181029c60793835ce02f497ea817ff647cb2"},
- {file = "regex-2024.5.15-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:87e2a9c29e672fc65523fb47a90d429b70ef72b901b4e4b1bd42387caf0d6835"},
- {file = "regex-2024.5.15-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c3bea0ba8b73b71b37ac833a7f3fd53825924165da6a924aec78c13032f20850"},
- {file = "regex-2024.5.15-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:bfc4f82cabe54f1e7f206fd3d30fda143f84a63fe7d64a81558d6e5f2e5aaba9"},
- {file = "regex-2024.5.15-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5bb9425fe881d578aeca0b2b4b3d314ec88738706f66f219c194d67179337cb"},
- {file = "regex-2024.5.15-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:64c65783e96e563103d641760664125e91bd85d8e49566ee560ded4da0d3e704"},
- {file = "regex-2024.5.15-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cf2430df4148b08fb4324b848672514b1385ae3807651f3567871f130a728cc3"},
- {file = "regex-2024.5.15-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5397de3219a8b08ae9540c48f602996aa6b0b65d5a61683e233af8605c42b0f2"},
- {file = "regex-2024.5.15-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:455705d34b4154a80ead722f4f185b04c4237e8e8e33f265cd0798d0e44825fa"},
- {file = "regex-2024.5.15-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b2b6f1b3bb6f640c1a92be3bbfbcb18657b125b99ecf141fb3310b5282c7d4ed"},
- {file = "regex-2024.5.15-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:3ad070b823ca5890cab606c940522d05d3d22395d432f4aaaf9d5b1653e47ced"},
- {file = "regex-2024.5.15-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:5b5467acbfc153847d5adb21e21e29847bcb5870e65c94c9206d20eb4e99a384"},
- {file = "regex-2024.5.15-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:e6662686aeb633ad65be2a42b4cb00178b3fbf7b91878f9446075c404ada552f"},
- {file = "regex-2024.5.15-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:2b4c884767504c0e2401babe8b5b7aea9148680d2e157fa28f01529d1f7fcf67"},
- {file = "regex-2024.5.15-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:3cd7874d57f13bf70078f1ff02b8b0aa48d5b9ed25fc48547516c6aba36f5741"},
- {file = "regex-2024.5.15-cp38-cp38-win32.whl", hash = "sha256:e4682f5ba31f475d58884045c1a97a860a007d44938c4c0895f41d64481edbc9"},
- {file = "regex-2024.5.15-cp38-cp38-win_amd64.whl", hash = "sha256:d99ceffa25ac45d150e30bd9ed14ec6039f2aad0ffa6bb87a5936f5782fc1569"},
- {file = "regex-2024.5.15-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:13cdaf31bed30a1e1c2453ef6015aa0983e1366fad2667657dbcac7b02f67133"},
- {file = "regex-2024.5.15-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:cac27dcaa821ca271855a32188aa61d12decb6fe45ffe3e722401fe61e323cd1"},
- {file = "regex-2024.5.15-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7dbe2467273b875ea2de38ded4eba86cbcbc9a1a6d0aa11dcf7bd2e67859c435"},
- {file = "regex-2024.5.15-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:64f18a9a3513a99c4bef0e3efd4c4a5b11228b48aa80743be822b71e132ae4f5"},
- {file = "regex-2024.5.15-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d347a741ea871c2e278fde6c48f85136c96b8659b632fb57a7d1ce1872547600"},
- {file = "regex-2024.5.15-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1878b8301ed011704aea4c806a3cadbd76f84dece1ec09cc9e4dc934cfa5d4da"},
- {file = "regex-2024.5.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4babf07ad476aaf7830d77000874d7611704a7fcf68c9c2ad151f5d94ae4bfc4"},
- {file = "regex-2024.5.15-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35cb514e137cb3488bce23352af3e12fb0dbedd1ee6e60da053c69fb1b29cc6c"},
- {file = "regex-2024.5.15-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cdd09d47c0b2efee9378679f8510ee6955d329424c659ab3c5e3a6edea696294"},
- {file = "regex-2024.5.15-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:72d7a99cd6b8f958e85fc6ca5b37c4303294954eac1376535b03c2a43eb72629"},
- {file = "regex-2024.5.15-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:a094801d379ab20c2135529948cb84d417a2169b9bdceda2a36f5f10977ebc16"},
- {file = "regex-2024.5.15-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:c0c18345010870e58238790a6779a1219b4d97bd2e77e1140e8ee5d14df071aa"},
- {file = "regex-2024.5.15-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:16093f563098448ff6b1fa68170e4acbef94e6b6a4e25e10eae8598bb1694b5d"},
- {file = "regex-2024.5.15-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:e38a7d4e8f633a33b4c7350fbd8bad3b70bf81439ac67ac38916c4a86b465456"},
- {file = "regex-2024.5.15-cp39-cp39-win32.whl", hash = "sha256:71a455a3c584a88f654b64feccc1e25876066c4f5ef26cd6dd711308aa538694"},
- {file = "regex-2024.5.15-cp39-cp39-win_amd64.whl", hash = "sha256:cab12877a9bdafde5500206d1020a584355a97884dfd388af3699e9137bf7388"},
- {file = "regex-2024.5.15.tar.gz", hash = "sha256:d3ee02d9e5f482cc8309134a91eeaacbdd2261ba111b0fef3748eeb4913e6a2c"},
+ {file = "regex-2024.7.24-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:228b0d3f567fafa0633aee87f08b9276c7062da9616931382993c03808bb68ce"},
+ {file = "regex-2024.7.24-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3426de3b91d1bc73249042742f45c2148803c111d1175b283270177fdf669024"},
+ {file = "regex-2024.7.24-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f273674b445bcb6e4409bf8d1be67bc4b58e8b46fd0d560055d515b8830063cd"},
+ {file = "regex-2024.7.24-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23acc72f0f4e1a9e6e9843d6328177ae3074b4182167e34119ec7233dfeccf53"},
+ {file = "regex-2024.7.24-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:65fd3d2e228cae024c411c5ccdffae4c315271eee4a8b839291f84f796b34eca"},
+ {file = "regex-2024.7.24-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c414cbda77dbf13c3bc88b073a1a9f375c7b0cb5e115e15d4b73ec3a2fbc6f59"},
+ {file = "regex-2024.7.24-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf7a89eef64b5455835f5ed30254ec19bf41f7541cd94f266ab7cbd463f00c41"},
+ {file = "regex-2024.7.24-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19c65b00d42804e3fbea9708f0937d157e53429a39b7c61253ff15670ff62cb5"},
+ {file = "regex-2024.7.24-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7a5486ca56c8869070a966321d5ab416ff0f83f30e0e2da1ab48815c8d165d46"},
+ {file = "regex-2024.7.24-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6f51f9556785e5a203713f5efd9c085b4a45aecd2a42573e2b5041881b588d1f"},
+ {file = "regex-2024.7.24-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:a4997716674d36a82eab3e86f8fa77080a5d8d96a389a61ea1d0e3a94a582cf7"},
+ {file = "regex-2024.7.24-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:c0abb5e4e8ce71a61d9446040c1e86d4e6d23f9097275c5bd49ed978755ff0fe"},
+ {file = "regex-2024.7.24-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:18300a1d78cf1290fa583cd8b7cde26ecb73e9f5916690cf9d42de569c89b1ce"},
+ {file = "regex-2024.7.24-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:416c0e4f56308f34cdb18c3f59849479dde5b19febdcd6e6fa4d04b6c31c9faa"},
+ {file = "regex-2024.7.24-cp310-cp310-win32.whl", hash = "sha256:fb168b5924bef397b5ba13aabd8cf5df7d3d93f10218d7b925e360d436863f66"},
+ {file = "regex-2024.7.24-cp310-cp310-win_amd64.whl", hash = "sha256:6b9fc7e9cc983e75e2518496ba1afc524227c163e43d706688a6bb9eca41617e"},
+ {file = "regex-2024.7.24-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:382281306e3adaaa7b8b9ebbb3ffb43358a7bbf585fa93821300a418bb975281"},
+ {file = "regex-2024.7.24-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4fdd1384619f406ad9037fe6b6eaa3de2749e2e12084abc80169e8e075377d3b"},
+ {file = "regex-2024.7.24-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3d974d24edb231446f708c455fd08f94c41c1ff4f04bcf06e5f36df5ef50b95a"},
+ {file = "regex-2024.7.24-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2ec4419a3fe6cf8a4795752596dfe0adb4aea40d3683a132bae9c30b81e8d73"},
+ {file = "regex-2024.7.24-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eb563dd3aea54c797adf513eeec819c4213d7dbfc311874eb4fd28d10f2ff0f2"},
+ {file = "regex-2024.7.24-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:45104baae8b9f67569f0f1dca5e1f1ed77a54ae1cd8b0b07aba89272710db61e"},
+ {file = "regex-2024.7.24-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:994448ee01864501912abf2bad9203bffc34158e80fe8bfb5b031f4f8e16da51"},
+ {file = "regex-2024.7.24-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3fac296f99283ac232d8125be932c5cd7644084a30748fda013028c815ba3364"},
+ {file = "regex-2024.7.24-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7e37e809b9303ec3a179085415cb5f418ecf65ec98cdfe34f6a078b46ef823ee"},
+ {file = "regex-2024.7.24-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:01b689e887f612610c869421241e075c02f2e3d1ae93a037cb14f88ab6a8934c"},
+ {file = "regex-2024.7.24-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:f6442f0f0ff81775eaa5b05af8a0ffa1dda36e9cf6ec1e0d3d245e8564b684ce"},
+ {file = "regex-2024.7.24-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:871e3ab2838fbcb4e0865a6e01233975df3a15e6fce93b6f99d75cacbd9862d1"},
+ {file = "regex-2024.7.24-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c918b7a1e26b4ab40409820ddccc5d49871a82329640f5005f73572d5eaa9b5e"},
+ {file = "regex-2024.7.24-cp311-cp311-win32.whl", hash = "sha256:2dfbb8baf8ba2c2b9aa2807f44ed272f0913eeeba002478c4577b8d29cde215c"},
+ {file = "regex-2024.7.24-cp311-cp311-win_amd64.whl", hash = "sha256:538d30cd96ed7d1416d3956f94d54e426a8daf7c14527f6e0d6d425fcb4cca52"},
+ {file = "regex-2024.7.24-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:fe4ebef608553aff8deb845c7f4f1d0740ff76fa672c011cc0bacb2a00fbde86"},
+ {file = "regex-2024.7.24-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:74007a5b25b7a678459f06559504f1eec2f0f17bca218c9d56f6a0a12bfffdad"},
+ {file = "regex-2024.7.24-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7df9ea48641da022c2a3c9c641650cd09f0cd15e8908bf931ad538f5ca7919c9"},
+ {file = "regex-2024.7.24-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a1141a1dcc32904c47f6846b040275c6e5de0bf73f17d7a409035d55b76f289"},
+ {file = "regex-2024.7.24-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:80c811cfcb5c331237d9bad3bea2c391114588cf4131707e84d9493064d267f9"},
+ {file = "regex-2024.7.24-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7214477bf9bd195894cf24005b1e7b496f46833337b5dedb7b2a6e33f66d962c"},
+ {file = "regex-2024.7.24-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d55588cba7553f0b6ec33130bc3e114b355570b45785cebdc9daed8c637dd440"},
+ {file = "regex-2024.7.24-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:558a57cfc32adcf19d3f791f62b5ff564922942e389e3cfdb538a23d65a6b610"},
+ {file = "regex-2024.7.24-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a512eed9dfd4117110b1881ba9a59b31433caed0c4101b361f768e7bcbaf93c5"},
+ {file = "regex-2024.7.24-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:86b17ba823ea76256b1885652e3a141a99a5c4422f4a869189db328321b73799"},
+ {file = "regex-2024.7.24-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5eefee9bfe23f6df09ffb6dfb23809f4d74a78acef004aa904dc7c88b9944b05"},
+ {file = "regex-2024.7.24-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:731fcd76bbdbf225e2eb85b7c38da9633ad3073822f5ab32379381e8c3c12e94"},
+ {file = "regex-2024.7.24-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:eaef80eac3b4cfbdd6de53c6e108b4c534c21ae055d1dbea2de6b3b8ff3def38"},
+ {file = "regex-2024.7.24-cp312-cp312-win32.whl", hash = "sha256:185e029368d6f89f36e526764cf12bf8d6f0e3a2a7737da625a76f594bdfcbfc"},
+ {file = "regex-2024.7.24-cp312-cp312-win_amd64.whl", hash = "sha256:2f1baff13cc2521bea83ab2528e7a80cbe0ebb2c6f0bfad15be7da3aed443908"},
+ {file = "regex-2024.7.24-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:66b4c0731a5c81921e938dcf1a88e978264e26e6ac4ec96a4d21ae0354581ae0"},
+ {file = "regex-2024.7.24-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:88ecc3afd7e776967fa16c80f974cb79399ee8dc6c96423321d6f7d4b881c92b"},
+ {file = "regex-2024.7.24-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:64bd50cf16bcc54b274e20235bf8edbb64184a30e1e53873ff8d444e7ac656b2"},
+ {file = "regex-2024.7.24-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eb462f0e346fcf41a901a126b50f8781e9a474d3927930f3490f38a6e73b6950"},
+ {file = "regex-2024.7.24-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a82465ebbc9b1c5c50738536fdfa7cab639a261a99b469c9d4c7dcbb2b3f1e57"},
+ {file = "regex-2024.7.24-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:68a8f8c046c6466ac61a36b65bb2395c74451df2ffb8458492ef49900efed293"},
+ {file = "regex-2024.7.24-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dac8e84fff5d27420f3c1e879ce9929108e873667ec87e0c8eeb413a5311adfe"},
+ {file = "regex-2024.7.24-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba2537ef2163db9e6ccdbeb6f6424282ae4dea43177402152c67ef869cf3978b"},
+ {file = "regex-2024.7.24-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:43affe33137fcd679bdae93fb25924979517e011f9dea99163f80b82eadc7e53"},
+ {file = "regex-2024.7.24-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:c9bb87fdf2ab2370f21e4d5636e5317775e5d51ff32ebff2cf389f71b9b13750"},
+ {file = "regex-2024.7.24-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:945352286a541406f99b2655c973852da7911b3f4264e010218bbc1cc73168f2"},
+ {file = "regex-2024.7.24-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:8bc593dcce679206b60a538c302d03c29b18e3d862609317cb560e18b66d10cf"},
+ {file = "regex-2024.7.24-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:3f3b6ca8eae6d6c75a6cff525c8530c60e909a71a15e1b731723233331de4169"},
+ {file = "regex-2024.7.24-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:c51edc3541e11fbe83f0c4d9412ef6c79f664a3745fab261457e84465ec9d5a8"},
+ {file = "regex-2024.7.24-cp38-cp38-win32.whl", hash = "sha256:d0a07763776188b4db4c9c7fb1b8c494049f84659bb387b71c73bbc07f189e96"},
+ {file = "regex-2024.7.24-cp38-cp38-win_amd64.whl", hash = "sha256:8fd5afd101dcf86a270d254364e0e8dddedebe6bd1ab9d5f732f274fa00499a5"},
+ {file = "regex-2024.7.24-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:0ffe3f9d430cd37d8fa5632ff6fb36d5b24818c5c986893063b4e5bdb84cdf24"},
+ {file = "regex-2024.7.24-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:25419b70ba00a16abc90ee5fce061228206173231f004437730b67ac77323f0d"},
+ {file = "regex-2024.7.24-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:33e2614a7ce627f0cdf2ad104797d1f68342d967de3695678c0cb84f530709f8"},
+ {file = "regex-2024.7.24-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d33a0021893ede5969876052796165bab6006559ab845fd7b515a30abdd990dc"},
+ {file = "regex-2024.7.24-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:04ce29e2c5fedf296b1a1b0acc1724ba93a36fb14031f3abfb7abda2806c1535"},
+ {file = "regex-2024.7.24-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b16582783f44fbca6fcf46f61347340c787d7530d88b4d590a397a47583f31dd"},
+ {file = "regex-2024.7.24-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:836d3cc225b3e8a943d0b02633fb2f28a66e281290302a79df0e1eaa984ff7c1"},
+ {file = "regex-2024.7.24-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:438d9f0f4bc64e8dea78274caa5af971ceff0f8771e1a2333620969936ba10be"},
+ {file = "regex-2024.7.24-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:973335b1624859cb0e52f96062a28aa18f3a5fc77a96e4a3d6d76e29811a0e6e"},
+ {file = "regex-2024.7.24-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:c5e69fd3eb0b409432b537fe3c6f44ac089c458ab6b78dcec14478422879ec5f"},
+ {file = "regex-2024.7.24-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:fbf8c2f00904eaf63ff37718eb13acf8e178cb940520e47b2f05027f5bb34ce3"},
+ {file = "regex-2024.7.24-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:ae2757ace61bc4061b69af19e4689fa4416e1a04840f33b441034202b5cd02d4"},
+ {file = "regex-2024.7.24-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:44fc61b99035fd9b3b9453f1713234e5a7c92a04f3577252b45feefe1b327759"},
+ {file = "regex-2024.7.24-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:84c312cdf839e8b579f504afcd7b65f35d60b6285d892b19adea16355e8343c9"},
+ {file = "regex-2024.7.24-cp39-cp39-win32.whl", hash = "sha256:ca5b2028c2f7af4e13fb9fc29b28d0ce767c38c7facdf64f6c2cd040413055f1"},
+ {file = "regex-2024.7.24-cp39-cp39-win_amd64.whl", hash = "sha256:7c479f5ae937ec9985ecaf42e2e10631551d909f203e31308c12d703922742f9"},
+ {file = "regex-2024.7.24.tar.gz", hash = "sha256:9cfd009eed1a46b27c14039ad5bbc5e71b6367c5b2e6d5f5da0ea91600817506"},
]
[[package]]
@@ -7299,29 +7332,29 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
-version = "0.5.4"
+version = "0.5.7"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
- {file = "ruff-0.5.4-py3-none-linux_armv6l.whl", hash = "sha256:82acef724fc639699b4d3177ed5cc14c2a5aacd92edd578a9e846d5b5ec18ddf"},
- {file = "ruff-0.5.4-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:da62e87637c8838b325e65beee485f71eb36202ce8e3cdbc24b9fcb8b99a37be"},
- {file = "ruff-0.5.4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:e98ad088edfe2f3b85a925ee96da652028f093d6b9b56b76fc242d8abb8e2059"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c55efbecc3152d614cfe6c2247a3054cfe358cefbf794f8c79c8575456efe19"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f9b85eaa1f653abd0a70603b8b7008d9e00c9fa1bbd0bf40dad3f0c0bdd06793"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0cf497a47751be8c883059c4613ba2f50dd06ec672692de2811f039432875278"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:09c14ed6a72af9ccc8d2e313d7acf7037f0faff43cde4b507e66f14e812e37f7"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:628f6b8f97b8bad2490240aa84f3e68f390e13fabc9af5c0d3b96b485921cd60"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3520a00c0563d7a7a7c324ad7e2cde2355733dafa9592c671fb2e9e3cd8194c1"},
- {file = "ruff-0.5.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:93789f14ca2244fb91ed481456f6d0bb8af1f75a330e133b67d08f06ad85b516"},
- {file = "ruff-0.5.4-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:029454e2824eafa25b9df46882f7f7844d36fd8ce51c1b7f6d97e2615a57bbcc"},
- {file = "ruff-0.5.4-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:9492320eed573a13a0bc09a2957f17aa733fff9ce5bf00e66e6d4a88ec33813f"},
- {file = "ruff-0.5.4-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a6e1f62a92c645e2919b65c02e79d1f61e78a58eddaebca6c23659e7c7cb4ac7"},
- {file = "ruff-0.5.4-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:768fa9208df2bec4b2ce61dbc7c2ddd6b1be9fb48f1f8d3b78b3332c7d71c1ff"},
- {file = "ruff-0.5.4-py3-none-win32.whl", hash = "sha256:e1e7393e9c56128e870b233c82ceb42164966f25b30f68acbb24ed69ce9c3a4e"},
- {file = "ruff-0.5.4-py3-none-win_amd64.whl", hash = "sha256:58b54459221fd3f661a7329f177f091eb35cf7a603f01d9eb3eb11cc348d38c4"},
- {file = "ruff-0.5.4-py3-none-win_arm64.whl", hash = "sha256:bd53da65f1085fb5b307c38fd3c0829e76acf7b2a912d8d79cadcdb4875c1eb7"},
- {file = "ruff-0.5.4.tar.gz", hash = "sha256:2795726d5f71c4f4e70653273d1c23a8182f07dd8e48c12de5d867bfb7557eed"},
+ {file = "ruff-0.5.7-py3-none-linux_armv6l.whl", hash = "sha256:548992d342fc404ee2e15a242cdbea4f8e39a52f2e7752d0e4cbe88d2d2f416a"},
+ {file = "ruff-0.5.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:00cc8872331055ee017c4f1071a8a31ca0809ccc0657da1d154a1d2abac5c0be"},
+ {file = "ruff-0.5.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:eaf3d86a1fdac1aec8a3417a63587d93f906c678bb9ed0b796da7b59c1114a1e"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a01c34400097b06cf8a6e61b35d6d456d5bd1ae6961542de18ec81eaf33b4cb8"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fcc8054f1a717e2213500edaddcf1dbb0abad40d98e1bd9d0ad364f75c763eea"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7f70284e73f36558ef51602254451e50dd6cc479f8b6f8413a95fcb5db4a55fc"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:a78ad870ae3c460394fc95437d43deb5c04b5c29297815a2a1de028903f19692"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9ccd078c66a8e419475174bfe60a69adb36ce04f8d4e91b006f1329d5cd44bcf"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7e31c9bad4ebf8fdb77b59cae75814440731060a09a0e0077d559a556453acbb"},
+ {file = "ruff-0.5.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d796327eed8e168164346b769dd9a27a70e0298d667b4ecee6877ce8095ec8e"},
+ {file = "ruff-0.5.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:4a09ea2c3f7778cc635e7f6edf57d566a8ee8f485f3c4454db7771efb692c499"},
+ {file = "ruff-0.5.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:a36d8dcf55b3a3bc353270d544fb170d75d2dff41eba5df57b4e0b67a95bb64e"},
+ {file = "ruff-0.5.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:9369c218f789eefbd1b8d82a8cf25017b523ac47d96b2f531eba73770971c9e5"},
+ {file = "ruff-0.5.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b88ca3db7eb377eb24fb7c82840546fb7acef75af4a74bd36e9ceb37a890257e"},
+ {file = "ruff-0.5.7-py3-none-win32.whl", hash = "sha256:33d61fc0e902198a3e55719f4be6b375b28f860b09c281e4bdbf783c0566576a"},
+ {file = "ruff-0.5.7-py3-none-win_amd64.whl", hash = "sha256:083bbcbe6fadb93cd86709037acc510f86eed5a314203079df174c40bbbca6b3"},
+ {file = "ruff-0.5.7-py3-none-win_arm64.whl", hash = "sha256:2dca26154ff9571995107221d0aeaad0e75a77b5a682d6236cf89a58c70b76f4"},
+ {file = "ruff-0.5.7.tar.gz", hash = "sha256:8dfc0a458797f5d9fb622dd0efc52d796f23f0a1493a9527f4e49a550ae9a7e5"},
]
[[package]]
@@ -7343,111 +7376,121 @@ crt = ["botocore[crt] (>=1.33.2,<2.0a.0)"]
[[package]]
name = "safetensors"
-version = "0.4.3"
+version = "0.4.4"
description = ""
optional = false
python-versions = ">=3.7"
files = [
- {file = "safetensors-0.4.3-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:dcf5705cab159ce0130cd56057f5f3425023c407e170bca60b4868048bae64fd"},
- {file = "safetensors-0.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bb4f8c5d0358a31e9a08daeebb68f5e161cdd4018855426d3f0c23bb51087055"},
- {file = "safetensors-0.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:70a5319ef409e7f88686a46607cbc3c428271069d8b770076feaf913664a07ac"},
- {file = "safetensors-0.4.3-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fb9c65bd82f9ef3ce4970dc19ee86be5f6f93d032159acf35e663c6bea02b237"},
- {file = "safetensors-0.4.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:edb5698a7bc282089f64c96c477846950358a46ede85a1c040e0230344fdde10"},
- {file = "safetensors-0.4.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:efcc860be094b8d19ac61b452ec635c7acb9afa77beb218b1d7784c6d41fe8ad"},
- {file = "safetensors-0.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d88b33980222085dd6001ae2cad87c6068e0991d4f5ccf44975d216db3b57376"},
- {file = "safetensors-0.4.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5fc6775529fb9f0ce2266edd3e5d3f10aab068e49f765e11f6f2a63b5367021d"},
- {file = "safetensors-0.4.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9c6ad011c1b4e3acff058d6b090f1da8e55a332fbf84695cf3100c649cc452d1"},
- {file = "safetensors-0.4.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c496c5401c1b9c46d41a7688e8ff5b0310a3b9bae31ce0f0ae870e1ea2b8caf"},
- {file = "safetensors-0.4.3-cp310-none-win32.whl", hash = "sha256:38e2a8666178224a51cca61d3cb4c88704f696eac8f72a49a598a93bbd8a4af9"},
- {file = "safetensors-0.4.3-cp310-none-win_amd64.whl", hash = "sha256:393e6e391467d1b2b829c77e47d726f3b9b93630e6a045b1d1fca67dc78bf632"},
- {file = "safetensors-0.4.3-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:22f3b5d65e440cec0de8edaa672efa888030802e11c09b3d6203bff60ebff05a"},
- {file = "safetensors-0.4.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7c4fa560ebd4522adddb71dcd25d09bf211b5634003f015a4b815b7647d62ebe"},
- {file = "safetensors-0.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9afd5358719f1b2cf425fad638fc3c887997d6782da317096877e5b15b2ce93"},
- {file = "safetensors-0.4.3-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d8c5093206ef4b198600ae484230402af6713dab1bd5b8e231905d754022bec7"},
- {file = "safetensors-0.4.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e0b2104df1579d6ba9052c0ae0e3137c9698b2d85b0645507e6fd1813b70931a"},
- {file = "safetensors-0.4.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8cf18888606dad030455d18f6c381720e57fc6a4170ee1966adb7ebc98d4d6a3"},
- {file = "safetensors-0.4.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0bf4f9d6323d9f86eef5567eabd88f070691cf031d4c0df27a40d3b4aaee755b"},
- {file = "safetensors-0.4.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:585c9ae13a205807b63bef8a37994f30c917ff800ab8a1ca9c9b5d73024f97ee"},
- {file = "safetensors-0.4.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:faefeb3b81bdfb4e5a55b9bbdf3d8d8753f65506e1d67d03f5c851a6c87150e9"},
- {file = "safetensors-0.4.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:befdf0167ad626f22f6aac6163477fcefa342224a22f11fdd05abb3995c1783c"},
- {file = "safetensors-0.4.3-cp311-none-win32.whl", hash = "sha256:a7cef55929dcbef24af3eb40bedec35d82c3c2fa46338bb13ecf3c5720af8a61"},
- {file = "safetensors-0.4.3-cp311-none-win_amd64.whl", hash = "sha256:840b7ac0eff5633e1d053cc9db12fdf56b566e9403b4950b2dc85393d9b88d67"},
- {file = "safetensors-0.4.3-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:22d21760dc6ebae42e9c058d75aa9907d9f35e38f896e3c69ba0e7b213033856"},
- {file = "safetensors-0.4.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d22c1a10dff3f64d0d68abb8298a3fd88ccff79f408a3e15b3e7f637ef5c980"},
- {file = "safetensors-0.4.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1648568667f820b8c48317c7006221dc40aced1869908c187f493838a1362bc"},
- {file = "safetensors-0.4.3-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:446e9fe52c051aeab12aac63d1017e0f68a02a92a027b901c4f8e931b24e5397"},
- {file = "safetensors-0.4.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fef5d70683643618244a4f5221053567ca3e77c2531e42ad48ae05fae909f542"},
- {file = "safetensors-0.4.3-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2a1f4430cc0c9d6afa01214a4b3919d0a029637df8e09675ceef1ca3f0dfa0df"},
- {file = "safetensors-0.4.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2d603846a8585b9432a0fd415db1d4c57c0f860eb4aea21f92559ff9902bae4d"},
- {file = "safetensors-0.4.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a844cdb5d7cbc22f5f16c7e2a0271170750763c4db08381b7f696dbd2c78a361"},
- {file = "safetensors-0.4.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:88887f69f7a00cf02b954cdc3034ffb383b2303bc0ab481d4716e2da51ddc10e"},
- {file = "safetensors-0.4.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:ee463219d9ec6c2be1d331ab13a8e0cd50d2f32240a81d498266d77d07b7e71e"},
- {file = "safetensors-0.4.3-cp312-none-win32.whl", hash = "sha256:d0dd4a1db09db2dba0f94d15addc7e7cd3a7b0d393aa4c7518c39ae7374623c3"},
- {file = "safetensors-0.4.3-cp312-none-win_amd64.whl", hash = "sha256:d14d30c25897b2bf19b6fb5ff7e26cc40006ad53fd4a88244fdf26517d852dd7"},
- {file = "safetensors-0.4.3-cp37-cp37m-macosx_10_12_x86_64.whl", hash = "sha256:d1456f814655b224d4bf6e7915c51ce74e389b413be791203092b7ff78c936dd"},
- {file = "safetensors-0.4.3-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:455d538aa1aae4a8b279344a08136d3f16334247907b18a5c3c7fa88ef0d3c46"},
- {file = "safetensors-0.4.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf476bca34e1340ee3294ef13e2c625833f83d096cfdf69a5342475602004f95"},
- {file = "safetensors-0.4.3-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:02ef3a24face643456020536591fbd3c717c5abaa2737ec428ccbbc86dffa7a4"},
- {file = "safetensors-0.4.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7de32d0d34b6623bb56ca278f90db081f85fb9c5d327e3c18fd23ac64f465768"},
- {file = "safetensors-0.4.3-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2a0deb16a1d3ea90c244ceb42d2c6c276059616be21a19ac7101aa97da448faf"},
- {file = "safetensors-0.4.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c59d51f182c729f47e841510b70b967b0752039f79f1de23bcdd86462a9b09ee"},
- {file = "safetensors-0.4.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1f598b713cc1a4eb31d3b3203557ac308acf21c8f41104cdd74bf640c6e538e3"},
- {file = "safetensors-0.4.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:5757e4688f20df083e233b47de43845d1adb7e17b6cf7da5f8444416fc53828d"},
- {file = "safetensors-0.4.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:fe746d03ed8d193674a26105e4f0fe6c726f5bb602ffc695b409eaf02f04763d"},
- {file = "safetensors-0.4.3-cp37-none-win32.whl", hash = "sha256:0d5ffc6a80f715c30af253e0e288ad1cd97a3d0086c9c87995e5093ebc075e50"},
- {file = "safetensors-0.4.3-cp37-none-win_amd64.whl", hash = "sha256:a11c374eb63a9c16c5ed146457241182f310902bd2a9c18255781bb832b6748b"},
- {file = "safetensors-0.4.3-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:b1e31be7945f66be23f4ec1682bb47faa3df34cb89fc68527de6554d3c4258a4"},
- {file = "safetensors-0.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:03a4447c784917c9bf01d8f2ac5080bc15c41692202cd5f406afba16629e84d6"},
- {file = "safetensors-0.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d244bcafeb1bc06d47cfee71727e775bca88a8efda77a13e7306aae3813fa7e4"},
- {file = "safetensors-0.4.3-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53c4879b9c6bd7cd25d114ee0ef95420e2812e676314300624594940a8d6a91f"},
- {file = "safetensors-0.4.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74707624b81f1b7f2b93f5619d4a9f00934d5948005a03f2c1845ffbfff42212"},
- {file = "safetensors-0.4.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0d52c958dc210265157573f81d34adf54e255bc2b59ded6218500c9b15a750eb"},
- {file = "safetensors-0.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f9568f380f513a60139971169c4a358b8731509cc19112369902eddb33faa4d"},
- {file = "safetensors-0.4.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0d9cd8e1560dfc514b6d7859247dc6a86ad2f83151a62c577428d5102d872721"},
- {file = "safetensors-0.4.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:89f9f17b0dacb913ed87d57afbc8aad85ea42c1085bd5de2f20d83d13e9fc4b2"},
- {file = "safetensors-0.4.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1139eb436fd201c133d03c81209d39ac57e129f5e74e34bb9ab60f8d9b726270"},
- {file = "safetensors-0.4.3-cp38-none-win32.whl", hash = "sha256:d9c289f140a9ae4853fc2236a2ffc9a9f2d5eae0cb673167e0f1b8c18c0961ac"},
- {file = "safetensors-0.4.3-cp38-none-win_amd64.whl", hash = "sha256:622afd28968ef3e9786562d352659a37de4481a4070f4ebac883f98c5836563e"},
- {file = "safetensors-0.4.3-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:8651c7299cbd8b4161a36cd6a322fa07d39cd23535b144d02f1c1972d0c62f3c"},
- {file = "safetensors-0.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e375d975159ac534c7161269de24ddcd490df2157b55c1a6eeace6cbb56903f0"},
- {file = "safetensors-0.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:084fc436e317f83f7071fc6a62ca1c513b2103db325cd09952914b50f51cf78f"},
- {file = "safetensors-0.4.3-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:41a727a7f5e6ad9f1db6951adee21bbdadc632363d79dc434876369a17de6ad6"},
- {file = "safetensors-0.4.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e7dbbde64b6c534548696808a0e01276d28ea5773bc9a2dfb97a88cd3dffe3df"},
- {file = "safetensors-0.4.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bbae3b4b9d997971431c346edbfe6e41e98424a097860ee872721e176040a893"},
- {file = "safetensors-0.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:01e4b22e3284cd866edeabe4f4d896229495da457229408d2e1e4810c5187121"},
- {file = "safetensors-0.4.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0dd37306546b58d3043eb044c8103a02792cc024b51d1dd16bd3dd1f334cb3ed"},
- {file = "safetensors-0.4.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d8815b5e1dac85fc534a97fd339e12404db557878c090f90442247e87c8aeaea"},
- {file = "safetensors-0.4.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e011cc162503c19f4b1fd63dfcddf73739c7a243a17dac09b78e57a00983ab35"},
- {file = "safetensors-0.4.3-cp39-none-win32.whl", hash = "sha256:01feb3089e5932d7e662eda77c3ecc389f97c0883c4a12b5cfdc32b589a811c3"},
- {file = "safetensors-0.4.3-cp39-none-win_amd64.whl", hash = "sha256:3f9cdca09052f585e62328c1c2923c70f46814715c795be65f0b93f57ec98a02"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1b89381517891a7bb7d1405d828b2bf5d75528299f8231e9346b8eba092227f9"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:cd6fff9e56df398abc5866b19a32124815b656613c1c5ec0f9350906fd798aac"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:840caf38d86aa7014fe37ade5d0d84e23dcfbc798b8078015831996ecbc206a3"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9650713b2cfa9537a2baf7dd9fee458b24a0aaaa6cafcea8bdd5fb2b8efdc34"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e4119532cd10dba04b423e0f86aecb96cfa5a602238c0aa012f70c3a40c44b50"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:e066e8861eef6387b7c772344d1fe1f9a72800e04ee9a54239d460c400c72aab"},
- {file = "safetensors-0.4.3-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:90964917f5b0fa0fa07e9a051fbef100250c04d150b7026ccbf87a34a54012e0"},
- {file = "safetensors-0.4.3-pp37-pypy37_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c41e1893d1206aa7054029681778d9a58b3529d4c807002c156d58426c225173"},
- {file = "safetensors-0.4.3-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae7613a119a71a497d012ccc83775c308b9c1dab454806291427f84397d852fd"},
- {file = "safetensors-0.4.3-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f9bac020faba7f5dc481e881b14b6425265feabb5bfc552551d21189c0eddc3"},
- {file = "safetensors-0.4.3-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:420a98f593ff9930f5822560d14c395ccbc57342ddff3b463bc0b3d6b1951550"},
- {file = "safetensors-0.4.3-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f5e6883af9a68c0028f70a4c19d5a6ab6238a379be36ad300a22318316c00cb0"},
- {file = "safetensors-0.4.3-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:cdd0a3b5da66e7f377474599814dbf5cbf135ff059cc73694de129b58a5e8a2c"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:9bfb92f82574d9e58401d79c70c716985dc049b635fef6eecbb024c79b2c46ad"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:3615a96dd2dcc30eb66d82bc76cda2565f4f7bfa89fcb0e31ba3cea8a1a9ecbb"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:868ad1b6fc41209ab6bd12f63923e8baeb1a086814cb2e81a65ed3d497e0cf8f"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7ffba80aa49bd09195145a7fd233a7781173b422eeb995096f2b30591639517"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c0acbe31340ab150423347e5b9cc595867d814244ac14218932a5cf1dd38eb39"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:19bbdf95de2cf64f25cd614c5236c8b06eb2cfa47cbf64311f4b5d80224623a3"},
- {file = "safetensors-0.4.3-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:b852e47eb08475c2c1bd8131207b405793bfc20d6f45aff893d3baaad449ed14"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5d07cbca5b99babb692d76d8151bec46f461f8ad8daafbfd96b2fca40cadae65"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:1ab6527a20586d94291c96e00a668fa03f86189b8a9defa2cdd34a1a01acc7d5"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02318f01e332cc23ffb4f6716e05a492c5f18b1d13e343c49265149396284a44"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec4b52ce9a396260eb9731eb6aea41a7320de22ed73a1042c2230af0212758ce"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:018b691383026a2436a22b648873ed11444a364324e7088b99cd2503dd828400"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:309b10dbcab63269ecbf0e2ca10ce59223bb756ca5d431ce9c9eeabd446569da"},
- {file = "safetensors-0.4.3-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:b277482120df46e27a58082df06a15aebda4481e30a1c21eefd0921ae7e03f65"},
- {file = "safetensors-0.4.3.tar.gz", hash = "sha256:2f85fc50c4e07a21e95c24e07460fe6f7e2859d0ce88092838352b798ce711c2"},
+ {file = "safetensors-0.4.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2adb497ada13097f30e386e88c959c0fda855a5f6f98845710f5bb2c57e14f12"},
+ {file = "safetensors-0.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7db7fdc2d71fd1444d85ca3f3d682ba2df7d61a637dfc6d80793f439eae264ab"},
+ {file = "safetensors-0.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8d4f0eed76b430f009fbefca1a0028ddb112891b03cb556d7440d5cd68eb89a9"},
+ {file = "safetensors-0.4.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:57d216fab0b5c432aabf7170883d7c11671622bde8bd1436c46d633163a703f6"},
+ {file = "safetensors-0.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7d9b76322e49c056bcc819f8bdca37a2daa5a6d42c07f30927b501088db03309"},
+ {file = "safetensors-0.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:32f0d1f6243e90ee43bc6ee3e8c30ac5b09ca63f5dd35dbc985a1fc5208c451a"},
+ {file = "safetensors-0.4.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44d464bdc384874601a177375028012a5f177f1505279f9456fea84bbc575c7f"},
+ {file = "safetensors-0.4.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:63144e36209ad8e4e65384dbf2d52dd5b1866986079c00a72335402a38aacdc5"},
+ {file = "safetensors-0.4.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:051d5ecd490af7245258000304b812825974d5e56f14a3ff7e1b8b2ba6dc2ed4"},
+ {file = "safetensors-0.4.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:51bc8429d9376224cd3cf7e8ce4f208b4c930cd10e515b6ac6a72cbc3370f0d9"},
+ {file = "safetensors-0.4.4-cp310-none-win32.whl", hash = "sha256:fb7b54830cee8cf9923d969e2df87ce20e625b1af2fd194222ab902d3adcc29c"},
+ {file = "safetensors-0.4.4-cp310-none-win_amd64.whl", hash = "sha256:4b3e8aa8226d6560de8c2b9d5ff8555ea482599c670610758afdc97f3e021e9c"},
+ {file = "safetensors-0.4.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:bbaa31f2cb49013818bde319232ccd72da62ee40f7d2aa532083eda5664e85ff"},
+ {file = "safetensors-0.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9fdcb80f4e9fbb33b58e9bf95e7dbbedff505d1bcd1c05f7c7ce883632710006"},
+ {file = "safetensors-0.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:55c14c20be247b8a1aeaf3ab4476265e3ca83096bb8e09bb1a7aa806088def4f"},
+ {file = "safetensors-0.4.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:949aaa1118660f992dbf0968487b3e3cfdad67f948658ab08c6b5762e90cc8b6"},
+ {file = "safetensors-0.4.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c11a4ab7debc456326a2bac67f35ee0ac792bcf812c7562a4a28559a5c795e27"},
+ {file = "safetensors-0.4.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c0cea44bba5c5601b297bc8307e4075535b95163402e4906b2e9b82788a2a6df"},
+ {file = "safetensors-0.4.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9d752c97f6bbe327352f76e5b86442d776abc789249fc5e72eacb49e6916482"},
+ {file = "safetensors-0.4.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:03f2bb92e61b055ef6cc22883ad1ae898010a95730fa988c60a23800eb742c2c"},
+ {file = "safetensors-0.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:87bf3f91a9328a941acc44eceffd4e1f5f89b030985b2966637e582157173b98"},
+ {file = "safetensors-0.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:20d218ec2b6899d29d6895419a58b6e44cc5ff8f0cc29fac8d236a8978ab702e"},
+ {file = "safetensors-0.4.4-cp311-none-win32.whl", hash = "sha256:8079486118919f600c603536e2490ca37b3dbd3280e3ad6eaacfe6264605ac8a"},
+ {file = "safetensors-0.4.4-cp311-none-win_amd64.whl", hash = "sha256:2f8c2eb0615e2e64ee27d478c7c13f51e5329d7972d9e15528d3e4cfc4a08f0d"},
+ {file = "safetensors-0.4.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:baec5675944b4a47749c93c01c73d826ef7d42d36ba8d0dba36336fa80c76426"},
+ {file = "safetensors-0.4.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f15117b96866401825f3e94543145028a2947d19974429246ce59403f49e77c6"},
+ {file = "safetensors-0.4.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a13a9caea485df164c51be4eb0c87f97f790b7c3213d635eba2314d959fe929"},
+ {file = "safetensors-0.4.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6b54bc4ca5f9b9bba8cd4fb91c24b2446a86b5ae7f8975cf3b7a277353c3127c"},
+ {file = "safetensors-0.4.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:08332c22e03b651c8eb7bf5fc2de90044f3672f43403b3d9ac7e7e0f4f76495e"},
+ {file = "safetensors-0.4.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bb62841e839ee992c37bb75e75891c7f4904e772db3691c59daaca5b4ab960e1"},
+ {file = "safetensors-0.4.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e5b927acc5f2f59547270b0309a46d983edc44be64e1ca27a7fcb0474d6cd67"},
+ {file = "safetensors-0.4.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2a69c71b1ae98a8021a09a0b43363b0143b0ce74e7c0e83cacba691b62655fb8"},
+ {file = "safetensors-0.4.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:23654ad162c02a5636f0cd520a0310902c4421aab1d91a0b667722a4937cc445"},
+ {file = "safetensors-0.4.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:0677c109d949cf53756859160b955b2e75b0eefe952189c184d7be30ecf7e858"},
+ {file = "safetensors-0.4.4-cp312-none-win32.whl", hash = "sha256:a51d0ddd4deb8871c6de15a772ef40b3dbd26a3c0451bb9e66bc76fc5a784e5b"},
+ {file = "safetensors-0.4.4-cp312-none-win_amd64.whl", hash = "sha256:2d065059e75a798bc1933c293b68d04d79b586bb7f8c921e0ca1e82759d0dbb1"},
+ {file = "safetensors-0.4.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:9d625692578dd40a112df30c02a1adf068027566abd8e6a74893bb13d441c150"},
+ {file = "safetensors-0.4.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7cabcf39c81e5b988d0adefdaea2eb9b4fd9bd62d5ed6559988c62f36bfa9a89"},
+ {file = "safetensors-0.4.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8359bef65f49d51476e9811d59c015f0ddae618ee0e44144f5595278c9f8268c"},
+ {file = "safetensors-0.4.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1a32c662e7df9226fd850f054a3ead0e4213a96a70b5ce37b2d26ba27004e013"},
+ {file = "safetensors-0.4.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c329a4dcc395364a1c0d2d1574d725fe81a840783dda64c31c5a60fc7d41472c"},
+ {file = "safetensors-0.4.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:239ee093b1db877c9f8fe2d71331a97f3b9c7c0d3ab9f09c4851004a11f44b65"},
+ {file = "safetensors-0.4.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd574145d930cf9405a64f9923600879a5ce51d9f315443a5f706374841327b6"},
+ {file = "safetensors-0.4.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f6784eed29f9e036acb0b7769d9e78a0dc2c72c2d8ba7903005350d817e287a4"},
+ {file = "safetensors-0.4.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:65a4a6072436bf0a4825b1c295d248cc17e5f4651e60ee62427a5bcaa8622a7a"},
+ {file = "safetensors-0.4.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:df81e3407630de060ae8313da49509c3caa33b1a9415562284eaf3d0c7705f9f"},
+ {file = "safetensors-0.4.4-cp37-cp37m-macosx_10_12_x86_64.whl", hash = "sha256:e4a0f374200e8443d9746e947ebb346c40f83a3970e75a685ade0adbba5c48d9"},
+ {file = "safetensors-0.4.4-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:181fb5f3dee78dae7fd7ec57d02e58f7936498d587c6b7c1c8049ef448c8d285"},
+ {file = "safetensors-0.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cb4ac1d8f6b65ec84ddfacd275079e89d9df7c92f95675ba96c4f790a64df6e"},
+ {file = "safetensors-0.4.4-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:76897944cd9239e8a70955679b531b9a0619f76e25476e57ed373322d9c2075d"},
+ {file = "safetensors-0.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2a9e9d1a27e51a0f69e761a3d581c3af46729ec1c988fa1f839e04743026ae35"},
+ {file = "safetensors-0.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:005ef9fc0f47cb9821c40793eb029f712e97278dae84de91cb2b4809b856685d"},
+ {file = "safetensors-0.4.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26987dac3752688c696c77c3576f951dbbdb8c57f0957a41fb6f933cf84c0b62"},
+ {file = "safetensors-0.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c05270b290acd8d249739f40d272a64dd597d5a4b90f27d830e538bc2549303c"},
+ {file = "safetensors-0.4.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:068d3a33711fc4d93659c825a04480ff5a3854e1d78632cdc8f37fee917e8a60"},
+ {file = "safetensors-0.4.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:063421ef08ca1021feea8b46951251b90ae91f899234dd78297cbe7c1db73b99"},
+ {file = "safetensors-0.4.4-cp37-none-win32.whl", hash = "sha256:d52f5d0615ea83fd853d4e1d8acf93cc2e0223ad4568ba1e1f6ca72e94ea7b9d"},
+ {file = "safetensors-0.4.4-cp37-none-win_amd64.whl", hash = "sha256:88a5ac3280232d4ed8e994cbc03b46a1807ce0aa123867b40c4a41f226c61f94"},
+ {file = "safetensors-0.4.4-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:3467ab511bfe3360967d7dc53b49f272d59309e57a067dd2405b4d35e7dcf9dc"},
+ {file = "safetensors-0.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2ab4c96d922e53670ce25fbb9b63d5ea972e244de4fa1dd97b590d9fd66aacef"},
+ {file = "safetensors-0.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87df18fce4440477c3ef1fd7ae17c704a69a74a77e705a12be135ee0651a0c2d"},
+ {file = "safetensors-0.4.4-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0e5fe345b2bc7d88587149ac11def1f629d2671c4c34f5df38aed0ba59dc37f8"},
+ {file = "safetensors-0.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9f1a3e01dce3cd54060791e7e24588417c98b941baa5974700eeb0b8eb65b0a0"},
+ {file = "safetensors-0.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1c6bf35e9a8998d8339fd9a05ac4ce465a4d2a2956cc0d837b67c4642ed9e947"},
+ {file = "safetensors-0.4.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:166c0c52f6488b8538b2a9f3fbc6aad61a7261e170698779b371e81b45f0440d"},
+ {file = "safetensors-0.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:87e9903b8668a16ef02c08ba4ebc91e57a49c481e9b5866e31d798632805014b"},
+ {file = "safetensors-0.4.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:a9c421153aa23c323bd8483d4155b4eee82c9a50ac11cccd83539104a8279c64"},
+ {file = "safetensors-0.4.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a4b8617499b2371c7353302c5116a7e0a3a12da66389ce53140e607d3bf7b3d3"},
+ {file = "safetensors-0.4.4-cp38-none-win32.whl", hash = "sha256:c6280f5aeafa1731f0a3709463ab33d8e0624321593951aefada5472f0b313fd"},
+ {file = "safetensors-0.4.4-cp38-none-win_amd64.whl", hash = "sha256:6ceed6247fc2d33b2a7b7d25d8a0fe645b68798856e0bc7a9800c5fd945eb80f"},
+ {file = "safetensors-0.4.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:5cf6c6f6193797372adf50c91d0171743d16299491c75acad8650107dffa9269"},
+ {file = "safetensors-0.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:419010156b914a3e5da4e4adf992bee050924d0fe423c4b329e523e2c14c3547"},
+ {file = "safetensors-0.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88f6fd5a5c1302ce79993cc5feeadcc795a70f953c762544d01fb02b2db4ea33"},
+ {file = "safetensors-0.4.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d468cffb82d90789696d5b4d8b6ab8843052cba58a15296691a7a3df55143cd2"},
+ {file = "safetensors-0.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9353c2af2dd467333d4850a16edb66855e795561cd170685178f706c80d2c71e"},
+ {file = "safetensors-0.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:83c155b4a33368d9b9c2543e78f2452090fb030c52401ca608ef16fa58c98353"},
+ {file = "safetensors-0.4.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9850754c434e636ce3dc586f534bb23bcbd78940c304775bee9005bf610e98f1"},
+ {file = "safetensors-0.4.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:275f500b4d26f67b6ec05629a4600645231bd75e4ed42087a7c1801bff04f4b3"},
+ {file = "safetensors-0.4.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:5c2308de665b7130cd0e40a2329278226e4cf083f7400c51ca7e19ccfb3886f3"},
+ {file = "safetensors-0.4.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e06a9ebc8656e030ccfe44634f2a541b4b1801cd52e390a53ad8bacbd65f8518"},
+ {file = "safetensors-0.4.4-cp39-none-win32.whl", hash = "sha256:ef73df487b7c14b477016947c92708c2d929e1dee2bacdd6fff5a82ed4539537"},
+ {file = "safetensors-0.4.4-cp39-none-win_amd64.whl", hash = "sha256:83d054818a8d1198d8bd8bc3ea2aac112a2c19def2bf73758321976788706398"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1d1f34c71371f0e034004a0b583284b45d233dd0b5f64a9125e16b8a01d15067"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1a8043a33d58bc9b30dfac90f75712134ca34733ec3d8267b1bd682afe7194f5"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8db8f0c59c84792c12661f8efa85de160f80efe16b87a9d5de91b93f9e0bce3c"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cfc1fc38e37630dd12d519bdec9dcd4b345aec9930bb9ce0ed04461f49e58b52"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e5c9d86d9b13b18aafa88303e2cd21e677f5da2a14c828d2c460fe513af2e9a5"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:43251d7f29a59120a26f5a0d9583b9e112999e500afabcfdcb91606d3c5c89e3"},
+ {file = "safetensors-0.4.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:2c42e9b277513b81cf507e6121c7b432b3235f980cac04f39f435b7902857f91"},
+ {file = "safetensors-0.4.4-pp37-pypy37_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3daacc9a4e3f428a84dd56bf31f20b768eb0b204af891ed68e1f06db9edf546f"},
+ {file = "safetensors-0.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:218bbb9b883596715fc9997bb42470bf9f21bb832c3b34c2bf744d6fa8f2bbba"},
+ {file = "safetensors-0.4.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bd5efc26b39f7fc82d4ab1d86a7f0644c8e34f3699c33f85bfa9a717a030e1b"},
+ {file = "safetensors-0.4.4-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:56ad9776b65d8743f86698a1973292c966cf3abff627efc44ed60e66cc538ddd"},
+ {file = "safetensors-0.4.4-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:30f23e6253c5f43a809dea02dc28a9f5fa747735dc819f10c073fe1b605e97d4"},
+ {file = "safetensors-0.4.4-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:5512078d00263de6cb04e9d26c9ae17611098f52357fea856213e38dc462f81f"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:b96c3d9266439d17f35fc2173111d93afc1162f168e95aed122c1ca517b1f8f1"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:08d464aa72a9a13826946b4fb9094bb4b16554bbea2e069e20bd903289b6ced9"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:210160816d5a36cf41f48f38473b6f70d7bcb4b0527bedf0889cc0b4c3bb07db"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb276a53717f2bcfb6df0bcf284d8a12069002508d4c1ca715799226024ccd45"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a2c28c6487f17d8db0089e8b2cdc13de859366b94cc6cdc50e1b0a4147b56551"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:7915f0c60e4e6e65d90f136d85dd3b429ae9191c36b380e626064694563dbd9f"},
+ {file = "safetensors-0.4.4-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:00eea99ae422fbfa0b46065acbc58b46bfafadfcec179d4b4a32d5c45006af6c"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:bb1ed4fcb0b3c2f3ea2c5767434622fe5d660e5752f21ac2e8d737b1e5e480bb"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:73fc9a0a4343188bdb421783e600bfaf81d0793cd4cce6bafb3c2ed567a74cd5"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c37e6b714200824c73ca6eaf007382de76f39466a46e97558b8dc4cf643cfbf"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f75698c5c5c542417ac4956acfc420f7d4a2396adca63a015fd66641ea751759"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ca1a209157f242eb183e209040097118472e169f2e069bfbd40c303e24866543"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:177f2b60a058f92a3cec7a1786c9106c29eca8987ecdfb79ee88126e5f47fa31"},
+ {file = "safetensors-0.4.4-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:ee9622e84fe6e4cd4f020e5fda70d6206feff3157731df7151d457fdae18e541"},
+ {file = "safetensors-0.4.4.tar.gz", hash = "sha256:5fe3e9b705250d0172ed4e100a811543108653fb2b66b9e702a088ad03772a07"},
]
[package.extras]
@@ -7602,13 +7645,13 @@ tornado = ["tornado (>=5)"]
[[package]]
name = "setuptools"
-version = "71.1.0"
+version = "72.1.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false
python-versions = ">=3.8"
files = [
- {file = "setuptools-71.1.0-py3-none-any.whl", hash = "sha256:33874fdc59b3188304b2e7c80d9029097ea31627180896fb549c578ceb8a0855"},
- {file = "setuptools-71.1.0.tar.gz", hash = "sha256:032d42ee9fb536e33087fb66cac5f840eb9391ed05637b3f2a76a7c8fb477936"},
+ {file = "setuptools-72.1.0-py3-none-any.whl", hash = "sha256:5a03e1860cf56bb6ef48ce186b0e557fdba433237481a9a625176c2831be15d1"},
+ {file = "setuptools-72.1.0.tar.gz", hash = "sha256:8d243eff56d095e5817f796ede6ae32941278f542e0f941867cc05ae52b162ec"},
]
[package.extras]
@@ -7752,60 +7795,60 @@ files = [
[[package]]
name = "sqlalchemy"
-version = "2.0.31"
+version = "2.0.32"
description = "Database Abstraction Library"
optional = false
python-versions = ">=3.7"
files = [
- {file = "SQLAlchemy-2.0.31-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f2a213c1b699d3f5768a7272de720387ae0122f1becf0901ed6eaa1abd1baf6c"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9fea3d0884e82d1e33226935dac990b967bef21315cbcc894605db3441347443"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3ad7f221d8a69d32d197e5968d798217a4feebe30144986af71ada8c548e9fa"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f2bee229715b6366f86a95d497c347c22ddffa2c7c96143b59a2aa5cc9eebbc"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cd5b94d4819c0c89280b7c6109c7b788a576084bf0a480ae17c227b0bc41e109"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:750900a471d39a7eeba57580b11983030517a1f512c2cb287d5ad0fcf3aebd58"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-win32.whl", hash = "sha256:7bd112be780928c7f493c1a192cd8c5fc2a2a7b52b790bc5a84203fb4381c6be"},
- {file = "SQLAlchemy-2.0.31-cp310-cp310-win_amd64.whl", hash = "sha256:5a48ac4d359f058474fadc2115f78a5cdac9988d4f99eae44917f36aa1476327"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f68470edd70c3ac3b6cd5c2a22a8daf18415203ca1b036aaeb9b0fb6f54e8298"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2e2c38c2a4c5c634fe6c3c58a789712719fa1bf9b9d6ff5ebfce9a9e5b89c1ca"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd15026f77420eb2b324dcb93551ad9c5f22fab2c150c286ef1dc1160f110203"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2196208432deebdfe3b22185d46b08f00ac9d7b01284e168c212919891289396"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:352b2770097f41bff6029b280c0e03b217c2dcaddc40726f8f53ed58d8a85da4"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:56d51ae825d20d604583f82c9527d285e9e6d14f9a5516463d9705dab20c3740"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-win32.whl", hash = "sha256:6e2622844551945db81c26a02f27d94145b561f9d4b0c39ce7bfd2fda5776dac"},
- {file = "SQLAlchemy-2.0.31-cp311-cp311-win_amd64.whl", hash = "sha256:ccaf1b0c90435b6e430f5dd30a5aede4764942a695552eb3a4ab74ed63c5b8d3"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:3b74570d99126992d4b0f91fb87c586a574a5872651185de8297c6f90055ae42"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f77c4f042ad493cb8595e2f503c7a4fe44cd7bd59c7582fd6d78d7e7b8ec52c"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cd1591329333daf94467e699e11015d9c944f44c94d2091f4ac493ced0119449"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:74afabeeff415e35525bf7a4ecdab015f00e06456166a2eba7590e49f8db940e"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b9c01990d9015df2c6f818aa8f4297d42ee71c9502026bb074e713d496e26b67"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:66f63278db425838b3c2b1c596654b31939427016ba030e951b292e32b99553e"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-win32.whl", hash = "sha256:0b0f658414ee4e4b8cbcd4a9bb0fd743c5eeb81fc858ca517217a8013d282c96"},
- {file = "SQLAlchemy-2.0.31-cp312-cp312-win_amd64.whl", hash = "sha256:fa4b1af3e619b5b0b435e333f3967612db06351217c58bfb50cee5f003db2a5a"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:f43e93057cf52a227eda401251c72b6fbe4756f35fa6bfebb5d73b86881e59b0"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d337bf94052856d1b330d5fcad44582a30c532a2463776e1651bd3294ee7e58b"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c06fb43a51ccdff3b4006aafee9fcf15f63f23c580675f7734245ceb6b6a9e05"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:b6e22630e89f0e8c12332b2b4c282cb01cf4da0d26795b7eae16702a608e7ca1"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:79a40771363c5e9f3a77f0e28b3302801db08040928146e6808b5b7a40749c88"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-win32.whl", hash = "sha256:501ff052229cb79dd4c49c402f6cb03b5a40ae4771efc8bb2bfac9f6c3d3508f"},
- {file = "SQLAlchemy-2.0.31-cp37-cp37m-win_amd64.whl", hash = "sha256:597fec37c382a5442ffd471f66ce12d07d91b281fd474289356b1a0041bdf31d"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dc6d69f8829712a4fd799d2ac8d79bdeff651c2301b081fd5d3fe697bd5b4ab9"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:23b9fbb2f5dd9e630db70fbe47d963c7779e9c81830869bd7d137c2dc1ad05fb"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a21c97efcbb9f255d5c12a96ae14da873233597dfd00a3a0c4ce5b3e5e79704"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26a6a9837589c42b16693cf7bf836f5d42218f44d198f9343dd71d3164ceeeac"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:dc251477eae03c20fae8db9c1c23ea2ebc47331bcd73927cdcaecd02af98d3c3"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:2fd17e3bb8058359fa61248c52c7b09a97cf3c820e54207a50af529876451808"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-win32.whl", hash = "sha256:c76c81c52e1e08f12f4b6a07af2b96b9b15ea67ccdd40ae17019f1c373faa227"},
- {file = "SQLAlchemy-2.0.31-cp38-cp38-win_amd64.whl", hash = "sha256:4b600e9a212ed59355813becbcf282cfda5c93678e15c25a0ef896b354423238"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b6cf796d9fcc9b37011d3f9936189b3c8074a02a4ed0c0fbbc126772c31a6d4"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:78fe11dbe37d92667c2c6e74379f75746dc947ee505555a0197cfba9a6d4f1a4"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2fc47dc6185a83c8100b37acda27658fe4dbd33b7d5e7324111f6521008ab4fe"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a41514c1a779e2aa9a19f67aaadeb5cbddf0b2b508843fcd7bafdf4c6864005"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:afb6dde6c11ea4525318e279cd93c8734b795ac8bb5dda0eedd9ebaca7fa23f1"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:3f9faef422cfbb8fd53716cd14ba95e2ef655400235c3dfad1b5f467ba179c8c"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-win32.whl", hash = "sha256:fc6b14e8602f59c6ba893980bea96571dd0ed83d8ebb9c4479d9ed5425d562e9"},
- {file = "SQLAlchemy-2.0.31-cp39-cp39-win_amd64.whl", hash = "sha256:3cb8a66b167b033ec72c3812ffc8441d4e9f5f78f5e31e54dcd4c90a4ca5bebc"},
- {file = "SQLAlchemy-2.0.31-py3-none-any.whl", hash = "sha256:69f3e3c08867a8e4856e92d7afb618b95cdee18e0bc1647b77599722c9a28911"},
- {file = "SQLAlchemy-2.0.31.tar.gz", hash = "sha256:b607489dd4a54de56984a0c7656247504bd5523d9d0ba799aef59d4add009484"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0c9045ecc2e4db59bfc97b20516dfdf8e41d910ac6fb667ebd3a79ea54084619"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1467940318e4a860afd546ef61fefb98a14d935cd6817ed07a228c7f7c62f389"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5954463675cb15db8d4b521f3566a017c8789222b8316b1e6934c811018ee08b"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:167e7497035c303ae50651b351c28dc22a40bb98fbdb8468cdc971821b1ae533"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:b27dfb676ac02529fb6e343b3a482303f16e6bc3a4d868b73935b8792edb52d0"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:bf2360a5e0f7bd75fa80431bf8ebcfb920c9f885e7956c7efde89031695cafb8"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-win32.whl", hash = "sha256:306fe44e754a91cd9d600a6b070c1f2fadbb4a1a257b8781ccf33c7067fd3e4d"},
+ {file = "SQLAlchemy-2.0.32-cp310-cp310-win_amd64.whl", hash = "sha256:99db65e6f3ab42e06c318f15c98f59a436f1c78179e6a6f40f529c8cc7100b22"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:21b053be28a8a414f2ddd401f1be8361e41032d2ef5884b2f31d31cb723e559f"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b178e875a7a25b5938b53b006598ee7645172fccafe1c291a706e93f48499ff5"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:723a40ee2cc7ea653645bd4cf024326dea2076673fc9d3d33f20f6c81db83e1d"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:295ff8689544f7ee7e819529633d058bd458c1fd7f7e3eebd0f9268ebc56c2a0"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:49496b68cd190a147118af585173ee624114dfb2e0297558c460ad7495f9dfe2"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:acd9b73c5c15f0ec5ce18128b1fe9157ddd0044abc373e6ecd5ba376a7e5d961"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-win32.whl", hash = "sha256:9365a3da32dabd3e69e06b972b1ffb0c89668994c7e8e75ce21d3e5e69ddef28"},
+ {file = "SQLAlchemy-2.0.32-cp311-cp311-win_amd64.whl", hash = "sha256:8bd63d051f4f313b102a2af1cbc8b80f061bf78f3d5bd0843ff70b5859e27924"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:6bab3db192a0c35e3c9d1560eb8332463e29e5507dbd822e29a0a3c48c0a8d92"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:19d98f4f58b13900d8dec4ed09dd09ef292208ee44cc9c2fe01c1f0a2fe440e9"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cd33c61513cb1b7371fd40cf221256456d26a56284e7d19d1f0b9f1eb7dd7e8"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d6ba0497c1d066dd004e0f02a92426ca2df20fac08728d03f67f6960271feec"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2b6be53e4fde0065524f1a0a7929b10e9280987b320716c1509478b712a7688c"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:916a798f62f410c0b80b63683c8061f5ebe237b0f4ad778739304253353bc1cb"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-win32.whl", hash = "sha256:31983018b74908ebc6c996a16ad3690301a23befb643093fcfe85efd292e384d"},
+ {file = "SQLAlchemy-2.0.32-cp312-cp312-win_amd64.whl", hash = "sha256:4363ed245a6231f2e2957cccdda3c776265a75851f4753c60f3004b90e69bfeb"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b8afd5b26570bf41c35c0121801479958b4446751a3971fb9a480c1afd85558e"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c750987fc876813f27b60d619b987b057eb4896b81117f73bb8d9918c14f1cad"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ada0102afff4890f651ed91120c1120065663506b760da4e7823913ebd3258be"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:78c03d0f8a5ab4f3034c0e8482cfcc415a3ec6193491cfa1c643ed707d476f16"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:3bd1cae7519283ff525e64645ebd7a3e0283f3c038f461ecc1c7b040a0c932a1"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-win32.whl", hash = "sha256:01438ebcdc566d58c93af0171c74ec28efe6a29184b773e378a385e6215389da"},
+ {file = "SQLAlchemy-2.0.32-cp37-cp37m-win_amd64.whl", hash = "sha256:4979dc80fbbc9d2ef569e71e0896990bc94df2b9fdbd878290bd129b65ab579c"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c742be912f57586ac43af38b3848f7688863a403dfb220193a882ea60e1ec3a"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:62e23d0ac103bcf1c5555b6c88c114089587bc64d048fef5bbdb58dfd26f96da"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:251f0d1108aab8ea7b9aadbd07fb47fb8e3a5838dde34aa95a3349876b5a1f1d"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ef18a84e5116340e38eca3e7f9eeaaef62738891422e7c2a0b80feab165905f"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:3eb6a97a1d39976f360b10ff208c73afb6a4de86dd2a6212ddf65c4a6a2347d5"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:0c1c9b673d21477cec17ab10bc4decb1322843ba35b481585facd88203754fc5"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-win32.whl", hash = "sha256:c41a2b9ca80ee555decc605bd3c4520cc6fef9abde8fd66b1cf65126a6922d65"},
+ {file = "SQLAlchemy-2.0.32-cp38-cp38-win_amd64.whl", hash = "sha256:8a37e4d265033c897892279e8adf505c8b6b4075f2b40d77afb31f7185cd6ecd"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:52fec964fba2ef46476312a03ec8c425956b05c20220a1a03703537824b5e8e1"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:328429aecaba2aee3d71e11f2477c14eec5990fb6d0e884107935f7fb6001632"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85a01b5599e790e76ac3fe3aa2f26e1feba56270023d6afd5550ed63c68552b3"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aaf04784797dcdf4c0aa952c8d234fa01974c4729db55c45732520ce12dd95b4"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4488120becf9b71b3ac718f4138269a6be99a42fe023ec457896ba4f80749525"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:14e09e083a5796d513918a66f3d6aedbc131e39e80875afe81d98a03312889e6"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-win32.whl", hash = "sha256:0d322cc9c9b2154ba7e82f7bf25ecc7c36fbe2d82e2933b3642fc095a52cfc78"},
+ {file = "SQLAlchemy-2.0.32-cp39-cp39-win_amd64.whl", hash = "sha256:7dd8583df2f98dea28b5cd53a1beac963f4f9d087888d75f22fcc93a07cf8d84"},
+ {file = "SQLAlchemy-2.0.32-py3-none-any.whl", hash = "sha256:e567a8793a692451f706b363ccf3c45e056b67d90ead58c3bc9471af5d212202"},
+ {file = "SQLAlchemy-2.0.32.tar.gz", hash = "sha256:c1b88cc8b02b6a5f0efb0345a03672d4c897dc7d92585176f88c67346f565ea8"},
]
[package.dependencies]
@@ -7871,17 +7914,20 @@ full = ["httpx (>=0.22.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.7
[[package]]
name = "sympy"
-version = "1.12"
+version = "1.13.1"
description = "Computer algebra system (CAS) in Python"
optional = false
python-versions = ">=3.8"
files = [
- {file = "sympy-1.12-py3-none-any.whl", hash = "sha256:c3588cd4295d0c0f603d0f2ae780587e64e2efeedb3521e46b9bb1d08d184fa5"},
- {file = "sympy-1.12.tar.gz", hash = "sha256:ebf595c8dac3e0fdc4152c51878b498396ec7f30e7a914d6071e674d49420fb8"},
+ {file = "sympy-1.13.1-py3-none-any.whl", hash = "sha256:db36cdc64bf61b9b24578b6f7bab1ecdd2452cf008f34faa33776680c26d66f8"},
+ {file = "sympy-1.13.1.tar.gz", hash = "sha256:9cebf7e04ff162015ce31c9c6c9144daa34a93bd082f54fd8f12deca4f47515f"},
]
[package.dependencies]
-mpmath = ">=0.19"
+mpmath = ">=1.1.0,<1.4"
+
+[package.extras]
+dev = ["hypothesis (>=6.70.0)", "pytest (>=7.1.0)"]
[[package]]
name = "tabulate"
@@ -7914,13 +7960,13 @@ requests = "*"
[[package]]
name = "tenacity"
-version = "8.3.0"
+version = "9.0.0"
description = "Retry code until it succeeds"
optional = false
python-versions = ">=3.8"
files = [
- {file = "tenacity-8.3.0-py3-none-any.whl", hash = "sha256:3649f6443dbc0d9b01b9d8020a9c4ec7a1ff5f6f3c6c8a036ef371f573fe9185"},
- {file = "tenacity-8.3.0.tar.gz", hash = "sha256:953d4e6ad24357bceffbc9707bc74349aca9d245f68eb65419cf0c249a1949a2"},
+ {file = "tenacity-9.0.0-py3-none-any.whl", hash = "sha256:93de0c98785b27fcf659856aa9f54bfbd399e29969b0621bc7f762bd441b4539"},
+ {file = "tenacity-9.0.0.tar.gz", hash = "sha256:807f37ca97d62aa361264d497b0e31e92b8027044942bfa756160d908320d73b"},
]
[package.extras]
@@ -7929,13 +7975,13 @@ test = ["pytest", "tornado (>=4.5)", "typeguard"]
[[package]]
name = "tencentcloud-sdk-python-common"
-version = "3.0.1196"
+version = "3.0.1206"
description = "Tencent Cloud Common SDK for Python"
optional = false
python-versions = "*"
files = [
- {file = "tencentcloud-sdk-python-common-3.0.1196.tar.gz", hash = "sha256:a8acd14f7480987ff0fd1d961ad934b2b7533ab1937d7e3adb74d95dc49954bd"},
- {file = "tencentcloud_sdk_python_common-3.0.1196-py2.py3-none-any.whl", hash = "sha256:5ed438bc3e2818ca8e84b3896aaa2746798fba981bd94b27528eb36efa5b4a30"},
+ {file = "tencentcloud-sdk-python-common-3.0.1206.tar.gz", hash = "sha256:e32745e6d46b94b2c2c33cd68c7e70bff3d63e8e5e5d314bb0b41616521c90f2"},
+ {file = "tencentcloud_sdk_python_common-3.0.1206-py2.py3-none-any.whl", hash = "sha256:2100697933d62135b093bae43eee0f8862b45ca0597da72779e304c9b392ac96"},
]
[package.dependencies]
@@ -7943,17 +7989,17 @@ requests = ">=2.16.0"
[[package]]
name = "tencentcloud-sdk-python-hunyuan"
-version = "3.0.1196"
+version = "3.0.1206"
description = "Tencent Cloud Hunyuan SDK for Python"
optional = false
python-versions = "*"
files = [
- {file = "tencentcloud-sdk-python-hunyuan-3.0.1196.tar.gz", hash = "sha256:ced26497ae5f1b8fcc6cbd12238109274251e82fa1cfedfd6700df776306a36c"},
- {file = "tencentcloud_sdk_python_hunyuan-3.0.1196-py2.py3-none-any.whl", hash = "sha256:d18a19cffeaf4ff8a60670dc2bdb644f3d7ae6a51c30d21b50ded24a9c542248"},
+ {file = "tencentcloud-sdk-python-hunyuan-3.0.1206.tar.gz", hash = "sha256:2c37f2f50e54d23905d91d7a511a217317d944c701127daae548b7275cc32968"},
+ {file = "tencentcloud_sdk_python_hunyuan-3.0.1206-py2.py3-none-any.whl", hash = "sha256:c650315bb5863f28d410fa1062122550d8015600947d04d95e2bff55d0590acc"},
]
[package.dependencies]
-tencentcloud-sdk-python-common = "3.0.1196"
+tencentcloud-sdk-python-common = "3.0.1206"
[[package]]
name = "threadpoolctl"
@@ -8217,13 +8263,13 @@ files = [
[[package]]
name = "tqdm"
-version = "4.66.4"
+version = "4.66.5"
description = "Fast, Extensible Progress Meter"
optional = false
python-versions = ">=3.7"
files = [
- {file = "tqdm-4.66.4-py3-none-any.whl", hash = "sha256:b75ca56b413b030bc3f00af51fd2c1a1a5eac6a0c1cca83cbb37a5c52abce644"},
- {file = "tqdm-4.66.4.tar.gz", hash = "sha256:e4d936c9de8727928f3be6079590e97d9abfe8d39a590be678eb5919ffc186bb"},
+ {file = "tqdm-4.66.5-py3-none-any.whl", hash = "sha256:90279a3770753eafc9194a0364852159802111925aa30eb3f9d85b0e805ac7cd"},
+ {file = "tqdm-4.66.5.tar.gz", hash = "sha256:e1020aef2e5096702d8a025ac7d16b1577279c9d63f8375b63083e9a5f0fcbad"},
]
[package.dependencies]
@@ -8605,13 +8651,13 @@ zstd = ["zstandard (>=0.18.0)"]
[[package]]
name = "uvicorn"
-version = "0.30.3"
+version = "0.30.5"
description = "The lightning-fast ASGI server."
optional = false
python-versions = ">=3.8"
files = [
- {file = "uvicorn-0.30.3-py3-none-any.whl", hash = "sha256:94a3608da0e530cea8f69683aa4126364ac18e3826b6630d1a65f4638aade503"},
- {file = "uvicorn-0.30.3.tar.gz", hash = "sha256:0d114d0831ff1adbf231d358cbf42f17333413042552a624ea6a9b4c33dcfd81"},
+ {file = "uvicorn-0.30.5-py3-none-any.whl", hash = "sha256:b2d86de274726e9878188fa07576c9ceeff90a839e2b6e25c917fe05f5a6c835"},
+ {file = "uvicorn-0.30.5.tar.gz", hash = "sha256:ac6fdbd4425c5fd17a9fe39daf4d4d075da6fdc80f653e5894cdc2fd98752bee"},
]
[package.dependencies]
@@ -9098,13 +9144,13 @@ h11 = ">=0.9.0,<1"
[[package]]
name = "xinference-client"
-version = "0.9.4"
+version = "0.13.3"
description = "Client for Xinference"
optional = false
python-versions = "*"
files = [
- {file = "xinference-client-0.9.4.tar.gz", hash = "sha256:21934bc9f3142ade66aaed33c2b6cf244c274d5b4b3163f9981bebdddacf205f"},
- {file = "xinference_client-0.9.4-py3-none-any.whl", hash = "sha256:6d3f1df3537a011f0afee5f9c9ca4f3ff564ca32cc999cf7038b324c0b907d0c"},
+ {file = "xinference-client-0.13.3.tar.gz", hash = "sha256:822b722100affdff049c27760be7d62ac92de58c87a40d3361066df446ba648f"},
+ {file = "xinference_client-0.13.3-py3-none-any.whl", hash = "sha256:f0eff3858b1ebcef2129726f82b09259c177e11db466a7ca23def3d4849c419f"},
]
[package.dependencies]
@@ -9336,47 +9382,45 @@ test = ["zope.testrunner"]
[[package]]
name = "zope-interface"
-version = "6.4.post2"
+version = "7.0.1"
description = "Interfaces for Python"
optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
files = [
- {file = "zope.interface-6.4.post2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2eccd5bef45883802848f821d940367c1d0ad588de71e5cabe3813175444202c"},
- {file = "zope.interface-6.4.post2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:762e616199f6319bb98e7f4f27d254c84c5fb1c25c908c2a9d0f92b92fb27530"},
- {file = "zope.interface-6.4.post2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ef8356f16b1a83609f7a992a6e33d792bb5eff2370712c9eaae0d02e1924341"},
- {file = "zope.interface-6.4.post2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e4fa5d34d7973e6b0efa46fe4405090f3b406f64b6290facbb19dcbf642ad6b"},
- {file = "zope.interface-6.4.post2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d22fce0b0f5715cdac082e35a9e735a1752dc8585f005d045abb1a7c20e197f9"},
- {file = "zope.interface-6.4.post2-cp310-cp310-win_amd64.whl", hash = "sha256:97e615eab34bd8477c3f34197a17ce08c648d38467489359cb9eb7394f1083f7"},
- {file = "zope.interface-6.4.post2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:599f3b07bde2627e163ce484d5497a54a0a8437779362395c6b25e68c6590ede"},
- {file = "zope.interface-6.4.post2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:136cacdde1a2c5e5bc3d0b2a1beed733f97e2dad8c2ad3c2e17116f6590a3827"},
- {file = "zope.interface-6.4.post2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47937cf2e7ed4e0e37f7851c76edeb8543ec9b0eae149b36ecd26176ff1ca874"},
- {file = "zope.interface-6.4.post2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f0a6be264afb094975b5ef55c911379d6989caa87c4e558814ec4f5125cfa2e"},
- {file = "zope.interface-6.4.post2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47654177e675bafdf4e4738ce58cdc5c6d6ee2157ac0a78a3fa460942b9d64a8"},
- {file = "zope.interface-6.4.post2-cp311-cp311-win_amd64.whl", hash = "sha256:e2fb8e8158306567a3a9a41670c1ff99d0567d7fc96fa93b7abf8b519a46b250"},
- {file = "zope.interface-6.4.post2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b912750b13d76af8aac45ddf4679535def304b2a48a07989ec736508d0bbfbde"},
- {file = "zope.interface-6.4.post2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4ac46298e0143d91e4644a27a769d1388d5d89e82ee0cf37bf2b0b001b9712a4"},
- {file = "zope.interface-6.4.post2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:86a94af4a88110ed4bb8961f5ac72edf782958e665d5bfceaab6bf388420a78b"},
- {file = "zope.interface-6.4.post2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:73f9752cf3596771c7726f7eea5b9e634ad47c6d863043589a1c3bb31325c7eb"},
- {file = "zope.interface-6.4.post2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00b5c3e9744dcdc9e84c24ed6646d5cf0cf66551347b310b3ffd70f056535854"},
- {file = "zope.interface-6.4.post2-cp312-cp312-win_amd64.whl", hash = "sha256:551db2fe892fcbefb38f6f81ffa62de11090c8119fd4e66a60f3adff70751ec7"},
- {file = "zope.interface-6.4.post2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e96ac6b3169940a8cd57b4f2b8edcad8f5213b60efcd197d59fbe52f0accd66e"},
- {file = "zope.interface-6.4.post2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cebff2fe5dc82cb22122e4e1225e00a4a506b1a16fafa911142ee124febf2c9e"},
- {file = "zope.interface-6.4.post2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33ee982237cffaf946db365c3a6ebaa37855d8e3ca5800f6f48890209c1cfefc"},
- {file = "zope.interface-6.4.post2-cp37-cp37m-macosx_11_0_x86_64.whl", hash = "sha256:fbf649bc77510ef2521cf797700b96167bb77838c40780da7ea3edd8b78044d1"},
- {file = "zope.interface-6.4.post2-cp37-cp37m-win_amd64.whl", hash = "sha256:4c0b208a5d6c81434bdfa0f06d9b667e5de15af84d8cae5723c3a33ba6611b82"},
- {file = "zope.interface-6.4.post2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d3fe667935e9562407c2511570dca14604a654988a13d8725667e95161d92e9b"},
- {file = "zope.interface-6.4.post2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a96e6d4074db29b152222c34d7eec2e2db2f92638d2b2b2c704f9e8db3ae0edc"},
- {file = "zope.interface-6.4.post2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:866a0f583be79f0def667a5d2c60b7b4cc68f0c0a470f227e1122691b443c934"},
- {file = "zope.interface-6.4.post2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5fe919027f29b12f7a2562ba0daf3e045cb388f844e022552a5674fcdf5d21f1"},
- {file = "zope.interface-6.4.post2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e0343a6e06d94f6b6ac52fbc75269b41dd3c57066541a6c76517f69fe67cb43"},
- {file = "zope.interface-6.4.post2-cp38-cp38-win_amd64.whl", hash = "sha256:dabb70a6e3d9c22df50e08dc55b14ca2a99da95a2d941954255ac76fd6982bc5"},
- {file = "zope.interface-6.4.post2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:706efc19f9679a1b425d6fa2b4bc770d976d0984335eaea0869bd32f627591d2"},
- {file = "zope.interface-6.4.post2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3d136e5b8821073e1a09dde3eb076ea9988e7010c54ffe4d39701adf0c303438"},
- {file = "zope.interface-6.4.post2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1730c93a38b5a18d24549bc81613223962a19d457cfda9bdc66e542f475a36f4"},
- {file = "zope.interface-6.4.post2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bc2676312cc3468a25aac001ec727168994ea3b69b48914944a44c6a0b251e79"},
- {file = "zope.interface-6.4.post2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a62fd6cd518693568e23e02f41816adedfca637f26716837681c90b36af3671"},
- {file = "zope.interface-6.4.post2-cp39-cp39-win_amd64.whl", hash = "sha256:d3f7e001328bd6466b3414215f66dde3c7c13d8025a9c160a75d7b2687090d15"},
- {file = "zope.interface-6.4.post2.tar.gz", hash = "sha256:1c207e6f6dfd5749a26f5a5fd966602d6b824ec00d2df84a7e9a924e8933654e"},
+ {file = "zope.interface-7.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ec4e87e6fdc511a535254daa122c20e11959ce043b4e3425494b237692a34f1c"},
+ {file = "zope.interface-7.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:51d5713e8e38f2d3ec26e0dfdca398ed0c20abda2eb49ffc15a15a23eb8e5f6d"},
+ {file = "zope.interface-7.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea8d51e5eb29e57d34744369cd08267637aa5a0fefc9b5d33775ab7ff2ebf2e3"},
+ {file = "zope.interface-7.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:55bbcc74dc0c7ab489c315c28b61d7a1d03cf938cc99cc58092eb065f120c3a5"},
+ {file = "zope.interface-7.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10ebac566dd0cec66f942dc759d46a994a2b3ba7179420f0e2130f88f8a5f400"},
+ {file = "zope.interface-7.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:7039e624bcb820f77cc2ff3d1adcce531932990eee16121077eb51d9c76b6c14"},
+ {file = "zope.interface-7.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:03bd5c0db82237bbc47833a8b25f1cc090646e212f86b601903d79d7e6b37031"},
+ {file = "zope.interface-7.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f52050c6a10d4a039ec6f2c58e5b3ade5cc570d16cf9d102711e6b8413c90e6"},
+ {file = "zope.interface-7.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:af0b33f04677b57843d529b9257a475d2865403300b48c67654c40abac2f9f24"},
+ {file = "zope.interface-7.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:696c2a381fc7876b3056711717dba5eddd07c2c9e5ccd50da54029a1293b6e43"},
+ {file = "zope.interface-7.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f89a420cf5a6f2aa7849dd59e1ff0e477f562d97cf8d6a1ee03461e1eec39887"},
+ {file = "zope.interface-7.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:b59deb0ddc7b431e41d720c00f99d68b52cb9bd1d5605a085dc18f502fe9c47f"},
+ {file = "zope.interface-7.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:52f5253cca1b35eaeefa51abd366b87f48f8714097c99b131ba61f3fdbbb58e7"},
+ {file = "zope.interface-7.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:88d108d004e0df25224de77ce349a7e73494ea2cb194031f7c9687e68a88ec9b"},
+ {file = "zope.interface-7.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c203d82069ba31e1f3bc7ba530b2461ec86366cd4bfc9b95ec6ce58b1b559c34"},
+ {file = "zope.interface-7.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3f3495462bc0438b76536a0e10d765b168ae636092082531b88340dc40dcd118"},
+ {file = "zope.interface-7.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:192b7a792e3145ed880ff6b1a206fdb783697cfdb4915083bfca7065ec845e60"},
+ {file = "zope.interface-7.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:400d06c9ec8dbcc96f56e79376297e7be07a315605c9a2208720da263d44d76f"},
+ {file = "zope.interface-7.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8c1dff87b30fd150c61367d0e2cdc49bb55f8b9fd2a303560bbc24b951573ae1"},
+ {file = "zope.interface-7.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f749ca804648d00eda62fe1098f229b082dfca930d8bad8386e572a6eafa7525"},
+ {file = "zope.interface-7.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4ec212037becf6d2f705b7ed4538d56980b1e7bba237df0d8995cbbed29961dc"},
+ {file = "zope.interface-7.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d33cb526efdc235a2531433fc1287fcb80d807d5b401f9b801b78bf22df560dd"},
+ {file = "zope.interface-7.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b419f2144e1762ab845f20316f1df36b15431f2622ebae8a6d5f7e8e712b413c"},
+ {file = "zope.interface-7.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03f1452d5d1f279184d5bdb663a3dc39902d9320eceb63276240791e849054b6"},
+ {file = "zope.interface-7.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ba4b3638d014918b918aa90a9c8370bd74a03abf8fcf9deb353b3a461a59a84"},
+ {file = "zope.interface-7.0.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc0615351221926a36a0fbcb2520fb52e0b23e8c22a43754d9cb8f21358c33c0"},
+ {file = "zope.interface-7.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:ce6cbb852fb8f2f9bb7b9cdca44e2e37bce783b5f4c167ff82cb5f5128163c8f"},
+ {file = "zope.interface-7.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5566fd9271c89ad03d81b0831c37d46ae5e2ed211122c998637130159a120cf1"},
+ {file = "zope.interface-7.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:da0cef4d7e3f19c3bd1d71658d6900321af0492fee36ec01b550a10924cffb9c"},
+ {file = "zope.interface-7.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f32ca483e6ade23c7caaee9d5ee5d550cf4146e9b68d2fb6c68bac183aa41c37"},
+ {file = "zope.interface-7.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da21e7eec49252df34d426c2ee9cf0361c923026d37c24728b0fa4cc0599fd03"},
+ {file = "zope.interface-7.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a8195b99e650e6f329ce4e5eb22d448bdfef0406404080812bc96e2a05674cb"},
+ {file = "zope.interface-7.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:19c829d52e921b9fe0b2c0c6a8f9a2508c49678ee1be598f87d143335b6a35dc"},
+ {file = "zope.interface-7.0.1.tar.gz", hash = "sha256:f0f5fda7cbf890371a59ab1d06512da4f2c89a6ea194e595808123c863c38eff"},
]
[package.dependencies]
@@ -9501,5 +9545,5 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.0"
-python-versions = "^3.10"
-content-hash = "a8b61d74d9322302b7447b6f8728ad606abc160202a8a122a05a8ef3cec7055b"
+python-versions = ">=3.10,<3.13"
+content-hash = "2b822039247a445f72e04e967aef84f841781e2789b70071acad022f36ba26a5"
diff --git a/api/pyproject.toml b/api/pyproject.toml
index 25778f323d..058d67c42f 100644
--- a/api/pyproject.toml
+++ b/api/pyproject.toml
@@ -1,5 +1,5 @@
[project]
-requires-python = ">=3.10"
+requires-python = ">=3.10,<3.13"
[build-system]
requires = ["poetry-core"]
@@ -73,6 +73,7 @@ quote-style = "single"
[tool.pytest_env]
OPENAI_API_KEY = "sk-IamNotARealKeyJustForMockTestKawaiiiiiiiiii"
+UPSTAGE_API_KEY = "up-aaaaaaaaaaaaaaaaaaaa"
AZURE_OPENAI_API_BASE = "https://difyai-openai.openai.azure.com"
AZURE_OPENAI_API_KEY = "xxxxb1707exxxxxxxxxxaaxxxxxf94"
ANTHROPIC_API_KEY = "sk-ant-api11-IamNotARealKeyJustForMockTestKawaiiiiiiiiii-NotBaka-ASkksz"
@@ -92,6 +93,8 @@ CODE_MAX_STRING_LENGTH = "80000"
CODE_EXECUTION_ENDPOINT = "http://127.0.0.1:8194"
CODE_EXECUTION_API_KEY = "dify-sandbox"
FIRECRAWL_API_KEY = "fc-"
+TEI_EMBEDDING_SERVER_URL = "http://a.abc.com:11451"
+TEI_RERANK_SERVER_URL = "http://a.abc.com:11451"
[tool.poetry]
name = "dify-api"
@@ -107,7 +110,7 @@ authlib = "1.3.1"
azure-identity = "1.16.1"
azure-storage-blob = "12.13.0"
beautifulsoup4 = "4.12.2"
-boto3 = "1.34.136"
+boto3 = "1.34.148"
bs4 = "~0.0.1"
cachetools = "~5.3.0"
celery = "~5.3.6"
@@ -123,7 +126,7 @@ flask-migrate = "~4.0.5"
flask-restful = "~0.3.10"
Flask-SQLAlchemy = "~3.1.1"
gevent = "~23.9.1"
-gmpy2 = "~2.1.5"
+gmpy2 = "~2.2.1"
google-ai-generativelanguage = "0.6.1"
google-api-core = "2.18.0"
google-api-python-client = "2.90.0"
@@ -151,10 +154,9 @@ pycryptodome = "3.19.1"
pydantic = "~2.8.2"
pydantic-settings = "~2.3.4"
pydantic_extra_types = "~2.9.0"
-pydub = "~0.25.1"
pyjwt = "~2.8.0"
pypdfium2 = "~4.17.0"
-python = "^3.10"
+python = ">=3.10,<3.13"
python-docx = "~1.1.0"
python-dotenv = "1.0.0"
pyyaml = "~6.0.1"
@@ -173,11 +175,13 @@ transformers = "~4.35.0"
unstructured = { version = "~0.10.27", extras = ["docx", "epub", "md", "msg", "ppt", "pptx"] }
websocket-client = "~1.7.0"
werkzeug = "~3.0.1"
-xinference-client = "0.9.4"
+xinference-client = "0.13.3"
yarl = "~1.9.4"
zhipuai = "1.0.7"
rank-bm25 = "~0.2.2"
openpyxl = "^3.1.5"
+kaleido = "0.2.1"
+
############################################################
# Tool dependencies required by tool implementations
############################################################
@@ -186,7 +190,7 @@ openpyxl = "^3.1.5"
arxiv = "2.1.0"
matplotlib = "~3.8.2"
newspaper3k = "0.2.8"
-duckduckgo-search = "^6.1.8"
+duckduckgo-search = "^6.2.6"
jsonpath-ng = "1.6.1"
numexpr = "~2.9.0"
opensearch-py = "2.4.0"
@@ -204,7 +208,7 @@ cloudscraper = "1.2.71"
[tool.poetry.group.vdb.dependencies]
chromadb = "0.5.1"
oracledb = "~2.2.1"
-pgvecto-rs = "0.1.4"
+pgvecto-rs = { version = "~0.2.1", extras = ['sqlalchemy'] }
pgvector = "0.2.5"
pymilvus = "~2.4.4"
pymysql = "1.1.1"
@@ -238,5 +242,5 @@ pytest-mock = "~3.14.0"
optional = true
[tool.poetry.group.lint.dependencies]
-ruff = "~0.5.1"
+ruff = "~0.5.7"
dotenv-linter = "~0.5.0"
diff --git a/api/services/app_dsl_service.py b/api/services/app_dsl_service.py
index 3764166333..e16e5c715c 100644
--- a/api/services/app_dsl_service.py
+++ b/api/services/app_dsl_service.py
@@ -176,7 +176,7 @@ class AppDslService:
else:
cls._append_model_config_export_data(export_data, app_model)
- return yaml.dump(export_data)
+ return yaml.dump(export_data, allow_unicode=True)
@classmethod
def _check_or_fix_dsl(cls, import_data: dict) -> dict:
@@ -238,6 +238,8 @@ class AppDslService:
# init draft workflow
environment_variables_list = workflow_data.get('environment_variables') or []
environment_variables = [factory.build_variable_from_mapping(obj) for obj in environment_variables_list]
+ conversation_variables_list = workflow_data.get('conversation_variables') or []
+ conversation_variables = [factory.build_variable_from_mapping(obj) for obj in conversation_variables_list]
workflow_service = WorkflowService()
draft_workflow = workflow_service.sync_draft_workflow(
app_model=app,
@@ -246,6 +248,7 @@ class AppDslService:
unique_hash=None,
account=account,
environment_variables=environment_variables,
+ conversation_variables=conversation_variables,
)
workflow_service.publish_workflow(
app_model=app,
diff --git a/api/services/dataset_service.py b/api/services/dataset_service.py
index d5a54ba731..9052a0b785 100644
--- a/api/services/dataset_service.py
+++ b/api/services/dataset_service.py
@@ -197,6 +197,28 @@ class DatasetService:
f"{ex.description}"
)
+ @staticmethod
+ def check_embedding_model_setting(tenant_id: str, embedding_model_provider: str, embedding_model: str):
+ try:
+ model_manager = ModelManager()
+ model_manager.get_model_instance(
+ tenant_id=tenant_id,
+ provider=embedding_model_provider,
+ model_type=ModelType.TEXT_EMBEDDING,
+ model=embedding_model
+ )
+ except LLMBadRequestError:
+ raise ValueError(
+ "No Embedding Model available. Please configure a valid provider "
+ "in the Settings -> Model Provider."
+ )
+ except ProviderTokenNotInitError as ex:
+ raise ValueError(
+ f"The dataset is unavailable, due to: "
+ f"{ex.description}"
+ )
+
@staticmethod
def update_dataset(dataset_id, data, user):
data.pop('partial_member_list', None)
diff --git a/api/services/file_service.py b/api/services/file_service.py
index c686b190fe..9139962240 100644
--- a/api/services/file_service.py
+++ b/api/services/file_service.py
@@ -109,7 +109,7 @@ class FileService:
tenant_id=current_user.current_tenant_id,
storage_type=dify_config.STORAGE_TYPE,
key=file_key,
- name=text_name + '.txt',
+ name=text_name,
size=len(text),
extension='txt',
mime_type='text/plain',
diff --git a/api/services/hit_testing_service.py b/api/services/hit_testing_service.py
index 69274dff09..de5f6994b0 100644
--- a/api/services/hit_testing_service.py
+++ b/api/services/hit_testing_service.py
@@ -42,11 +42,12 @@ class HitTestingService:
dataset_id=dataset.id,
query=cls.escape_query_for_search(query),
top_k=retrieval_model.get('top_k', 2),
- score_threshold=retrieval_model['score_threshold']
+ score_threshold=retrieval_model.get('score_threshold', 0.0)
if retrieval_model['score_threshold_enabled'] else None,
- reranking_model=retrieval_model['reranking_model']
+ reranking_model=retrieval_model.get('reranking_model', None)
if retrieval_model['reranking_enable'] else None,
- reranking_mode=retrieval_model.get('reranking_mode', None),
+ reranking_mode=retrieval_model.get('reranking_mode') or 'reranking_model',
weights=retrieval_model.get('weights', None),
)
diff --git a/api/services/message_service.py b/api/services/message_service.py
index e310d70d53..491a914c77 100644
--- a/api/services/message_service.py
+++ b/api/services/message_service.py
@@ -7,7 +7,8 @@ from core.llm_generator.llm_generator import LLMGenerator
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelManager
from core.model_runtime.entities.model_entities import ModelType
-from core.ops.ops_trace_manager import TraceQueueManager, TraceTask, TraceTaskName
+from core.ops.entities.trace_entity import TraceTaskName
+from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.ops.utils import measure_time
from extensions.ext_database import db
from libs.infinite_scroll_pagination import InfiniteScrollPagination
diff --git a/api/services/model_load_balancing_service.py b/api/services/model_load_balancing_service.py
index 0983839996..80eb72140d 100644
--- a/api/services/model_load_balancing_service.py
+++ b/api/services/model_load_balancing_service.py
@@ -4,6 +4,7 @@ import logging
from json import JSONDecodeError
from typing import Optional
+from constants import HIDDEN_VALUE
from core.entities.provider_configuration import ProviderConfiguration
from core.helper import encrypter
from core.helper.model_provider_cache import ProviderCredentialsCache, ProviderCredentialsCacheType
@@ -511,7 +512,7 @@ class ModelLoadBalancingService:
for key, value in credentials.items():
if key in provider_credential_secret_variables:
# if send [__HIDDEN__] in secret input, it will be same as original value
- if value == '[__HIDDEN__]' and key in original_credentials:
+ if value == HIDDEN_VALUE and key in original_credentials:
credentials[key] = encrypter.decrypt_token(tenant_id, original_credentials[key])
if validate:
diff --git a/api/services/workflow/workflow_converter.py b/api/services/workflow/workflow_converter.py
index 06b129be69..f993608293 100644
--- a/api/services/workflow/workflow_converter.py
+++ b/api/services/workflow/workflow_converter.py
@@ -6,7 +6,6 @@ from core.app.app_config.entities import (
DatasetRetrieveConfigEntity,
EasyUIBasedAppConfig,
ExternalDataVariableEntity,
- FileExtraConfig,
ModelConfigEntity,
PromptTemplateEntity,
VariableEntity,
@@ -14,6 +13,7 @@ from core.app.app_config.entities import (
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfigManager
from core.app.apps.chat.app_config_manager import ChatAppConfigManager
from core.app.apps.completion.app_config_manager import CompletionAppConfigManager
+from core.file.file_obj import FileExtraConfig
from core.helper import encrypter
from core.model_runtime.entities.llm_entities import LLMMode
from core.model_runtime.utils.encoders import jsonable_encoder
diff --git a/api/services/workflow_app_service.py b/api/services/workflow_app_service.py
index 0476788375..c4d3d27631 100644
--- a/api/services/workflow_app_service.py
+++ b/api/services/workflow_app_service.py
@@ -1,3 +1,5 @@
+import uuid
+
from flask_sqlalchemy.pagination import Pagination
from sqlalchemy import and_, or_
@@ -25,20 +27,26 @@ class WorkflowAppService:
)
status = WorkflowRunStatus.value_of(args.get('status')) if args.get('status') else None
- if args['keyword'] or status:
+ keyword = args['keyword']
+ if keyword or status:
query = query.join(
WorkflowRun, WorkflowRun.id == WorkflowAppLog.workflow_run_id
)
- if args['keyword']:
- keyword_val = f"%{args['keyword'][:30]}%"
+ if keyword:
+ keyword_like_val = f"%{keyword[:30]}%"
keyword_conditions = [
- WorkflowRun.inputs.ilike(keyword_val),
- WorkflowRun.outputs.ilike(keyword_val),
+ WorkflowRun.inputs.ilike(keyword_like_val),
+ WorkflowRun.outputs.ilike(keyword_like_val),
# filter keyword by end user session id if created by end user role
- and_(WorkflowRun.created_by_role == 'end_user', EndUser.session_id.ilike(keyword_val))
+ and_(WorkflowRun.created_by_role == 'end_user', EndUser.session_id.ilike(keyword_like_val))
]
+ # filter keyword by workflow run id
+ keyword_uuid = self._safe_parse_uuid(keyword)
+ if keyword_uuid:
+ keyword_conditions.append(WorkflowRun.id == keyword_uuid)
+
query = query.outerjoin(
EndUser,
and_(WorkflowRun.created_by == EndUser.id, WorkflowRun.created_by_role == CreatedByRole.END_USER.value)
@@ -60,3 +68,14 @@ class WorkflowAppService:
)
return pagination
+
+ @staticmethod
+ def _safe_parse_uuid(value: str):
+ # fast check: a canonical UUID string is at least 32 characters long
+ if len(value) < 32:
+ return None
+
+ try:
+ return uuid.UUID(value)
+ except ValueError:
+ return None
diff --git a/api/services/workflow_service.py b/api/services/workflow_service.py
index fe89e5b6db..42ae0f2cd3 100644
--- a/api/services/workflow_service.py
+++ b/api/services/workflow_service.py
@@ -73,6 +73,7 @@ class WorkflowService:
unique_hash: Optional[str],
account: Account,
environment_variables: Sequence[Variable],
+ conversation_variables: Sequence[Variable],
) -> Workflow:
"""
Sync draft workflow
@@ -100,7 +101,8 @@ class WorkflowService:
graph=json.dumps(graph),
features=json.dumps(features),
created_by=account.id,
- environment_variables=environment_variables
+ environment_variables=environment_variables,
+ conversation_variables=conversation_variables,
)
db.session.add(workflow)
# update draft workflow if found
@@ -110,6 +112,7 @@ class WorkflowService:
workflow.updated_by = account.id
workflow.updated_at = datetime.now(timezone.utc).replace(tzinfo=None)
workflow.environment_variables = environment_variables
+ workflow.conversation_variables = conversation_variables
# commit db session changes
db.session.commit()
@@ -146,7 +149,8 @@ class WorkflowService:
graph=draft_workflow.graph,
features=draft_workflow.features,
created_by=account.id,
- environment_variables=draft_workflow.environment_variables
+ environment_variables=draft_workflow.environment_variables,
+ conversation_variables=draft_workflow.conversation_variables,
)
# commit db session changes
@@ -332,3 +336,25 @@ class WorkflowService:
)
else:
raise ValueError(f"Invalid app mode: {app_model.mode}")
+
+ @classmethod
+ def get_elapsed_time(cls, workflow_run_id: str) -> float:
+ """
+ Get elapsed time
+ """
+ elapsed_time = 0.0
+
+ # fetch workflow node execution by workflow_run_id
+ workflow_nodes = (
+ db.session.query(WorkflowNodeExecution)
+ .filter(WorkflowNodeExecution.workflow_run_id == workflow_run_id)
+ .order_by(WorkflowNodeExecution.created_at.asc())
+ .all()
+ )
+ if not workflow_nodes:
+ return elapsed_time
+
+ for node in workflow_nodes:
+ elapsed_time += node.elapsed_time
+
+ return elapsed_time
diff --git a/api/tasks/deal_dataset_vector_index_task.py b/api/tasks/deal_dataset_vector_index_task.py
index c1b0e7f1a4..ce93e111e5 100644
--- a/api/tasks/deal_dataset_vector_index_task.py
+++ b/api/tasks/deal_dataset_vector_index_task.py
@@ -42,31 +42,42 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str):
).all()
if dataset_documents:
- documents = []
+ dataset_documents_ids = [doc.id for doc in dataset_documents]
+ db.session.query(DatasetDocument).filter(DatasetDocument.id.in_(dataset_documents_ids)) \
+ .update({"indexing_status": "indexing"}, synchronize_session=False)
+ db.session.commit()
+
for dataset_document in dataset_documents:
- # delete from vector index
- segments = db.session.query(DocumentSegment).filter(
- DocumentSegment.document_id == dataset_document.id,
- DocumentSegment.enabled == True
- ) .order_by(DocumentSegment.position.asc()).all()
- for segment in segments:
- document = Document(
- page_content=segment.content,
- metadata={
- "doc_id": segment.index_node_id,
- "doc_hash": segment.index_node_hash,
- "document_id": segment.document_id,
- "dataset_id": segment.dataset_id,
- }
- )
+ try:
+ # add from vector index
+ segments = db.session.query(DocumentSegment).filter(
+ DocumentSegment.document_id == dataset_document.id,
+ DocumentSegment.enabled == True
+ ).order_by(DocumentSegment.position.asc()).all()
+ if segments:
+ documents = []
+ for segment in segments:
+ document = Document(
+ page_content=segment.content,
+ metadata={
+ "doc_id": segment.index_node_id,
+ "doc_hash": segment.index_node_hash,
+ "document_id": segment.document_id,
+ "dataset_id": segment.dataset_id,
+ }
+ )
- documents.append(document)
-
- # save vector index
- index_processor.load(dataset, documents, with_keywords=False)
+ documents.append(document)
+ # save vector index
+ index_processor.load(dataset, documents, with_keywords=False)
+ db.session.query(DatasetDocument).filter(DatasetDocument.id == dataset_document.id) \
+ .update({"indexing_status": "completed"}, synchronize_session=False)
+ db.session.commit()
+ except Exception as e:
+ db.session.query(DatasetDocument).filter(DatasetDocument.id == dataset_document.id) \
+ .update({"indexing_status": "error", "error": str(e)}, synchronize_session=False)
+ db.session.commit()
elif action == 'update':
- # clean index
- index_processor.clean(dataset, None, with_keywords=False)
dataset_documents = db.session.query(DatasetDocument).filter(
DatasetDocument.dataset_id == dataset_id,
DatasetDocument.indexing_status == 'completed',
@@ -75,28 +86,46 @@ def deal_dataset_vector_index_task(dataset_id: str, action: str):
).all()
# add new index
if dataset_documents:
- documents = []
+ # update document status
+ dataset_documents_ids = [doc.id for doc in dataset_documents]
+ db.session.query(DatasetDocument).filter(DatasetDocument.id.in_(dataset_documents_ids)) \
+ .update({"indexing_status": "indexing"}, synchronize_session=False)
+ db.session.commit()
+
+ # clean index
+ index_processor.clean(dataset, None, with_keywords=False)
+
for dataset_document in dataset_documents:
- # delete from vector index
- segments = db.session.query(DocumentSegment).filter(
- DocumentSegment.document_id == dataset_document.id,
- DocumentSegment.enabled == True
- ).order_by(DocumentSegment.position.asc()).all()
- for segment in segments:
- document = Document(
- page_content=segment.content,
- metadata={
- "doc_id": segment.index_node_id,
- "doc_hash": segment.index_node_hash,
- "document_id": segment.document_id,
- "dataset_id": segment.dataset_id,
- }
- )
+ # update from vector index
+ try:
+ segments = db.session.query(DocumentSegment).filter(
+ DocumentSegment.document_id == dataset_document.id,
+ DocumentSegment.enabled == True
+ ).order_by(DocumentSegment.position.asc()).all()
+ if segments:
+ documents = []
+ for segment in segments:
+ document = Document(
+ page_content=segment.content,
+ metadata={
+ "doc_id": segment.index_node_id,
+ "doc_hash": segment.index_node_hash,
+ "document_id": segment.document_id,
+ "dataset_id": segment.dataset_id,
+ }
+ )
- documents.append(document)
+ documents.append(document)
+ # save vector index
+ index_processor.load(dataset, documents, with_keywords=False)
+ db.session.query(DatasetDocument).filter(DatasetDocument.id == dataset_document.id) \
+ .update({"indexing_status": "completed"}, synchronize_session=False)
+ db.session.commit()
+ except Exception as e:
+ db.session.query(DatasetDocument).filter(DatasetDocument.id == dataset_document.id) \
+ .update({"indexing_status": "error", "error": str(e)}, synchronize_session=False)
+ db.session.commit()
- # save vector index
- index_processor.load(dataset, documents, with_keywords=False)
end_at = time.perf_counter()
logging.info(
diff --git a/api/tasks/ops_trace_task.py b/api/tasks/ops_trace_task.py
index 1d33609205..6b4cab55b3 100644
--- a/api/tasks/ops_trace_task.py
+++ b/api/tasks/ops_trace_task.py
@@ -22,10 +22,8 @@ def process_trace_tasks(tasks_data):
trace_info = tasks_data.get('trace_info')
app_id = tasks_data.get('app_id')
- conversation_id = tasks_data.get('conversation_id')
- message_id = tasks_data.get('message_id')
trace_info_type = tasks_data.get('trace_info_type')
- trace_instance = OpsTraceManager.get_ops_trace_instance(app_id, conversation_id, message_id)
+ trace_instance = OpsTraceManager.get_ops_trace_instance(app_id)
if trace_info.get('message_data'):
trace_info['message_data'] = Message.from_dict(data=trace_info['message_data'])
diff --git a/api/tasks/remove_app_and_related_data_task.py b/api/tasks/remove_app_and_related_data_task.py
index 378756e68c..4efe7ee38c 100644
--- a/api/tasks/remove_app_and_related_data_task.py
+++ b/api/tasks/remove_app_and_related_data_task.py
@@ -1,8 +1,10 @@
import logging
import time
+from collections.abc import Callable
import click
from celery import shared_task
+from sqlalchemy import delete
from sqlalchemy.exc import SQLAlchemyError
from extensions.ext_database import db
@@ -28,7 +30,7 @@ from models.model import (
)
from models.tools import WorkflowToolProvider
from models.web import PinnedConversation, SavedMessage
-from models.workflow import Workflow, WorkflowAppLog, WorkflowNodeExecution, WorkflowRun
+from models.workflow import ConversationVariable, Workflow, WorkflowAppLog, WorkflowNodeExecution, WorkflowRun
@shared_task(queue='app_deletion', bind=True, max_retries=3)
@@ -54,6 +56,7 @@ def remove_app_and_related_data_task(self, tenant_id: str, app_id: str):
_delete_app_tag_bindings(tenant_id, app_id)
_delete_end_users(tenant_id, app_id)
_delete_trace_app_configs(tenant_id, app_id)
+ _delete_conversation_variables(app_id=app_id)
end_at = time.perf_counter()
logging.info(click.style(f'App and related data deleted: {app_id} latency: {end_at - start_at}', fg='green'))
@@ -225,6 +228,13 @@ def _delete_app_conversations(tenant_id: str, app_id: str):
"conversation"
)
+def _delete_conversation_variables(*, app_id: str):
+ stmt = delete(ConversationVariable).where(ConversationVariable.app_id == app_id)
+ with db.engine.connect() as conn:
+ conn.execute(stmt)
+ conn.commit()
+ logging.info(click.style(f"Deleted conversation variables for app {app_id}", fg='green'))
+
def _delete_app_messages(tenant_id: str, app_id: str):
def del_message(message_id: str):
@@ -299,7 +309,7 @@ def _delete_trace_app_configs(tenant_id: str, app_id: str):
)
-def _delete_records(query_sql: str, params: dict, delete_func: callable, name: str) -> None:
+def _delete_records(query_sql: str, params: dict, delete_func: Callable, name: str) -> None:
while True:
with db.engine.begin() as conn:
rs = conn.execute(db.text(query_sql), params)
diff --git a/api/tests/integration_tests/.env.example b/api/tests/integration_tests/.env.example
index f29e5ef4d6..2d52399d29 100644
--- a/api/tests/integration_tests/.env.example
+++ b/api/tests/integration_tests/.env.example
@@ -79,4 +79,7 @@ CODE_EXECUTION_API_KEY=
VOLC_API_KEY=
VOLC_SECRET_KEY=
VOLC_MODEL_ENDPOINT_ID=
-VOLC_EMBEDDING_ENDPOINT_ID=
\ No newline at end of file
+VOLC_EMBEDDING_ENDPOINT_ID=
+
+# 360 AI Credentials
+ZHINAO_API_KEY=
diff --git a/api/tests/integration_tests/model_runtime/__mock/huggingface_tei.py b/api/tests/integration_tests/model_runtime/__mock/huggingface_tei.py
new file mode 100644
index 0000000000..2f66d707ca
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/__mock/huggingface_tei.py
@@ -0,0 +1,94 @@
+from core.model_runtime.model_providers.huggingface_tei.tei_helper import TeiModelExtraParameter
+
+
+class MockTEIClass:
+ @staticmethod
+ def get_tei_extra_parameter(server_url: str, model_name: str) -> TeiModelExtraParameter:
+ # During mock, we don't have a real server to query, so we just return a dummy value
+ if 'rerank' in model_name:
+ model_type = 'reranker'
+ else:
+ model_type = 'embedding'
+
+ return TeiModelExtraParameter(model_type=model_type, max_input_length=512, max_client_batch_size=1)
+
+ @staticmethod
+ def invoke_tokenize(server_url: str, texts: list[str]) -> list[list[dict]]:
+ # Use space as token separator, and split the text into tokens
+ tokenized_texts = []
+ for text in texts:
+ tokens = text.split(' ')
+ current_index = 0
+ tokenized_text = []
+ for idx, token in enumerate(tokens):
+ s_token = {
+ 'id': idx,
+ 'text': token,
+ 'special': False,
+ 'start': current_index,
+ 'stop': current_index + len(token),
+ }
+ current_index += len(token) + 1
+ tokenized_text.append(s_token)
+ tokenized_texts.append(tokenized_text)
+ return tokenized_texts
+
+ @staticmethod
+ def invoke_embeddings(server_url: str, texts: list[str]) -> dict:
+ # {
+ # "object": "list",
+ # "data": [
+ # {
+ # "object": "embedding",
+ # "embedding": [...],
+ # "index": 0
+ # }
+ # ],
+ # "model": "MODEL_NAME",
+ # "usage": {
+ # "prompt_tokens": 3,
+ # "total_tokens": 3
+ # }
+ # }
+ embeddings = []
+ for idx, text in enumerate(texts):
+ embedding = [0.1] * 768
+ embeddings.append(
+ {
+ 'object': 'embedding',
+ 'embedding': embedding,
+ 'index': idx,
+ }
+ )
+ return {
+ 'object': 'list',
+ 'data': embeddings,
+ 'model': 'MODEL_NAME',
+ 'usage': {
+ 'prompt_tokens': sum(len(text.split(' ')) for text in texts),
+ 'total_tokens': sum(len(text.split(' ')) for text in texts),
+ },
+ }
+
+ @staticmethod
+ def invoke_rerank(server_url: str, query: str, texts: list[str]) -> list[dict]:
+ # Example response:
+ # [
+ # {
+ # "index": 0,
+ # "text": "Deep Learning is ...",
+ # "score": 0.9950755
+ # }
+ # ]
+ reranked_docs = []
+ for idx, text in enumerate(texts):
+ reranked_docs.append(
+ {
+ 'index': idx,
+ 'text': text,
+ 'score': 0.9,
+ }
+ )
+ # For mock, only return the first document
+ break
+ return reranked_docs
diff --git a/api/tests/integration_tests/model_runtime/__mock/xinference.py b/api/tests/integration_tests/model_runtime/__mock/xinference.py
index ddb18fe919..7cb0a1318e 100644
--- a/api/tests/integration_tests/model_runtime/__mock/xinference.py
+++ b/api/tests/integration_tests/model_runtime/__mock/xinference.py
@@ -106,7 +106,7 @@ class MockXinferenceClass:
def _check_cluster_authenticated(self):
self._cluster_authed = True
- def rerank(self: RESTfulRerankModelHandle, documents: list[str], query: str, top_n: int) -> dict:
+ def rerank(self: RESTfulRerankModelHandle, documents: list[str], query: str, top_n: int, return_documents: bool) -> dict:
# check if self._model_uid is a valid uuid
if not re.match(r'[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}', self._model_uid) and \
self._model_uid != 'rerank':
diff --git a/api/tests/integration_tests/model_runtime/huggingface_tei/__init__.py b/api/tests/integration_tests/model_runtime/huggingface_tei/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/tests/integration_tests/model_runtime/huggingface_tei/test_embeddings.py b/api/tests/integration_tests/model_runtime/huggingface_tei/test_embeddings.py
new file mode 100644
index 0000000000..da65c7dfc7
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/huggingface_tei/test_embeddings.py
@@ -0,0 +1,72 @@
+import os
+
+import pytest
+from core.model_runtime.model_providers.huggingface_tei.text_embedding.text_embedding import TeiHelper
+
+from core.model_runtime.entities.text_embedding_entities import TextEmbeddingResult
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.huggingface_tei.text_embedding.text_embedding import (
+ HuggingfaceTeiTextEmbeddingModel,
+)
+from tests.integration_tests.model_runtime.__mock.huggingface_tei import MockTEIClass
+
+MOCK = os.getenv('MOCK_SWITCH', 'false').lower() == 'true'
+
+
+@pytest.fixture
+def setup_tei_mock(request, monkeypatch: pytest.MonkeyPatch):
+ if MOCK:
+ monkeypatch.setattr(TeiHelper, 'get_tei_extra_parameter', MockTEIClass.get_tei_extra_parameter)
+ monkeypatch.setattr(TeiHelper, 'invoke_tokenize', MockTEIClass.invoke_tokenize)
+ monkeypatch.setattr(TeiHelper, 'invoke_embeddings', MockTEIClass.invoke_embeddings)
+ monkeypatch.setattr(TeiHelper, 'invoke_rerank', MockTEIClass.invoke_rerank)
+ yield
+
+ if MOCK:
+ monkeypatch.undo()
+
+
+@pytest.mark.parametrize('setup_tei_mock', [['none']], indirect=True)
+def test_validate_credentials(setup_tei_mock):
+ model = HuggingfaceTeiTextEmbeddingModel()
+ # model name is only used in mock
+ model_name = 'embedding'
+
+ if MOCK:
+ # The TEI provider checks the model type via the API endpoint; on a real server the model type
+ # is always correct, so the mismatch case only needs to be exercised in the mock.
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model='reranker',
+ credentials={
+ 'server_url': os.environ.get('TEI_EMBEDDING_SERVER_URL', ""),
+ }
+ )
+
+ model.validate_credentials(
+ model=model_name,
+ credentials={
+ 'server_url': os.environ.get('TEI_EMBEDDING_SERVER_URL', ""),
+ }
+ )
+
+@pytest.mark.parametrize('setup_tei_mock', [['none']], indirect=True)
+def test_invoke_model(setup_tei_mock):
+ model = HuggingfaceTeiTextEmbeddingModel()
+ model_name = 'embedding'
+
+ result = model.invoke(
+ model=model_name,
+ credentials={
+ 'server_url': os.environ.get('TEI_EMBEDDING_SERVER_URL', ""),
+ },
+ texts=[
+ "hello",
+ "world"
+ ],
+ user="abc-123"
+ )
+
+ assert isinstance(result, TextEmbeddingResult)
+ assert len(result.embeddings) == 2
+ assert result.usage.total_tokens > 0
diff --git a/api/tests/integration_tests/model_runtime/huggingface_tei/test_rerank.py b/api/tests/integration_tests/model_runtime/huggingface_tei/test_rerank.py
new file mode 100644
index 0000000000..57e229e6be
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/huggingface_tei/test_rerank.py
@@ -0,0 +1,76 @@
+import os
+
+import pytest
+
+from core.model_runtime.entities.rerank_entities import RerankDocument, RerankResult
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.huggingface_tei.rerank.rerank import (
+ HuggingfaceTeiRerankModel,
+)
+from core.model_runtime.model_providers.huggingface_tei.text_embedding.text_embedding import TeiHelper
+from tests.integration_tests.model_runtime.__mock.huggingface_tei import MockTEIClass
+
+MOCK = os.getenv('MOCK_SWITCH', 'false').lower() == 'true'
+
+
+@pytest.fixture
+def setup_tei_mock(request, monkeypatch: pytest.MonkeyPatch):
+ if MOCK:
+ monkeypatch.setattr(TeiHelper, 'get_tei_extra_parameter', MockTEIClass.get_tei_extra_parameter)
+ monkeypatch.setattr(TeiHelper, 'invoke_tokenize', MockTEIClass.invoke_tokenize)
+ monkeypatch.setattr(TeiHelper, 'invoke_embeddings', MockTEIClass.invoke_embeddings)
+ monkeypatch.setattr(TeiHelper, 'invoke_rerank', MockTEIClass.invoke_rerank)
+ yield
+
+ if MOCK:
+ monkeypatch.undo()
+
+@pytest.mark.parametrize('setup_tei_mock', [['none']], indirect=True)
+def test_validate_credentials(setup_tei_mock):
+ model = HuggingfaceTeiRerankModel()
+    # The model name is only used by the mock
+ model_name = 'reranker'
+
+ if MOCK:
+        # The TEI provider determines the model type from the API endpoint; on a real
+        # server the model type is always correct, so this check only applies to the mock.
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model='embedding',
+ credentials={
+ 'server_url': os.environ.get('TEI_RERANK_SERVER_URL'),
+ }
+ )
+
+ model.validate_credentials(
+ model=model_name,
+ credentials={
+ 'server_url': os.environ.get('TEI_RERANK_SERVER_URL'),
+ }
+ )
+
+@pytest.mark.parametrize('setup_tei_mock', [['none']], indirect=True)
+def test_invoke_model(setup_tei_mock):
+ model = HuggingfaceTeiRerankModel()
+    # The model name is only used by the mock
+ model_name = 'reranker'
+
+ result = model.invoke(
+ model=model_name,
+ credentials={
+ 'server_url': os.environ.get('TEI_RERANK_SERVER_URL'),
+ },
+ query="Who is Kasumi?",
+ docs=[
+ "Kasumi is a girl's name of Japanese origin meaning \"mist\".",
+ "Her music is a kawaii bass, a mix of future bass, pop, and kawaii music ",
+ "and she leads a team named PopiParty."
+ ],
+ score_threshold=0.8
+ )
+
+ assert isinstance(result, RerankResult)
+ assert len(result.docs) == 1
+ assert result.docs[0].index == 0
+ assert result.docs[0].score >= 0.8
diff --git a/api/tests/integration_tests/model_runtime/openai_api_compatible/test_speech2text.py b/api/tests/integration_tests/model_runtime/openai_api_compatible/test_speech2text.py
new file mode 100644
index 0000000000..61079104dc
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/openai_api_compatible/test_speech2text.py
@@ -0,0 +1,59 @@
+import os
+
+import pytest
+
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.openai_api_compatible.speech2text.speech2text import (
+ OAICompatSpeech2TextModel,
+)
+
+
+def test_validate_credentials():
+ model = OAICompatSpeech2TextModel()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model="whisper-1",
+ credentials={
+ "api_key": "invalid_key",
+ "endpoint_url": "https://api.openai.com/v1/"
+ },
+ )
+
+ model.validate_credentials(
+ model="whisper-1",
+ credentials={
+ "api_key": os.environ.get("OPENAI_API_KEY"),
+ "endpoint_url": "https://api.openai.com/v1/"
+ },
+ )
+
+
+def test_invoke_model():
+ model = OAICompatSpeech2TextModel()
+
+ # Get the directory of the current file
+ current_dir = os.path.dirname(os.path.abspath(__file__))
+
+ # Get assets directory
+ assets_dir = os.path.join(os.path.dirname(current_dir), "assets")
+
+ # Construct the path to the audio file
+ audio_file_path = os.path.join(assets_dir, "audio.mp3")
+
+    # Keep the file open while invoking the model; assigning the handle and
+    # exiting the `with` block would close it before the request is sent.
+    with open(audio_file_path, "rb") as audio_file:
+        result = model.invoke(
+            model="whisper-1",
+            credentials={
+                "api_key": os.environ.get("OPENAI_API_KEY"),
+                "endpoint_url": "https://api.openai.com/v1/"
+            },
+            file=audio_file,
+            user="abc-123",
+        )
+
+ assert isinstance(result, str)
+ assert result == '1, 2, 3, 4, 5, 6, 7, 8, 9, 10'
diff --git a/api/tests/integration_tests/model_runtime/siliconflow/test_speech2text.py b/api/tests/integration_tests/model_runtime/siliconflow/test_speech2text.py
new file mode 100644
index 0000000000..82b7921c85
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/siliconflow/test_speech2text.py
@@ -0,0 +1,53 @@
+import os
+
+import pytest
+
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.siliconflow.speech2text.speech2text import SiliconflowSpeech2TextModel
+
+
+def test_validate_credentials():
+ model = SiliconflowSpeech2TextModel()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model="iic/SenseVoiceSmall",
+ credentials={
+ "api_key": "invalid_key"
+ },
+ )
+
+ model.validate_credentials(
+ model="iic/SenseVoiceSmall",
+ credentials={
+ "api_key": os.environ.get("API_KEY")
+ },
+ )
+
+
+def test_invoke_model():
+ model = SiliconflowSpeech2TextModel()
+
+ # Get the directory of the current file
+ current_dir = os.path.dirname(os.path.abspath(__file__))
+
+ # Get assets directory
+ assets_dir = os.path.join(os.path.dirname(current_dir), "assets")
+
+ # Construct the path to the audio file
+ audio_file_path = os.path.join(assets_dir, "audio.mp3")
+
+    # Keep the file open while invoking the model; assigning the handle and
+    # exiting the `with` block would close it before the request is sent.
+    with open(audio_file_path, "rb") as audio_file:
+        result = model.invoke(
+            model="iic/SenseVoiceSmall",
+            credentials={
+                "api_key": os.environ.get("API_KEY")
+            },
+            file=audio_file
+        )
+
+ assert isinstance(result, str)
+ assert result == '1,2,3,4,5,6,7,8,9,10.'
diff --git a/api/tests/integration_tests/model_runtime/siliconflow/test_text_embedding.py b/api/tests/integration_tests/model_runtime/siliconflow/test_text_embedding.py
new file mode 100644
index 0000000000..18bd2e893a
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/siliconflow/test_text_embedding.py
@@ -0,0 +1,62 @@
+import os
+
+import pytest
+
+from core.model_runtime.entities.text_embedding_entities import TextEmbeddingResult
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.siliconflow.text_embedding.text_embedding import (
+ SiliconflowTextEmbeddingModel,
+)
+
+
+def test_validate_credentials():
+ model = SiliconflowTextEmbeddingModel()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model="BAAI/bge-large-zh-v1.5",
+ credentials={
+ "api_key": "invalid_key"
+ },
+ )
+
+ model.validate_credentials(
+ model="BAAI/bge-large-zh-v1.5",
+ credentials={
+ "api_key": os.environ.get("API_KEY"),
+ },
+ )
+
+
+def test_invoke_model():
+ model = SiliconflowTextEmbeddingModel()
+
+ result = model.invoke(
+ model="BAAI/bge-large-zh-v1.5",
+ credentials={
+ "api_key": os.environ.get("API_KEY"),
+ },
+ texts=[
+ "hello",
+ "world",
+ ],
+ user="abc-123",
+ )
+
+ assert isinstance(result, TextEmbeddingResult)
+ assert len(result.embeddings) == 2
+ assert result.usage.total_tokens == 6
+
+
+def test_get_num_tokens():
+ model = SiliconflowTextEmbeddingModel()
+
+ num_tokens = model.get_num_tokens(
+ model="BAAI/bge-large-zh-v1.5",
+ credentials={
+ "api_key": os.environ.get("API_KEY"),
+ },
+ texts=["hello", "world"],
+ )
+
+ assert num_tokens == 2
diff --git a/api/tests/integration_tests/model_runtime/upstage/__init__.py b/api/tests/integration_tests/model_runtime/upstage/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/tests/integration_tests/model_runtime/upstage/test_llm.py b/api/tests/integration_tests/model_runtime/upstage/test_llm.py
new file mode 100644
index 0000000000..c35580a8b1
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/upstage/test_llm.py
@@ -0,0 +1,245 @@
+import os
+from collections.abc import Generator
+
+import pytest
+
+from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta
+from core.model_runtime.entities.message_entities import (
+ AssistantPromptMessage,
+ PromptMessageTool,
+ SystemPromptMessage,
+ UserPromptMessage,
+)
+from core.model_runtime.entities.model_entities import AIModelEntity
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.upstage.llm.llm import UpstageLargeLanguageModel
+
+"""FOR MOCK FIXTURES, DO NOT REMOVE"""
+from tests.integration_tests.model_runtime.__mock.openai import setup_openai_mock
+
+
+def test_predefined_models():
+ model = UpstageLargeLanguageModel()
+ model_schemas = model.predefined_models()
+
+ assert len(model_schemas) >= 1
+ assert isinstance(model_schemas[0], AIModelEntity)
+
+@pytest.mark.parametrize('setup_openai_mock', [['chat']], indirect=True)
+def test_validate_credentials_for_chat_model(setup_openai_mock):
+ model = UpstageLargeLanguageModel()
+
+ with pytest.raises(CredentialsValidateFailedError):
+        # The model name is set to gpt-3.5-turbo because the request goes through the OpenAI mock
+ model.validate_credentials(
+ model='gpt-3.5-turbo',
+ credentials={
+ 'upstage_api_key': 'invalid_key'
+ }
+ )
+
+ model.validate_credentials(
+ model='solar-1-mini-chat',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ }
+ )
+
+@pytest.mark.parametrize('setup_openai_mock', [['chat']], indirect=True)
+def test_invoke_chat_model(setup_openai_mock):
+ model = UpstageLargeLanguageModel()
+
+ result = model.invoke(
+ model='solar-1-mini-chat',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ },
+ prompt_messages=[
+ SystemPromptMessage(
+ content='You are a helpful AI assistant.',
+ ),
+ UserPromptMessage(
+ content='Hello World!'
+ )
+ ],
+ model_parameters={
+ 'temperature': 0.0,
+ 'top_p': 1.0,
+ 'presence_penalty': 0.0,
+ 'frequency_penalty': 0.0,
+ 'max_tokens': 10
+ },
+ stop=['How'],
+ stream=False,
+ user="abc-123"
+ )
+
+ assert isinstance(result, LLMResult)
+ assert len(result.message.content) > 0
+
+@pytest.mark.parametrize('setup_openai_mock', [['chat']], indirect=True)
+def test_invoke_chat_model_with_tools(setup_openai_mock):
+ model = UpstageLargeLanguageModel()
+
+ result = model.invoke(
+ model='solar-1-mini-chat',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ },
+ prompt_messages=[
+ SystemPromptMessage(
+ content='You are a helpful AI assistant.',
+ ),
+ UserPromptMessage(
+ content="what's the weather today in London?",
+ )
+ ],
+ model_parameters={
+ 'temperature': 0.0,
+ 'max_tokens': 100
+ },
+ tools=[
+ PromptMessageTool(
+ name='get_weather',
+ description='Determine weather in my location',
+ parameters={
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The city and state e.g. San Francisco, CA"
+ },
+ "unit": {
+ "type": "string",
+ "enum": [
+ "c",
+ "f"
+ ]
+ }
+ },
+ "required": [
+ "location"
+ ]
+ }
+ ),
+ PromptMessageTool(
+ name='get_stock_price',
+ description='Get the current stock price',
+ parameters={
+ "type": "object",
+ "properties": {
+ "symbol": {
+ "type": "string",
+ "description": "The stock symbol"
+ }
+ },
+ "required": [
+ "symbol"
+ ]
+ }
+ )
+ ],
+ stream=False,
+ user="abc-123"
+ )
+
+ assert isinstance(result, LLMResult)
+ assert isinstance(result.message, AssistantPromptMessage)
+ assert len(result.message.tool_calls) > 0
+
+@pytest.mark.parametrize('setup_openai_mock', [['chat']], indirect=True)
+def test_invoke_stream_chat_model(setup_openai_mock):
+ model = UpstageLargeLanguageModel()
+
+ result = model.invoke(
+ model='solar-1-mini-chat',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ },
+ prompt_messages=[
+ SystemPromptMessage(
+ content='You are a helpful AI assistant.',
+ ),
+ UserPromptMessage(
+ content='Hello World!'
+ )
+ ],
+ model_parameters={
+ 'temperature': 0.0,
+ 'max_tokens': 100
+ },
+ stream=True,
+ user="abc-123"
+ )
+
+ assert isinstance(result, Generator)
+
+ for chunk in result:
+ assert isinstance(chunk, LLMResultChunk)
+ assert isinstance(chunk.delta, LLMResultChunkDelta)
+ assert isinstance(chunk.delta.message, AssistantPromptMessage)
+        if chunk.delta.finish_reason is None:
+            assert len(chunk.delta.message.content) > 0
+ if chunk.delta.finish_reason is not None:
+ assert chunk.delta.usage is not None
+ assert chunk.delta.usage.completion_tokens > 0
+
+
+def test_get_num_tokens():
+ model = UpstageLargeLanguageModel()
+
+ num_tokens = model.get_num_tokens(
+ model='solar-1-mini-chat',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ },
+ prompt_messages=[
+ UserPromptMessage(
+ content='Hello World!'
+ )
+ ]
+ )
+
+ assert num_tokens == 13
+
+ num_tokens = model.get_num_tokens(
+ model='solar-1-mini-chat',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ },
+ prompt_messages=[
+ SystemPromptMessage(
+ content='You are a helpful AI assistant.',
+ ),
+ UserPromptMessage(
+ content='Hello World!'
+ )
+ ],
+ tools=[
+ PromptMessageTool(
+ name='get_weather',
+ description='Determine weather in my location',
+ parameters={
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The city and state e.g. San Francisco, CA"
+ },
+ "unit": {
+ "type": "string",
+ "enum": [
+ "c",
+ "f"
+ ]
+ }
+ },
+ "required": [
+ "location"
+ ]
+ }
+ ),
+ ]
+ )
+
+ assert num_tokens == 106
diff --git a/api/tests/integration_tests/model_runtime/upstage/test_provider.py b/api/tests/integration_tests/model_runtime/upstage/test_provider.py
new file mode 100644
index 0000000000..c33eef49b2
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/upstage/test_provider.py
@@ -0,0 +1,23 @@
+import os
+
+import pytest
+
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.upstage.upstage import UpstageProvider
+from tests.integration_tests.model_runtime.__mock.openai import setup_openai_mock
+
+
+@pytest.mark.parametrize('setup_openai_mock', [['chat']], indirect=True)
+def test_validate_provider_credentials(setup_openai_mock):
+ provider = UpstageProvider()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ provider.validate_provider_credentials(
+ credentials={}
+ )
+
+ provider.validate_provider_credentials(
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ }
+ )
diff --git a/api/tests/integration_tests/model_runtime/upstage/test_text_embedding.py b/api/tests/integration_tests/model_runtime/upstage/test_text_embedding.py
new file mode 100644
index 0000000000..54135a0e74
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/upstage/test_text_embedding.py
@@ -0,0 +1,67 @@
+import os
+
+import pytest
+
+from core.model_runtime.entities.text_embedding_entities import TextEmbeddingResult
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.upstage.text_embedding.text_embedding import UpstageTextEmbeddingModel
+from tests.integration_tests.model_runtime.__mock.openai import setup_openai_mock
+
+
+@pytest.mark.parametrize('setup_openai_mock', [['text_embedding']], indirect=True)
+def test_validate_credentials(setup_openai_mock):
+ model = UpstageTextEmbeddingModel()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model='solar-embedding-1-large-passage',
+ credentials={
+ 'upstage_api_key': 'invalid_key'
+ }
+ )
+
+ model.validate_credentials(
+ model='solar-embedding-1-large-passage',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY')
+ }
+ )
+
+@pytest.mark.parametrize('setup_openai_mock', [['text_embedding']], indirect=True)
+def test_invoke_model(setup_openai_mock):
+ model = UpstageTextEmbeddingModel()
+
+ result = model.invoke(
+ model='solar-embedding-1-large-passage',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY'),
+ },
+ texts=[
+ "hello",
+ "world",
+ " ".join(["long_text"] * 100),
+ " ".join(["another_long_text"] * 100)
+ ],
+ user="abc-123"
+ )
+
+ assert isinstance(result, TextEmbeddingResult)
+ assert len(result.embeddings) == 4
+ assert result.usage.total_tokens == 2
+
+
+def test_get_num_tokens():
+ model = UpstageTextEmbeddingModel()
+
+ num_tokens = model.get_num_tokens(
+ model='solar-embedding-1-large-passage',
+ credentials={
+ 'upstage_api_key': os.environ.get('UPSTAGE_API_KEY'),
+ },
+ texts=[
+ "hello",
+ "world"
+ ]
+ )
+
+ assert num_tokens == 5
diff --git a/api/tests/integration_tests/model_runtime/zhinao/__init__.py b/api/tests/integration_tests/model_runtime/zhinao/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/api/tests/integration_tests/model_runtime/zhinao/test_llm.py b/api/tests/integration_tests/model_runtime/zhinao/test_llm.py
new file mode 100644
index 0000000000..47a5b6cae2
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/zhinao/test_llm.py
@@ -0,0 +1,106 @@
+import os
+from collections.abc import Generator
+
+import pytest
+
+from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta
+from core.model_runtime.entities.message_entities import AssistantPromptMessage, SystemPromptMessage, UserPromptMessage
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.zhinao.llm.llm import ZhinaoLargeLanguageModel
+
+
+def test_validate_credentials():
+ model = ZhinaoLargeLanguageModel()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ model.validate_credentials(
+ model='360gpt2-pro',
+ credentials={
+ 'api_key': 'invalid_key'
+ }
+ )
+
+ model.validate_credentials(
+ model='360gpt2-pro',
+ credentials={
+ 'api_key': os.environ.get('ZHINAO_API_KEY')
+ }
+ )
+
+
+def test_invoke_model():
+ model = ZhinaoLargeLanguageModel()
+
+ response = model.invoke(
+ model='360gpt2-pro',
+ credentials={
+ 'api_key': os.environ.get('ZHINAO_API_KEY')
+ },
+ prompt_messages=[
+ UserPromptMessage(
+ content='Who are you?'
+ )
+ ],
+ model_parameters={
+ 'temperature': 0.5,
+ 'max_tokens': 10
+ },
+ stop=['How'],
+ stream=False,
+ user="abc-123"
+ )
+
+ assert isinstance(response, LLMResult)
+ assert len(response.message.content) > 0
+
+
+def test_invoke_stream_model():
+ model = ZhinaoLargeLanguageModel()
+
+ response = model.invoke(
+ model='360gpt2-pro',
+ credentials={
+ 'api_key': os.environ.get('ZHINAO_API_KEY')
+ },
+ prompt_messages=[
+ UserPromptMessage(
+ content='Hello World!'
+ )
+ ],
+ model_parameters={
+ 'temperature': 0.5,
+ 'max_tokens': 100,
+ 'seed': 1234
+ },
+ stream=True,
+ user="abc-123"
+ )
+
+ assert isinstance(response, Generator)
+
+ for chunk in response:
+ assert isinstance(chunk, LLMResultChunk)
+ assert isinstance(chunk.delta, LLMResultChunkDelta)
+ assert isinstance(chunk.delta.message, AssistantPromptMessage)
+        if chunk.delta.finish_reason is None:
+            assert len(chunk.delta.message.content) > 0
+
+
+def test_get_num_tokens():
+ model = ZhinaoLargeLanguageModel()
+
+ num_tokens = model.get_num_tokens(
+ model='360gpt2-pro',
+ credentials={
+ 'api_key': os.environ.get('ZHINAO_API_KEY')
+ },
+ prompt_messages=[
+ SystemPromptMessage(
+ content='You are a helpful AI assistant.',
+ ),
+ UserPromptMessage(
+ content='Hello World!'
+ )
+ ]
+ )
+
+ assert num_tokens == 21
diff --git a/api/tests/integration_tests/model_runtime/zhinao/test_provider.py b/api/tests/integration_tests/model_runtime/zhinao/test_provider.py
new file mode 100644
index 0000000000..87b0e6c2d9
--- /dev/null
+++ b/api/tests/integration_tests/model_runtime/zhinao/test_provider.py
@@ -0,0 +1,21 @@
+import os
+
+import pytest
+
+from core.model_runtime.errors.validate import CredentialsValidateFailedError
+from core.model_runtime.model_providers.zhinao.zhinao import ZhinaoProvider
+
+
+def test_validate_provider_credentials():
+ provider = ZhinaoProvider()
+
+ with pytest.raises(CredentialsValidateFailedError):
+ provider.validate_provider_credentials(
+ credentials={}
+ )
+
+ provider.validate_provider_credentials(
+ credentials={
+ 'api_key': os.environ.get('ZHINAO_API_KEY')
+ }
+ )
diff --git a/api/tests/unit_tests/core/app/segments/test_factory.py b/api/tests/unit_tests/core/app/segments/test_factory.py
index 85321ee374..a8429b9c1b 100644
--- a/api/tests/unit_tests/core/app/segments/test_factory.py
+++ b/api/tests/unit_tests/core/app/segments/test_factory.py
@@ -7,15 +7,16 @@ from core.app.segments import (
ArrayNumberVariable,
ArrayObjectVariable,
ArrayStringVariable,
+ FileSegment,
FileVariable,
FloatVariable,
IntegerVariable,
- NoneSegment,
ObjectSegment,
SecretVariable,
StringVariable,
factory,
)
+from core.app.segments.exc import VariableError
def test_string_variable():
@@ -44,7 +45,7 @@ def test_secret_variable():
def test_invalid_value_type():
test_data = {'value_type': 'unknown', 'name': 'test_invalid', 'value': 'value'}
- with pytest.raises(ValueError):
+ with pytest.raises(VariableError):
factory.build_variable_from_mapping(test_data)
@@ -67,7 +68,7 @@ def test_build_a_object_variable_with_none_value():
}
)
assert isinstance(var, ObjectSegment)
- assert isinstance(var.value['key1'], NoneSegment)
+ assert var.value['key1'] is None
def test_object_variable():
@@ -77,26 +78,14 @@ def test_object_variable():
'name': 'test_object',
'description': 'Description of the variable.',
'value': {
- 'key1': {
- 'id': str(uuid4()),
- 'value_type': 'string',
- 'name': 'text',
- 'value': 'text',
- 'description': 'Description of the variable.',
- },
- 'key2': {
- 'id': str(uuid4()),
- 'value_type': 'number',
- 'name': 'number',
- 'value': 1,
- 'description': 'Description of the variable.',
- },
+ 'key1': 'text',
+ 'key2': 2,
},
}
variable = factory.build_variable_from_mapping(mapping)
assert isinstance(variable, ObjectSegment)
- assert isinstance(variable.value['key1'], StringVariable)
- assert isinstance(variable.value['key2'], IntegerVariable)
+ assert isinstance(variable.value['key1'], str)
+ assert isinstance(variable.value['key2'], int)
def test_array_string_variable():
@@ -106,26 +95,14 @@ def test_array_string_variable():
'name': 'test_array',
'description': 'Description of the variable.',
'value': [
- {
- 'id': str(uuid4()),
- 'value_type': 'string',
- 'name': 'text',
- 'value': 'text',
- 'description': 'Description of the variable.',
- },
- {
- 'id': str(uuid4()),
- 'value_type': 'string',
- 'name': 'text',
- 'value': 'text',
- 'description': 'Description of the variable.',
- },
+ 'text',
+ 'text',
],
}
variable = factory.build_variable_from_mapping(mapping)
assert isinstance(variable, ArrayStringVariable)
- assert isinstance(variable.value[0], StringVariable)
- assert isinstance(variable.value[1], StringVariable)
+ assert isinstance(variable.value[0], str)
+ assert isinstance(variable.value[1], str)
def test_array_number_variable():
@@ -135,26 +112,14 @@ def test_array_number_variable():
'name': 'test_array',
'description': 'Description of the variable.',
'value': [
- {
- 'id': str(uuid4()),
- 'value_type': 'number',
- 'name': 'number',
- 'value': 1,
- 'description': 'Description of the variable.',
- },
- {
- 'id': str(uuid4()),
- 'value_type': 'number',
- 'name': 'number',
- 'value': 2.0,
- 'description': 'Description of the variable.',
- },
+ 1,
+ 2.0,
],
}
variable = factory.build_variable_from_mapping(mapping)
assert isinstance(variable, ArrayNumberVariable)
- assert isinstance(variable.value[0], IntegerVariable)
- assert isinstance(variable.value[1], FloatVariable)
+ assert isinstance(variable.value[0], int)
+ assert isinstance(variable.value[1], float)
def test_array_object_variable():
@@ -165,59 +130,23 @@ def test_array_object_variable():
'description': 'Description of the variable.',
'value': [
{
- 'id': str(uuid4()),
- 'value_type': 'object',
- 'name': 'object',
- 'description': 'Description of the variable.',
- 'value': {
- 'key1': {
- 'id': str(uuid4()),
- 'value_type': 'string',
- 'name': 'text',
- 'value': 'text',
- 'description': 'Description of the variable.',
- },
- 'key2': {
- 'id': str(uuid4()),
- 'value_type': 'number',
- 'name': 'number',
- 'value': 1,
- 'description': 'Description of the variable.',
- },
- },
+ 'key1': 'text',
+ 'key2': 1,
},
{
- 'id': str(uuid4()),
- 'value_type': 'object',
- 'name': 'object',
- 'description': 'Description of the variable.',
- 'value': {
- 'key1': {
- 'id': str(uuid4()),
- 'value_type': 'string',
- 'name': 'text',
- 'value': 'text',
- 'description': 'Description of the variable.',
- },
- 'key2': {
- 'id': str(uuid4()),
- 'value_type': 'number',
- 'name': 'number',
- 'value': 1,
- 'description': 'Description of the variable.',
- },
- },
+ 'key1': 'text',
+ 'key2': 1,
},
],
}
variable = factory.build_variable_from_mapping(mapping)
assert isinstance(variable, ArrayObjectVariable)
- assert isinstance(variable.value[0], ObjectSegment)
- assert isinstance(variable.value[1], ObjectSegment)
- assert isinstance(variable.value[0].value['key1'], StringVariable)
- assert isinstance(variable.value[0].value['key2'], IntegerVariable)
- assert isinstance(variable.value[1].value['key1'], StringVariable)
- assert isinstance(variable.value[1].value['key2'], IntegerVariable)
+ assert isinstance(variable.value[0], dict)
+ assert isinstance(variable.value[1], dict)
+ assert isinstance(variable.value[0]['key1'], str)
+ assert isinstance(variable.value[0]['key2'], int)
+ assert isinstance(variable.value[1]['key1'], str)
+ assert isinstance(variable.value[1]['key2'], int)
def test_file_variable():
@@ -257,51 +186,53 @@ def test_array_file_variable():
'value': [
{
'id': str(uuid4()),
- 'name': 'file',
- 'value_type': 'file',
- 'value': {
- 'id': str(uuid4()),
- 'tenant_id': 'tenant_id',
- 'type': 'image',
- 'transfer_method': 'local_file',
- 'url': 'url',
- 'related_id': 'related_id',
- 'extra_config': {
- 'image_config': {
- 'width': 100,
- 'height': 100,
- },
+ 'tenant_id': 'tenant_id',
+ 'type': 'image',
+ 'transfer_method': 'local_file',
+ 'url': 'url',
+ 'related_id': 'related_id',
+ 'extra_config': {
+ 'image_config': {
+ 'width': 100,
+ 'height': 100,
},
- 'filename': 'filename',
- 'extension': 'extension',
- 'mime_type': 'mime_type',
},
+ 'filename': 'filename',
+ 'extension': 'extension',
+ 'mime_type': 'mime_type',
},
{
'id': str(uuid4()),
- 'name': 'file',
- 'value_type': 'file',
- 'value': {
- 'id': str(uuid4()),
- 'tenant_id': 'tenant_id',
- 'type': 'image',
- 'transfer_method': 'local_file',
- 'url': 'url',
- 'related_id': 'related_id',
- 'extra_config': {
- 'image_config': {
- 'width': 100,
- 'height': 100,
- },
+ 'tenant_id': 'tenant_id',
+ 'type': 'image',
+ 'transfer_method': 'local_file',
+ 'url': 'url',
+ 'related_id': 'related_id',
+ 'extra_config': {
+ 'image_config': {
+ 'width': 100,
+ 'height': 100,
},
- 'filename': 'filename',
- 'extension': 'extension',
- 'mime_type': 'mime_type',
},
+ 'filename': 'filename',
+ 'extension': 'extension',
+ 'mime_type': 'mime_type',
},
],
}
variable = factory.build_variable_from_mapping(mapping)
assert isinstance(variable, ArrayFileVariable)
- assert isinstance(variable.value[0], FileVariable)
- assert isinstance(variable.value[1], FileVariable)
+ assert isinstance(variable.value[0], FileSegment)
+ assert isinstance(variable.value[1], FileSegment)
+
+
+def test_variable_cannot_be_larger_than_5_kb():
+ with pytest.raises(VariableError):
+ factory.build_variable_from_mapping(
+ {
+ 'id': str(uuid4()),
+ 'value_type': 'string',
+ 'name': 'test_text',
+ 'value': 'a' * 1024 * 6,
+ }
+ )
diff --git a/api/tests/unit_tests/core/app/segments/test_variables.py b/api/tests/unit_tests/core/app/segments/test_variables.py
index e3f513971a..1f45c15f87 100644
--- a/api/tests/unit_tests/core/app/segments/test_variables.py
+++ b/api/tests/unit_tests/core/app/segments/test_variables.py
@@ -54,20 +54,10 @@ def test_object_variable_to_object():
var = ObjectVariable(
name='object',
value={
- 'key1': ObjectVariable(
- name='object',
- value={
- 'key2': StringVariable(name='key2', value='value2'),
- },
- ),
- 'key2': ArrayAnyVariable(
- name='array',
- value=[
- StringVariable(name='key5_1', value='value5_1'),
- IntegerVariable(name='key5_2', value=42),
- ObjectVariable(name='key5_3', value={}),
- ],
- ),
+ 'key1': {
+ 'key2': 'value2',
+ },
+ 'key2': ['value5_1', 42, {}],
},
)
diff --git a/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py
index fd284488b5..d24cd4aae9 100644
--- a/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py
+++ b/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py
@@ -2,8 +2,8 @@ from unittest.mock import MagicMock
import pytest
-from core.app.app_config.entities import FileExtraConfig, ModelConfigEntity
-from core.file.file_obj import FileTransferMethod, FileType, FileVar
+from core.app.app_config.entities import ModelConfigEntity
+from core.file.file_obj import FileExtraConfig, FileTransferMethod, FileType, FileVar
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessageRole, UserPromptMessage
from core.prompt.advanced_prompt_transform import AdvancedPromptTransform
diff --git a/api/tests/unit_tests/core/workflow/nodes/test_variable_assigner.py b/api/tests/unit_tests/core/workflow/nodes/test_variable_assigner.py
new file mode 100644
index 0000000000..8706ba05ce
--- /dev/null
+++ b/api/tests/unit_tests/core/workflow/nodes/test_variable_assigner.py
@@ -0,0 +1,150 @@
+from unittest import mock
+from uuid import uuid4
+
+from core.app.entities.app_invoke_entities import InvokeFrom
+from core.app.segments import ArrayStringVariable, StringVariable
+from core.workflow.entities.node_entities import SystemVariable
+from core.workflow.entities.variable_pool import VariablePool
+from core.workflow.nodes.base_node import UserFrom
+from core.workflow.nodes.variable_assigner import VariableAssignerNode, WriteMode
+
+DEFAULT_NODE_ID = 'node_id'
+
+
+def test_overwrite_string_variable():
+ conversation_variable = StringVariable(
+ id=str(uuid4()),
+ name='test_conversation_variable',
+ value='the first value',
+ )
+
+ input_variable = StringVariable(
+ id=str(uuid4()),
+ name='test_string_variable',
+ value='the second value',
+ )
+
+ node = VariableAssignerNode(
+ tenant_id='tenant_id',
+ app_id='app_id',
+ workflow_id='workflow_id',
+ user_id='user_id',
+ user_from=UserFrom.ACCOUNT,
+ invoke_from=InvokeFrom.DEBUGGER,
+ config={
+ 'id': 'node_id',
+ 'data': {
+ 'assigned_variable_selector': ['conversation', conversation_variable.name],
+ 'write_mode': WriteMode.OVER_WRITE.value,
+ 'input_variable_selector': [DEFAULT_NODE_ID, input_variable.name],
+ },
+ },
+ )
+
+ variable_pool = VariablePool(
+ system_variables={SystemVariable.CONVERSATION_ID: 'conversation_id'},
+ user_inputs={},
+ environment_variables=[],
+ conversation_variables=[conversation_variable],
+ )
+ variable_pool.add(
+ [DEFAULT_NODE_ID, input_variable.name],
+ input_variable,
+ )
+
+ with mock.patch('core.workflow.nodes.variable_assigner.update_conversation_variable') as mock_run:
+ node.run(variable_pool)
+ mock_run.assert_called_once()
+
+ got = variable_pool.get(['conversation', conversation_variable.name])
+ assert got is not None
+ assert got.value == 'the second value'
+ assert got.to_object() == 'the second value'
+
+
+def test_append_variable_to_array():
+ conversation_variable = ArrayStringVariable(
+ id=str(uuid4()),
+ name='test_conversation_variable',
+ value=['the first value'],
+ )
+
+ input_variable = StringVariable(
+ id=str(uuid4()),
+ name='test_string_variable',
+ value='the second value',
+ )
+
+ node = VariableAssignerNode(
+ tenant_id='tenant_id',
+ app_id='app_id',
+ workflow_id='workflow_id',
+ user_id='user_id',
+ user_from=UserFrom.ACCOUNT,
+ invoke_from=InvokeFrom.DEBUGGER,
+ config={
+ 'id': 'node_id',
+ 'data': {
+ 'assigned_variable_selector': ['conversation', conversation_variable.name],
+ 'write_mode': WriteMode.APPEND.value,
+ 'input_variable_selector': [DEFAULT_NODE_ID, input_variable.name],
+ },
+ },
+ )
+
+ variable_pool = VariablePool(
+ system_variables={SystemVariable.CONVERSATION_ID: 'conversation_id'},
+ user_inputs={},
+ environment_variables=[],
+ conversation_variables=[conversation_variable],
+ )
+ variable_pool.add(
+ [DEFAULT_NODE_ID, input_variable.name],
+ input_variable,
+ )
+
+ with mock.patch('core.workflow.nodes.variable_assigner.update_conversation_variable') as mock_run:
+ node.run(variable_pool)
+ mock_run.assert_called_once()
+
+ got = variable_pool.get(['conversation', conversation_variable.name])
+ assert got is not None
+ assert got.to_object() == ['the first value', 'the second value']
+
+
+def test_clear_array():
+ conversation_variable = ArrayStringVariable(
+ id=str(uuid4()),
+ name='test_conversation_variable',
+ value=['the first value'],
+ )
+
+ node = VariableAssignerNode(
+ tenant_id='tenant_id',
+ app_id='app_id',
+ workflow_id='workflow_id',
+ user_id='user_id',
+ user_from=UserFrom.ACCOUNT,
+ invoke_from=InvokeFrom.DEBUGGER,
+ config={
+ 'id': 'node_id',
+ 'data': {
+ 'assigned_variable_selector': ['conversation', conversation_variable.name],
+ 'write_mode': WriteMode.CLEAR.value,
+ 'input_variable_selector': [],
+ },
+ },
+ )
+
+ variable_pool = VariablePool(
+ system_variables={SystemVariable.CONVERSATION_ID: 'conversation_id'},
+ user_inputs={},
+ environment_variables=[],
+ conversation_variables=[conversation_variable],
+ )
+
+ node.run(variable_pool)
+
+ got = variable_pool.get(['conversation', conversation_variable.name])
+ assert got is not None
+ assert got.to_object() == []
diff --git a/api/tests/unit_tests/models/test_conversation_variable.py b/api/tests/unit_tests/models/test_conversation_variable.py
new file mode 100644
index 0000000000..9e16010d7e
--- /dev/null
+++ b/api/tests/unit_tests/models/test_conversation_variable.py
@@ -0,0 +1,25 @@
+from uuid import uuid4
+
+from core.app.segments import SegmentType, factory
+from models import ConversationVariable
+
+
+def test_from_variable_and_to_variable():
+ variable = factory.build_variable_from_mapping(
+ {
+ 'id': str(uuid4()),
+ 'name': 'name',
+ 'value_type': SegmentType.OBJECT,
+ 'value': {
+ 'key': {
+ 'key': 'value',
+ }
+ },
+ }
+ )
+
+ conversation_variable = ConversationVariable.from_variable(
+ app_id='app_id', conversation_id='conversation_id', variable=variable
+ )
+
+ assert conversation_variable.to_variable() == variable
diff --git a/api/tests/unit_tests/services/workflow/test_workflow_converter.py b/api/tests/unit_tests/services/workflow/test_workflow_converter.py
index 29d55df8c3..f589cd2097 100644
--- a/api/tests/unit_tests/services/workflow/test_workflow_converter.py
+++ b/api/tests/unit_tests/services/workflow/test_workflow_converter.py
@@ -208,7 +208,8 @@ def test__convert_to_knowledge_retrieval_node_for_chatbot():
reranking_model={
'reranking_provider_name': 'cohere',
'reranking_model_name': 'rerank-english-v2.0'
- }
+ },
+ reranking_enabled=True
)
)
@@ -251,7 +252,8 @@ def test__convert_to_knowledge_retrieval_node_for_workflow_app():
reranking_model={
'reranking_provider_name': 'cohere',
'reranking_model_name': 'rerank-english-v2.0'
- }
+ },
+ reranking_enabled=True
)
)
diff --git a/dev/pytest/pytest_model_runtime.sh b/dev/pytest/pytest_model_runtime.sh
index 2e113346c7..aba13292ab 100755
--- a/dev/pytest/pytest_model_runtime.sh
+++ b/dev/pytest/pytest_model_runtime.sh
@@ -5,4 +5,6 @@ pytest api/tests/integration_tests/model_runtime/anthropic \
api/tests/integration_tests/model_runtime/azure_openai \
api/tests/integration_tests/model_runtime/openai api/tests/integration_tests/model_runtime/chatglm \
api/tests/integration_tests/model_runtime/google api/tests/integration_tests/model_runtime/xinference \
- api/tests/integration_tests/model_runtime/huggingface_hub/test_llm.py
+ api/tests/integration_tests/model_runtime/huggingface_hub/test_llm.py \
+ api/tests/integration_tests/model_runtime/upstage
+
diff --git a/docker-legacy/docker-compose.middleware.yaml b/docker-legacy/docker-compose.middleware.yaml
index 38760901b1..fadbb3e608 100644
--- a/docker-legacy/docker-compose.middleware.yaml
+++ b/docker-legacy/docker-compose.middleware.yaml
@@ -73,7 +73,7 @@ services:
# ssrf_proxy server
# for more information, please refer to
- # https://docs.dify.ai/getting-started/install-self-hosted/install-faq#id-16.-why-is-ssrf_proxy-needed
+ # https://docs.dify.ai/learn-more/faq/self-host-faq#id-18.-why-is-ssrf_proxy-needed
ssrf_proxy:
image: ubuntu/squid:latest
restart: always
diff --git a/docker-legacy/docker-compose.yaml b/docker-legacy/docker-compose.yaml
index 9d7039df2c..807946f3fe 100644
--- a/docker-legacy/docker-compose.yaml
+++ b/docker-legacy/docker-compose.yaml
@@ -2,7 +2,7 @@ version: '3'
services:
# API service
api:
- image: langgenius/dify-api:0.6.15
+ image: langgenius/dify-api:0.6.16
restart: always
environment:
# Startup mode, 'api' starts the API server.
@@ -224,7 +224,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
- image: langgenius/dify-api:0.6.15
+ image: langgenius/dify-api:0.6.16
restart: always
environment:
CONSOLE_WEB_URL: ''
@@ -390,7 +390,7 @@ services:
# Frontend web application.
web:
- image: langgenius/dify-web:0.6.15
+ image: langgenius/dify-web:0.6.16
restart: always
environment:
# The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
@@ -494,7 +494,7 @@ services:
# ssrf_proxy server
# for more information, please refer to
- # https://docs.dify.ai/getting-started/install-self-hosted/install-faq#id-16.-why-is-ssrf_proxy-needed
+ # https://docs.dify.ai/learn-more/faq/self-host-faq#id-18.-why-is-ssrf_proxy-needed
ssrf_proxy:
image: ubuntu/squid:latest
restart: always
diff --git a/docker/.env.example b/docker/.env.example
index 2f8ec358f4..6fee8b4b3c 100644
--- a/docker/.env.example
+++ b/docker/.env.example
@@ -124,10 +124,36 @@ GUNICORN_TIMEOUT=360
# The number of Celery workers. The default is 1, and can be set as needed.
CELERY_WORKER_AMOUNT=
+# Flag indicating whether to enable autoscaling of Celery workers.
+#
+# Autoscaling is useful when tasks are CPU intensive and can be dynamically
+# allocated and deallocated based on the workload.
+#
+# When autoscaling is enabled, the maximum and minimum number of workers can
+# be specified. The autoscaling algorithm will dynamically adjust the number
+# of workers within the specified range.
+#
+# Default is false (i.e., autoscaling is disabled).
+#
+# Example:
+# CELERY_AUTO_SCALE=true
+CELERY_AUTO_SCALE=false
+
+# The maximum number of Celery workers that can be autoscaled.
+# This is optional and only used when autoscaling is enabled.
+# Default is not set.
+CELERY_MAX_WORKERS=
+
+# The minimum number of Celery workers that can be autoscaled.
+# This is optional and only used when autoscaling is enabled.
+# Default is not set.
+CELERY_MIN_WORKERS=
+
# API Tool configuration
API_TOOL_DEFAULT_CONNECT_TIMEOUT=10
API_TOOL_DEFAULT_READ_TIMEOUT=60
+
# ------------------------------
# Database Configuration
# The database uses PostgreSQL. Please use the public schema.
@@ -147,6 +173,36 @@ SQLALCHEMY_POOL_RECYCLE=3600
# Whether to print SQL, default is false.
SQLALCHEMY_ECHO=false
+# Maximum number of connections to the database
+# Default is 100
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS
+POSTGRES_MAX_CONNECTIONS=100
+
+# Sets the amount of shared memory used for postgres's shared buffers.
+# Default is 128MB
+# Recommended value: 25% of available memory
+# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-SHARED-BUFFERS
+POSTGRES_SHARED_BUFFERS=128MB
+
+# Sets the amount of memory used by each database worker for working space.
+# Default is 4MB
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
+POSTGRES_WORK_MEM=4MB
+
+# Sets the amount of memory reserved for maintenance activities.
+# Default is 64MB
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM
+POSTGRES_MAINTENANCE_WORK_MEM=64MB
+
+# Sets the planner's assumption about the effective cache size.
+# Default is 4096MB
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE
+POSTGRES_EFFECTIVE_CACHE_SIZE=4096MB
+
# ------------------------------
# Redis Configuration
# This Redis configuration is used for caching and for pub/sub during conversation.
@@ -601,6 +657,23 @@ NGINX_KEEPALIVE_TIMEOUT=65
NGINX_PROXY_READ_TIMEOUT=3600s
NGINX_PROXY_SEND_TIMEOUT=3600s
+# Set true to accept requests for /.well-known/acme-challenge/
+NGINX_ENABLE_CERTBOT_CHALLENGE=false
+
+# ------------------------------
+# Certbot Configuration
+# ------------------------------
+
+# Email address (required to get certificates from Let's Encrypt)
+CERTBOT_EMAIL=your_email@example.com
+
+# Domain name
+CERTBOT_DOMAIN=your_domain.com
+
+# certbot command options
+# e.g.: --force-renewal --dry-run --test-cert --debug
+CERTBOT_OPTIONS=
+
# ------------------------------
# Environment Variables for SSRF Proxy
# ------------------------------
@@ -611,8 +684,9 @@ SSRF_SANDBOX_HOST=sandbox
# ------------------------------
# docker env var for specifying vector db type at startup
-# (based on the vector db type, the corresponding docker
+# (based on the vector db type, the corresponding docker
# compose profile will be used)
+# if you want to use unstructured, add ',unstructured' to the end
# ------------------------------
COMPOSE_PROFILES=${VECTOR_STORE:-weaviate}
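The new Celery autoscaling knobs above can be sketched as a mapping onto Celery's standard CLI. This is an assumed mapping for illustration only (the real api entrypoint may assemble the command differently, and `app.celery` is a placeholder module path):

```shell
# Example values (assumptions for illustration, not defaults):
CELERY_AUTO_SCALE=true
CELERY_MAX_WORKERS=8
CELERY_MIN_WORKERS=2

if [ "$CELERY_AUTO_SCALE" = "true" ] && [ -n "$CELERY_MAX_WORKERS" ] && [ -n "$CELERY_MIN_WORKERS" ]; then
  # --autoscale=MAX,MIN lets Celery grow and shrink the worker pool on demand
  CONCURRENCY_OPT="--autoscale=${CELERY_MAX_WORKERS},${CELERY_MIN_WORKERS}"
else
  # otherwise fall back to a fixed pool size
  CONCURRENCY_OPT="--concurrency=${CELERY_WORKER_AMOUNT:-1}"
fi

echo "celery -A app.celery worker $CONCURRENCY_OPT"
```

With the values above this yields `--autoscale=8,2`, i.e. at most 8 and at least 2 pool processes.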
diff --git a/docker/README.md b/docker/README.md
index 6bff8bc314..1223a58024 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -3,42 +3,52 @@
Welcome to the new `docker` directory for deploying Dify using Docker Compose. This README outlines the updates, deployment instructions, and migration details for existing users.
### What's Updated
+
+- **Certbot Container**: `docker-compose.yaml` now contains `certbot` for managing SSL certificates. This container automatically renews certificates and ensures secure HTTPS connections.
+  For more information, refer to `docker/certbot/README.md`.
+
- **Persistent Environment Variables**: Environment variables are now managed through a `.env` file, ensuring that your configurations persist across deployments.
- > What is `.env`?
- > The `.env` file is a crucial component in Docker and Docker Compose environments, serving as a centralized configuration file where you can define environment variables that are accessible to the containers at runtime. This file simplifies the management of environment settings across different stages of development, testing, and production, providing consistency and ease of configuration to deployments.
+ > What is `.env`?
+ > The `.env` file is a crucial component in Docker and Docker Compose environments, serving as a centralized configuration file where you can define environment variables that are accessible to the containers at runtime. This file simplifies the management of environment settings across different stages of development, testing, and production, providing consistency and ease of configuration to deployments.
- **Unified Vector Database Services**: All vector database services are now managed from a single Docker Compose file `docker-compose.yaml`. You can switch between different vector databases by setting the `VECTOR_STORE` environment variable in your `.env` file.
- **Mandatory .env File**: A `.env` file is now required to run `docker compose up`. This file is crucial for configuring your deployment and for any custom settings to persist through upgrades.
- **Legacy Support**: Previous deployment files are now located in the `docker-legacy` directory and will no longer be maintained.
### How to Deploy Dify with `docker-compose.yaml`
+
1. **Prerequisites**: Ensure Docker and Docker Compose are installed on your system.
2. **Environment Setup**:
- - Navigate to the `docker` directory.
- - Copy the `.env.example` file to a new file named `.env` by running `cp .env.example .env`.
- - Customize the `.env` file as needed. Refer to the `.env.example` file for detailed configuration options.
+ - Navigate to the `docker` directory.
+ - Copy the `.env.example` file to a new file named `.env` by running `cp .env.example .env`.
+ - Customize the `.env` file as needed. Refer to the `.env.example` file for detailed configuration options.
3. **Running the Services**:
- - Execute `docker compose up` from the `docker` directory to start the services.
- - To specify a vector database, set the `VECTOR_store` variable in your `.env` file to your desired vector database service, such as `milvus`, `weaviate`, or `opensearch`.
+ - Execute `docker compose up` from the `docker` directory to start the services.
+ - To specify a vector database, set the `VECTOR_STORE` variable in your `.env` file to your desired vector database service, such as `milvus`, `weaviate`, or `opensearch`.
+4. **SSL Certificate Setup**:
+   - Refer to `docker/certbot/README.md` to set up SSL certificates using Certbot.
### How to Deploy Middleware for Developing Dify
+
1. **Middleware Setup**:
- - Use the `docker-compose.middleware.yaml` for setting up essential middleware services like databases and caches.
- - Navigate to the `docker` directory.
- - Ensure the `middleware.env` file is created by running `cp middleware.env.example middleware.env` (refer to the `middleware.env.example` file).
+ - Use the `docker-compose.middleware.yaml` for setting up essential middleware services like databases and caches.
+ - Navigate to the `docker` directory.
+ - Ensure the `middleware.env` file is created by running `cp middleware.env.example middleware.env` (refer to the `middleware.env.example` file).
2. **Running Middleware Services**:
- - Execute `docker-compose -f docker-compose.middleware.yaml up -d` to start the middleware services.
+ - Execute `docker-compose -f docker-compose.middleware.yaml up -d` to start the middleware services.
### Migration for Existing Users
+
For users migrating from the `docker-legacy` setup:
+
1. **Review Changes**: Familiarize yourself with the new `.env` configuration and Docker Compose setup.
2. **Transfer Customizations**:
- - If you have customized configurations such as `docker-compose.yaml`, `ssrf_proxy/squid.conf`, or `nginx/conf.d/default.conf`, you will need to reflect these changes in the `.env` file you create.
+ - If you have customized configurations such as `docker-compose.yaml`, `ssrf_proxy/squid.conf`, or `nginx/conf.d/default.conf`, you will need to reflect these changes in the `.env` file you create.
3. **Data Migration**:
- - Ensure that data from services like databases and caches is backed up and migrated appropriately to the new structure if necessary.
+ - Ensure that data from services like databases and caches is backed up and migrated appropriately to the new structure if necessary.
-### Overview of `.env`
+### Overview of `.env`
#### Key Modules and Customization
@@ -47,42 +57,43 @@ For users migrating from the `docker-legacy` setup:
- **API and Web Services**: Users can define URLs and other settings that affect how the API and web frontends operate.
#### Other notable variables
+
The `.env.example` file provided in the Docker setup is extensive and covers a wide range of configuration options. It is structured into several sections, each pertaining to different aspects of the application and its services. Here are some of the key sections and variables:
1. **Common Variables**:
- - `CONSOLE_API_URL`, `SERVICE_API_URL`: URLs for different API services.
- - `APP_WEB_URL`: Frontend application URL.
- - `FILES_URL`: Base URL for file downloads and previews.
+ - `CONSOLE_API_URL`, `SERVICE_API_URL`: URLs for different API services.
+ - `APP_WEB_URL`: Frontend application URL.
+ - `FILES_URL`: Base URL for file downloads and previews.
2. **Server Configuration**:
- - `LOG_LEVEL`, `DEBUG`, `FLASK_DEBUG`: Logging and debug settings.
- - `SECRET_KEY`: A key for encrypting session cookies and other sensitive data.
+ - `LOG_LEVEL`, `DEBUG`, `FLASK_DEBUG`: Logging and debug settings.
+ - `SECRET_KEY`: A key for encrypting session cookies and other sensitive data.
3. **Database Configuration**:
- - `DB_USERNAME`, `DB_PASSWORD`, `DB_HOST`, `DB_PORT`, `DB_DATABASE`: PostgreSQL database credentials and connection details.
+ - `DB_USERNAME`, `DB_PASSWORD`, `DB_HOST`, `DB_PORT`, `DB_DATABASE`: PostgreSQL database credentials and connection details.
4. **Redis Configuration**:
- - `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`: Redis server connection settings.
+ - `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`: Redis server connection settings.
5. **Celery Configuration**:
- - `CELERY_BROKER_URL`: Configuration for Celery message broker.
+ - `CELERY_BROKER_URL`: Configuration for Celery message broker.
6. **Storage Configuration**:
- - `STORAGE_TYPE`, `S3_BUCKET_NAME`, `AZURE_BLOB_ACCOUNT_NAME`: Settings for file storage options like local, S3, Azure Blob, etc.
+ - `STORAGE_TYPE`, `S3_BUCKET_NAME`, `AZURE_BLOB_ACCOUNT_NAME`: Settings for file storage options like local, S3, Azure Blob, etc.
7. **Vector Database Configuration**:
- - `VECTOR_STORE`: Type of vector database (e.g., `weaviate`, `milvus`).
- - Specific settings for each vector store like `WEAVIATE_ENDPOINT`, `MILVUS_HOST`.
+ - `VECTOR_STORE`: Type of vector database (e.g., `weaviate`, `milvus`).
+ - Specific settings for each vector store like `WEAVIATE_ENDPOINT`, `MILVUS_HOST`.
8. **CORS Configuration**:
- - `WEB_API_CORS_ALLOW_ORIGINS`, `CONSOLE_CORS_ALLOW_ORIGINS`: Settings for cross-origin resource sharing.
+ - `WEB_API_CORS_ALLOW_ORIGINS`, `CONSOLE_CORS_ALLOW_ORIGINS`: Settings for cross-origin resource sharing.
9. **Other Service-Specific Environment Variables**:
- - Each service like `nginx`, `redis`, `db`, and vector databases have specific environment variables that are directly referenced in the `docker-compose.yaml`.
-
+ - Each service like `nginx`, `redis`, `db`, and vector databases have specific environment variables that are directly referenced in the `docker-compose.yaml`.
### Additional Information
+
- **Continuous Improvement Phase**: We are actively seeking feedback from the community to refine and enhance the deployment process. As more users adopt this new method, we will continue to make improvements based on your experiences and suggestions.
- **Support**: For detailed configuration options and environment variable settings, refer to the `.env.example` file and the Docker Compose configuration files in the `docker` directory.
-This README aims to guide you through the deployment process using the new Docker Compose setup. For any issues or further assistance, please refer to the official documentation or contact support.
\ No newline at end of file
+This README aims to guide you through the deployment process using the new Docker Compose setup. For any issues or further assistance, please refer to the official documentation or contact support.
diff --git a/docker/certbot/README.md b/docker/certbot/README.md
new file mode 100644
index 0000000000..3fab2f4bb7
--- /dev/null
+++ b/docker/certbot/README.md
@@ -0,0 +1,76 @@
+# Launching new servers with SSL certificates
+
+## Short description
+
+Docker Compose certbot configuration with backward compatibility (the certbot container is optional).
+Use `docker-compose --profile certbot up` to enable this feature.
+
+## The simplest way for launching new servers with SSL certificates
+
+1. Get Let's Encrypt certificates
+ set `.env` values
+ ```properties
+ NGINX_SSL_CERT_FILENAME=fullchain.pem
+ NGINX_SSL_CERT_KEY_FILENAME=privkey.pem
+ NGINX_ENABLE_CERTBOT_CHALLENGE=true
+ CERTBOT_DOMAIN=your_domain.com
+ CERTBOT_EMAIL=example@your_domain.com
+ ```
+   execute command:
+ ```shell
+ sudo docker network prune
+ sudo docker-compose --profile certbot up --force-recreate -d
+ ```
+   then, after the containers have launched:
+ ```shell
+ sudo docker-compose exec -it certbot /bin/sh /update-cert.sh
+ ```
+2. Edit the `.env` file and run `sudo docker-compose --profile certbot up` again.
+   additionally set the `.env` value:
+ ```properties
+ NGINX_HTTPS_ENABLED=true
+ ```
+   execute command:
+ ```shell
+ sudo docker-compose --profile certbot up -d --no-deps --force-recreate nginx
+ ```
+   Then you can access your server with HTTPS.
+ [https://your_domain.com](https://your_domain.com)
+
+## SSL certificates renewal
+
+To renew SSL certificates, execute the commands below:
+
+```shell
+sudo docker-compose exec -it certbot /bin/sh /update-cert.sh
+sudo docker-compose exec nginx nginx -s reload
+```
+
+## Options for certbot
+
+The `CERTBOT_OPTIONS` key can be helpful for testing, e.g.:
+
+```properties
+CERTBOT_OPTIONS=--dry-run
+```
+
+To apply changes to `CERTBOT_OPTIONS`, regenerate the certbot container before updating the certificates.
+
+```shell
+sudo docker-compose --profile certbot up -d --no-deps --force-recreate certbot
+sudo docker-compose exec -it certbot /bin/sh /update-cert.sh
+```
+
+Then, reload the nginx container if necessary.
+
+```shell
+sudo docker-compose exec nginx nginx -s reload
+```
+
+## For legacy servers
+
+To keep using the certificate files in `nginx/ssl` as before, simply launch the containers WITHOUT the `--profile certbot` option.
+
+```shell
+sudo docker-compose up -d
+```
\ No newline at end of file
diff --git a/docker/certbot/docker-entrypoint.sh b/docker/certbot/docker-entrypoint.sh
new file mode 100755
index 0000000000..a70ecd8254
--- /dev/null
+++ b/docker/certbot/docker-entrypoint.sh
@@ -0,0 +1,30 @@
+#!/bin/sh
+set -e
+
+printf '%s\n' "Docker entrypoint script is running"
+
+printf '\n%s\n' "Checking specific environment variables:"
+printf '%s\n' "CERTBOT_EMAIL: ${CERTBOT_EMAIL:-Not set}"
+printf '%s\n' "CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-Not set}"
+printf '%s\n' "CERTBOT_OPTIONS: ${CERTBOT_OPTIONS:-Not set}"
+
+printf '\n%s\n' "Checking mounted directories:"
+for dir in "/etc/letsencrypt" "/var/www/html" "/var/log/letsencrypt"; do
+ if [ -d "$dir" ]; then
+ printf '%s\n' "$dir exists. Contents:"
+ ls -la "$dir"
+ else
+ printf '%s\n' "$dir does not exist."
+ fi
+done
+
+printf '\n%s\n' "Generating update-cert.sh from template"
+sed -e "s|\${CERTBOT_EMAIL}|$CERTBOT_EMAIL|g" \
+ -e "s|\${CERTBOT_DOMAIN}|$CERTBOT_DOMAIN|g" \
+ -e "s|\${CERTBOT_OPTIONS}|$CERTBOT_OPTIONS|g" \
+ /update-cert.template.txt > /update-cert.sh
+
+chmod +x /update-cert.sh
+
+printf '\n%s\n' "Executing command:"
+printf '%s\n' "$@"
+exec "$@"
diff --git a/docker/certbot/update-cert.template.txt b/docker/certbot/update-cert.template.txt
new file mode 100755
index 0000000000..16786a192e
--- /dev/null
+++ b/docker/certbot/update-cert.template.txt
@@ -0,0 +1,19 @@
+#!/bin/bash
+set -e
+
+DOMAIN="${CERTBOT_DOMAIN}"
+EMAIL="${CERTBOT_EMAIL}"
+OPTIONS="${CERTBOT_OPTIONS}"
+CERT_NAME="${DOMAIN}" # use the domain name as the certificate name
+
+# Check if the certificate already exists
+if [ -f "/etc/letsencrypt/renewal/${CERT_NAME}.conf" ]; then
+ echo "Certificate exists. Attempting to renew..."
+ certbot renew --noninteractive --cert-name ${CERT_NAME} --webroot --webroot-path=/var/www/html --email ${EMAIL} --agree-tos --no-eff-email ${OPTIONS}
+else
+ echo "Certificate does not exist. Obtaining a new certificate..."
+ certbot certonly --noninteractive --webroot --webroot-path=/var/www/html --email ${EMAIL} --agree-tos --no-eff-email -d ${DOMAIN} ${OPTIONS}
+fi
+echo "Certificate operation successful"
+# Note: Nginx reload should be handled outside this container
+echo "Remember to reload Nginx to apply any certificate changes."
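The entrypoint renders this template by substituting the `${...}` placeholders with `sed`. A minimal, self-contained reproduction of that step (the email, domain, and template string below are example values, not real configuration):

```shell
# Example values substituted into the template:
CERTBOT_EMAIL="ops@example.com"
CERTBOT_DOMAIN="example.com"

# Single quotes keep ${...} literal, standing in for the template file.
TEMPLATE='certbot certonly --email ${CERTBOT_EMAIL} -d ${CERTBOT_DOMAIN}'

# Same substitution pattern as docker-entrypoint.sh: \${VAR} matches the
# literal placeholder, and the shell expands the replacement value.
RENDERED=$(printf '%s' "$TEMPLATE" | sed \
  -e "s|\${CERTBOT_EMAIL}|$CERTBOT_EMAIL|g" \
  -e "s|\${CERTBOT_DOMAIN}|$CERTBOT_DOMAIN|g")

echo "$RENDERED"
```

One design consequence of this approach: the substituted values are baked in when the container starts, which is why `CERTBOT_OPTIONS` changes require recreating the certbot container.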
diff --git a/docker/docker-compose.middleware.yaml b/docker/docker-compose.middleware.yaml
index 6ab003ceab..3aa84d009e 100644
--- a/docker/docker-compose.middleware.yaml
+++ b/docker/docker-compose.middleware.yaml
@@ -9,6 +9,12 @@ services:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456}
POSTGRES_DB: ${POSTGRES_DB:-dify}
PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
+ command: >
+ postgres -c 'max_connections=${POSTGRES_MAX_CONNECTIONS:-100}'
+ -c 'shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}'
+ -c 'work_mem=${POSTGRES_WORK_MEM:-4MB}'
+ -c 'maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}'
+ -c 'effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}'
volumes:
- ./volumes/db/data:/var/lib/postgresql/data
ports:
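The `command:` override above relies on shell-style defaults that Compose resolves from `.env`. The same `${VAR:-default}` semantics can be checked in a plain shell (the 512MB value below is just an example):

```shell
# An unset variable falls back to the default after ':-' ...
unset POSTGRES_MAX_CONNECTIONS
MAX_CONN="max_connections=${POSTGRES_MAX_CONNECTIONS:-100}"
echo "$MAX_CONN"

# ...while a set variable overrides it.
POSTGRES_SHARED_BUFFERS=512MB
BUFFERS="shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}"
echo "$BUFFERS"
```

This is why leaving the new Postgres variables unset in `middleware.env` keeps the previous behavior unchanged.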
diff --git a/docker/docker-compose.yaml b/docker/docker-compose.yaml
index a9b7b8acb0..2b10fbc2cc 100644
--- a/docker/docker-compose.yaml
+++ b/docker/docker-compose.yaml
@@ -22,6 +22,9 @@ x-shared-env: &shared-api-worker-env
CELERY_WORKER_CLASS: ${CELERY_WORKER_CLASS:-}
GUNICORN_TIMEOUT: ${GUNICORN_TIMEOUT:-360}
CELERY_WORKER_AMOUNT: ${CELERY_WORKER_AMOUNT:-}
+ CELERY_AUTO_SCALE: ${CELERY_AUTO_SCALE:-false}
+ CELERY_MAX_WORKERS: ${CELERY_MAX_WORKERS:-}
+ CELERY_MIN_WORKERS: ${CELERY_MIN_WORKERS:-}
API_TOOL_DEFAULT_CONNECT_TIMEOUT: ${API_TOOL_DEFAULT_CONNECT_TIMEOUT:-10}
API_TOOL_DEFAULT_READ_TIMEOUT: ${API_TOOL_DEFAULT_READ_TIMEOUT:-60}
DB_USERNAME: ${DB_USERNAME:-postgres}
@@ -32,6 +35,11 @@ x-shared-env: &shared-api-worker-env
SQLALCHEMY_POOL_SIZE: ${SQLALCHEMY_POOL_SIZE:-30}
SQLALCHEMY_POOL_RECYCLE: ${SQLALCHEMY_POOL_RECYCLE:-3600}
SQLALCHEMY_ECHO: ${SQLALCHEMY_ECHO:-false}
+ POSTGRES_MAX_CONNECTIONS: ${POSTGRES_MAX_CONNECTIONS:-100}
+ POSTGRES_SHARED_BUFFERS: ${POSTGRES_SHARED_BUFFERS:-128MB}
+ POSTGRES_WORK_MEM: ${POSTGRES_WORK_MEM:-4MB}
+ POSTGRES_MAINTENANCE_WORK_MEM: ${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}
+ POSTGRES_EFFECTIVE_CACHE_SIZE: ${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_USERNAME: ${REDIS_USERNAME:-}
@@ -179,7 +187,7 @@ x-shared-env: &shared-api-worker-env
services:
# API service
api:
- image: langgenius/dify-api:0.6.15
+ image: langgenius/dify-api:0.6.16
restart: always
environment:
# Use the shared environment variables.
@@ -199,7 +207,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
- image: langgenius/dify-api:0.6.15
+ image: langgenius/dify-api:0.6.16
restart: always
environment:
# Use the shared environment variables.
@@ -218,12 +226,13 @@ services:
# Frontend web application.
web:
- image: langgenius/dify-web:0.6.15
+ image: langgenius/dify-web:0.6.16
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
APP_API_URL: ${APP_API_URL:-}
SENTRY_DSN: ${WEB_SENTRY_DSN:-}
+ NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0}
# The postgres database.
db:
@@ -234,6 +243,12 @@ services:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456}
POSTGRES_DB: ${POSTGRES_DB:-dify}
PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
+ command: >
+ postgres -c 'max_connections=${POSTGRES_MAX_CONNECTIONS:-100}'
+ -c 'shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}'
+ -c 'work_mem=${POSTGRES_WORK_MEM:-4MB}'
+ -c 'maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}'
+ -c 'effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}'
volumes:
- ./volumes/db/data:/var/lib/postgresql/data
healthcheck:
@@ -276,7 +291,7 @@ services:
# ssrf_proxy server
# for more information, please refer to
- # https://docs.dify.ai/getting-started/install-self-hosted/install-faq#id-16.-why-is-ssrf_proxy-needed
+ # https://docs.dify.ai/learn-more/faq/self-host-faq#id-18.-why-is-ssrf_proxy-needed
ssrf_proxy:
image: ubuntu/squid:latest
restart: always
@@ -295,6 +310,26 @@ services:
- ssrf_proxy_network
- default
+ # Certbot service
+ # use `docker-compose --profile certbot up` to start the certbot service.
+ certbot:
+ image: certbot/certbot
+ profiles:
+ - certbot
+ volumes:
+ - ./volumes/certbot/conf:/etc/letsencrypt
+ - ./volumes/certbot/www:/var/www/html
+ - ./volumes/certbot/logs:/var/log/letsencrypt
+ - ./volumes/certbot/conf/live:/etc/letsencrypt/live
+ - ./certbot/update-cert.template.txt:/update-cert.template.txt
+ - ./certbot/docker-entrypoint.sh:/docker-entrypoint.sh
+ environment:
+ - CERTBOT_EMAIL=${CERTBOT_EMAIL}
+ - CERTBOT_DOMAIN=${CERTBOT_DOMAIN}
+ - CERTBOT_OPTIONS=${CERTBOT_OPTIONS:-}
+ entrypoint: [ "/docker-entrypoint.sh" ]
+ command: ["tail", "-f", "/dev/null"]
+
# The nginx reverse proxy.
# used for reverse proxying the API service and Web service.
nginx:
@@ -306,7 +341,10 @@ services:
- ./nginx/https.conf.template:/etc/nginx/https.conf.template
- ./nginx/conf.d:/etc/nginx/conf.d
- ./nginx/docker-entrypoint.sh:/docker-entrypoint-mount.sh
- - ./nginx/ssl:/etc/ssl
+ - ./nginx/ssl:/etc/ssl # cert dir (legacy)
+ - ./volumes/certbot/conf/live:/etc/letsencrypt/live # cert dir (with certbot container)
+ - ./volumes/certbot/conf:/etc/letsencrypt
+ - ./volumes/certbot/www:/var/www/html
entrypoint: [ "sh", "-c", "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]
environment:
NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}
@@ -323,6 +361,8 @@ services:
NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}
NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}
NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}
+ NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}
+ CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-}
depends_on:
- api
- web
@@ -390,7 +430,7 @@ services:
# pgvecto-rs vector store
pgvecto-rs:
- image: tensorchord/pgvecto-rs:pg16-v0.2.0
+ image: tensorchord/pgvecto-rs:pg16-v0.3.0
profiles:
- pgvecto-rs
restart: always
@@ -453,7 +493,7 @@ services:
- ./volumes/milvus/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
healthcheck:
- test: ["CMD", "etcdctl", "endpoint", "health"]
+ test: [ "CMD", "etcdctl", "endpoint", "health" ]
interval: 30s
timeout: 20s
retries: 3
@@ -472,7 +512,7 @@ services:
- ./volumes/milvus/minio:/minio_data
command: minio server /minio_data --console-address ":9001"
healthcheck:
- test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
+ test: [ "CMD", "curl", "-f", "http://localhost:9000/minio/health/live" ]
interval: 30s
timeout: 20s
retries: 3
@@ -484,7 +524,7 @@ services:
image: milvusdb/milvus:v2.3.1
profiles:
- milvus
- command: ["milvus", "run", "standalone"]
+ command: [ "milvus", "run", "standalone" ]
environment:
ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}
MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}
@@ -492,7 +532,7 @@ services:
volumes:
- ./volumes/milvus/milvus:/var/lib/milvus
healthcheck:
- test: ["CMD", "curl", "-f", "http://localhost:9091/healthz"]
+ test: [ "CMD", "curl", "-f", "http://localhost:9091/healthz" ]
interval: 30s
start_period: 90s
timeout: 20s
@@ -555,6 +595,16 @@ services:
ports:
- "${MYSCALE_PORT:-8123}:${MYSCALE_PORT:-8123}"
+  # unstructured
+  # (if used, you need to set ETL_TYPE to Unstructured in the api & worker services.)
+ unstructured:
+ image: downloads.unstructured.io/unstructured-io/unstructured-api:latest
+ profiles:
+ - unstructured
+ restart: always
+ volumes:
+ - ./volumes/unstructured:/app/data
+
networks:
# create a network between sandbox, api and ssrf_proxy, and can not access outside.
ssrf_proxy_network:
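The `profiles:` keys added above gate optional services (certbot, unstructured, the vector stores); which ones start is driven by `COMPOSE_PROFILES`. A small sketch of how the default is selected, using the same expansion as the `.env.example` file:

```shell
# COMPOSE_PROFILES defaults to the chosen vector store and can carry
# extra opt-in profiles such as certbot or unstructured.
unset VECTOR_STORE                    # demo: fall back to the default store
export COMPOSE_PROFILES="${VECTOR_STORE:-weaviate},unstructured"
echo "$COMPOSE_PROFILES"

# docker compose would then also start services tagged with these profiles:
# docker compose up -d
```

Alternatively, a single profile can be enabled ad hoc with `docker compose --profile certbot up`, as the certbot comment above notes.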
diff --git a/docker/middleware.env.example b/docker/middleware.env.example
index 750dcfe950..04d0fb5ed3 100644
--- a/docker/middleware.env.example
+++ b/docker/middleware.env.example
@@ -9,6 +9,35 @@ POSTGRES_DB=dify
# postgres data directory
PGDATA=/var/lib/postgresql/data/pgdata
+# Maximum number of connections to the database
+# Default is 100
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS
+POSTGRES_MAX_CONNECTIONS=100
+
+# Sets the amount of shared memory used for postgres's shared buffers.
+# Default is 128MB
+# Recommended value: 25% of available memory
+# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-SHARED-BUFFERS
+POSTGRES_SHARED_BUFFERS=128MB
+
+# Sets the amount of memory used by each database worker for working space.
+# Default is 4MB
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
+POSTGRES_WORK_MEM=4MB
+
+# Sets the amount of memory reserved for maintenance activities.
+# Default is 64MB
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM
+POSTGRES_MAINTENANCE_WORK_MEM=64MB
+
+# Sets the planner's assumption about the effective cache size.
+# Default is 4096MB
+#
+# Reference: https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE
+POSTGRES_EFFECTIVE_CACHE_SIZE=4096MB
# ------------------------------
# Environment Variables for sandbox Service
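The new PostgreSQL variables above are standard server GUCs. A hedged sketch of how such values are typically forwarded to the server as `-c` flags (the flag names are the real PostgreSQL settings; the exact wiring into the compose `command:` is an assumption of this sketch):

```shell
# Read the tuning values; defaults mirror the ones documented above.
POSTGRES_MAX_CONNECTIONS="${POSTGRES_MAX_CONNECTIONS:-100}"
POSTGRES_SHARED_BUFFERS="${POSTGRES_SHARED_BUFFERS:-128MB}"
POSTGRES_WORK_MEM="${POSTGRES_WORK_MEM:-4MB}"
POSTGRES_MAINTENANCE_WORK_MEM="${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}"
POSTGRES_EFFECTIVE_CACHE_SIZE="${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}"

# Build the corresponding server flags; postgres accepts any GUC via -c.
PG_FLAGS="-c max_connections=${POSTGRES_MAX_CONNECTIONS} \
-c shared_buffers=${POSTGRES_SHARED_BUFFERS} \
-c work_mem=${POSTGRES_WORK_MEM} \
-c maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM} \
-c effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE}"

echo "$PG_FLAGS"
```

Note that `effective_cache_size` is only a planner hint, while `shared_buffers` actually allocates shared memory, which is why the comment above recommends about 25% of available RAM for the latter.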
diff --git a/docker/nginx/conf.d/default.conf.template b/docker/nginx/conf.d/default.conf.template
index 9f6e99af51..9691122cea 100644
--- a/docker/nginx/conf.d/default.conf.template
+++ b/docker/nginx/conf.d/default.conf.template
@@ -29,6 +29,9 @@ server {
include proxy.conf;
}
+ # placeholder for acme challenge location
+ ${ACME_CHALLENGE_LOCATION}
+
# placeholder for https config defined in https.conf.template
${HTTPS_CONFIG}
}
diff --git a/docker/nginx/docker-entrypoint.sh b/docker/nginx/docker-entrypoint.sh
index df432a0213..d343cb3efa 100755
--- a/docker/nginx/docker-entrypoint.sh
+++ b/docker/nginx/docker-entrypoint.sh
@@ -1,6 +1,19 @@
#!/bin/bash
if [ "${NGINX_HTTPS_ENABLED}" = "true" ]; then
+ # Check if the certificate and key files for the specified domain exist
+ if [ -n "${CERTBOT_DOMAIN}" ] && \
+ [ -f "/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_FILENAME}" ] && \
+ [ -f "/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_KEY_FILENAME}" ]; then
+ SSL_CERTIFICATE_PATH="/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_FILENAME}"
+ SSL_CERTIFICATE_KEY_PATH="/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_KEY_FILENAME}"
+ else
+ SSL_CERTIFICATE_PATH="/etc/ssl/${NGINX_SSL_CERT_FILENAME}"
+ SSL_CERTIFICATE_KEY_PATH="/etc/ssl/${NGINX_SSL_CERT_KEY_FILENAME}"
+ fi
+ export SSL_CERTIFICATE_PATH
+ export SSL_CERTIFICATE_KEY_PATH
+
# set the HTTPS_CONFIG environment variable to the content of the https.conf.template
HTTPS_CONFIG=$(envsubst < /etc/nginx/https.conf.template)
export HTTPS_CONFIG
@@ -8,6 +21,13 @@ if [ "${NGINX_HTTPS_ENABLED}" = "true" ]; then
envsubst '${HTTPS_CONFIG}' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
fi
+if [ "${NGINX_ENABLE_CERTBOT_CHALLENGE}" = "true" ]; then
+ ACME_CHALLENGE_LOCATION='location /.well-known/acme-challenge/ { root /var/www/html; }'
+else
+ ACME_CHALLENGE_LOCATION=''
+fi
+export ACME_CHALLENGE_LOCATION
+
env_vars=$(printenv | cut -d= -f1 | sed 's/^/$/g' | paste -sd, -)
envsubst "$env_vars" < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
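The entrypoint change above prefers a certbot-issued certificate when both files exist under the domain's Let's Encrypt live directory and otherwise falls back to the bundled `/etc/ssl` pair. A standalone sketch of that fallback decision (the temp directory stands in for `/etc/letsencrypt/live` and is an assumption of the demo):

```shell
# Demo values; in the real entrypoint these come from the container env.
CERTBOT_DOMAIN="example.com"
NGINX_SSL_CERT_FILENAME="fullchain.pem"
NGINX_SSL_CERT_KEY_FILENAME="privkey.pem"

# Temp dir standing in for /etc/letsencrypt/live (demo assumption).
LIVE_DIR="$(mktemp -d)/live/${CERTBOT_DOMAIN}"
mkdir -p "${LIVE_DIR}"
touch "${LIVE_DIR}/${NGINX_SSL_CERT_FILENAME}" \
      "${LIVE_DIR}/${NGINX_SSL_CERT_KEY_FILENAME}"

# Same decision as the entrypoint: use the certbot-issued pair only when the
# domain is set and both files are present; otherwise fall back to /etc/ssl.
if [ -n "${CERTBOT_DOMAIN}" ] && \
   [ -f "${LIVE_DIR}/${NGINX_SSL_CERT_FILENAME}" ] && \
   [ -f "${LIVE_DIR}/${NGINX_SSL_CERT_KEY_FILENAME}" ]; then
  SSL_CERTIFICATE_PATH="${LIVE_DIR}/${NGINX_SSL_CERT_FILENAME}"
  SSL_CERTIFICATE_KEY_PATH="${LIVE_DIR}/${NGINX_SSL_CERT_KEY_FILENAME}"
else
  SSL_CERTIFICATE_PATH="/etc/ssl/${NGINX_SSL_CERT_FILENAME}"
  SSL_CERTIFICATE_KEY_PATH="/etc/ssl/${NGINX_SSL_CERT_KEY_FILENAME}"
fi
echo "cert: ${SSL_CERTIFICATE_PATH}"
```

Exporting the resolved paths lets `envsubst` inject them into `https.conf.template`, which is why that template now references `${SSL_CERTIFICATE_PATH}` instead of a hardcoded relative path.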
diff --git a/docker/nginx/https.conf.template b/docker/nginx/https.conf.template
index 12a6f56e3b..95ea36f463 100644
--- a/docker/nginx/https.conf.template
+++ b/docker/nginx/https.conf.template
@@ -1,8 +1,8 @@
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
listen ${NGINX_SSL_PORT} ssl;
-ssl_certificate ./../ssl/${NGINX_SSL_CERT_FILENAME};
-ssl_certificate_key ./../ssl/${NGINX_SSL_CERT_KEY_FILENAME};
+ssl_certificate ${SSL_CERTIFICATE_PATH};
+ssl_certificate_key ${SSL_CERTIFICATE_KEY_PATH};
ssl_protocols ${NGINX_SSL_PROTOCOLS};
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
diff --git a/web/.env.example b/web/.env.example
index 653913033d..439092c20e 100644
--- a/web/.env.example
+++ b/web/.env.example
@@ -13,3 +13,6 @@ NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api
# SENTRY
NEXT_PUBLIC_SENTRY_DSN=
+
+# Disable Next.js Telemetry (https://nextjs.org/telemetry)
+NEXT_TELEMETRY_DISABLED=1
\ No newline at end of file
diff --git a/web/Dockerfile b/web/Dockerfile
index 56957f0927..48bdb2301a 100644
--- a/web/Dockerfile
+++ b/web/Dockerfile
@@ -39,6 +39,7 @@ ENV DEPLOY_ENV=PRODUCTION
ENV CONSOLE_API_URL=http://127.0.0.1:5001
ENV APP_API_URL=http://127.0.0.1:5001
ENV PORT=3000
+ENV NEXT_TELEMETRY_DISABLED=1
# set timezone
ENV TZ=UTC
diff --git a/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/panel.tsx b/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/panel.tsx
index 88c37d0b12..bc724c1449 100644
--- a/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/panel.tsx
+++ b/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/panel.tsx
@@ -117,7 +117,6 @@ const Panel: FC = () => {
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])
- const [isFold, setFold] = useState(false)
const [controlShowPopup, setControlShowPopup] = useState(0)
const showPopup = useCallback(() => {
setControlShowPopup(Date.now())
diff --git a/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-config-modal.tsx b/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-config-modal.tsx
index 2411d2baa4..e7ecd2f4ce 100644
--- a/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-config-modal.tsx
+++ b/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-config-modal.tsx
@@ -14,7 +14,7 @@ import {
import { Lock01 } from '@/app/components/base/icons/src/vender/solid/security'
import Button from '@/app/components/base/button'
import { LinkExternal02 } from '@/app/components/base/icons/src/vender/line/general'
-import ConfirmUi from '@/app/components/base/confirm'
+import Confirm from '@/app/components/base/confirm'
import { addTracingConfig, removeTracingConfig, updateTracingConfig } from '@/service/apps'
import Toast from '@/app/components/base/toast'
@@ -276,9 +276,8 @@ const ProviderConfigModal: FC = ({
)
: (
- <ConfirmUi
+ <Confirm
title={t('app.deleteAppConfirmTitle')}
content={t('app.deleteAppConfirmContent')}
isShow={showConfirmDelete}
- onClose={() => setShowConfirmDelete(false)}
onConfirm={onConfirmDelete}
onCancel={() => setShowConfirmDelete(false)}
/>
diff --git a/web/app/(commonLayout)/datasets/DatasetCard.tsx b/web/app/(commonLayout)/datasets/DatasetCard.tsx
index d4b83f8a1f..096d9d357e 100644
--- a/web/app/(commonLayout)/datasets/DatasetCard.tsx
+++ b/web/app/(commonLayout)/datasets/DatasetCard.tsx
@@ -1,7 +1,7 @@
'use client'
import { useContext } from 'use-context-selector'
-import Link from 'next/link'
+import { useRouter } from 'next/navigation'
import { useCallback, useEffect, useState } from 'react'
import { useTranslation } from 'react-i18next'
import {
@@ -33,6 +33,8 @@ const DatasetCard = ({
}: DatasetCardProps) => {
const { t } = useTranslation()
const { notify } = useContext(ToastContext)
+ const { push } = useRouter()
+
const { isCurrentWorkspaceDatasetOperator } = useAppContext()
const [tags, setTags] = useState(dataset.tags)
@@ -107,10 +109,13 @@ const DatasetCard = ({
return (
<>
- <Link
- href={`/datasets/${dataset.id}/documents`}
+ <div
+ onClick={(e) => {
+ e.preventDefault()
+ push(`/datasets/${dataset.id}/documents`)
+ }}
>