diff --git a/CONTRIBUTING_JA.md b/CONTRIBUTING_JA.md
new file mode 100644
index 0000000000..f83c4b3fc3
--- /dev/null
+++ b/CONTRIBUTING_JA.md
@@ -0,0 +1,55 @@
+# コントリビュート
+
+[Dify](https://dify.ai) に興味を持ち、貢献したいと考えてくださり、ありがとうございます!始める前に、
+[行動規範](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md)を読み、
+[既存の問題](https://github.com/langgenius/langgenius-gateway/issues)をチェックしてください。
+本ドキュメントでは、[Dify](https://dify.ai) をビルドしてテストするための開発環境の構築方法を説明します。
+
+### 依存関係のインストール
+
+[Dify](https://dify.ai) をビルドするには、お使いのマシンに以下の依存関係をインストールし、設定する必要があります:
+
+- [Git](http://git-scm.com/)
+- [Docker](https://www.docker.com/)
+- [Docker Compose](https://docs.docker.com/compose/install/)
+- [Node.js v18.x (LTS)](http://nodejs.org)
+- [npm](https://www.npmjs.com/) バージョン 8.x.x もしくは [Yarn](https://yarnpkg.com/)
+- [Python](https://www.python.org/) バージョン 3.10.x
+
+## ローカル開発
+
+開発環境を構築するには、プロジェクトの git リポジトリをフォークし、適切なパッケージマネージャでバックエンドとフロントエンドの依存関係をインストールし、docker-compose スタックを起動します。
+
+### リポジトリのフォーク
+
+[リポジトリ](https://github.com/langgenius/dify) をフォークする必要があります。
+
+### リポジトリのクローン
+
+GitHub でフォークしたリポジトリをクローンします:
+
+```
+git clone git@github.com:<github_username>/dify.git
+```
+
+### バックエンドのインストール
+
+バックエンドアプリケーションのインストール方法については、[Backend README](api/README.md) を参照してください。
+
+### フロントエンドのインストール
+
+フロントエンドアプリケーションのインストール方法については、[Frontend README](web/README.md) を参照してください。
+
+### ブラウザで Dify にアクセス
+
+これで [http://localhost:3000](http://localhost:3000) にアクセスし、ローカル環境で [Dify](https://dify.ai) を確認できるようになりました。
+
+## プルリクエストの作成
+
+変更後、プルリクエスト (PR) をオープンしてください。プルリクエストを提出すると、Dify チーム/コミュニティの他のメンバーがあなたと一緒にレビューします。
+
+マージコンフリクトなどの問題が発生した場合や、プルリクエストの開き方がわからない場合は、[GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) で解決方法を確認してみてください。あなたの PR がマージされると、[コントリビュータチャート](https://github.com/langgenius/langgenius-gateway/graphs/contributors)にコントリビュータとして誇らしげに掲載されます。
+
+## コミュニティチャンネル
+
+お困りですか?何か質問がありますか? [Discord Community サーバ](https://discord.gg/AhzKf7dNgk)に参加してください。私たちがお手伝いします!
diff --git a/README.md b/README.md
index e80559bcf7..499707d2e7 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,8 @@
 ![](./images/describe-en.png)

   English |
-  简体中文
+  简体中文 |
+  日本語

 [Website](https://dify.ai) • [Docs](https://docs.dify.ai) • [Twitter](https://twitter.com/dify_ai) • [Discord](https://discord.gg/FngNHpbcY7)
diff --git a/README_CN.md b/README_CN.md
index c72c03bd6a..17cd027aee 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -1,7 +1,8 @@
 ![](./images/describe-cn.jpg)

   English |
-  简体中文
+  简体中文 |
+  日本語

diff --git a/README_JA.md b/README_JA.md
new file mode 100644
index 0000000000..6b62747c9c
--- /dev/null
+++ b/README_JA.md
@@ -0,0 +1,116 @@
+![](./images/describe-en.png)
+

+  English |
+  简体中文 |
+  日本語
+

+
+[Web サイト](https://dify.ai) • [ドキュメント](https://docs.dify.ai) • [Twitter](https://twitter.com/dify_ai) • [Discord](https://discord.gg/FngNHpbcY7)
+
+**Dify** は、より多くの人々が持続可能な AI ネイティブアプリケーションを作成できるように設計された、使いやすい LLMOps プラットフォームです。様々なアプリケーションタイプに対応したビジュアルオーケストレーションにより、Dify は Backend-as-a-Service API としても機能する、すぐに使えるアプリケーションを提供します。プラグインやデータセットを統合するための1つの API で開発プロセスを統一し、プロンプトエンジニアリング、ビジュアル分析、継続的な改善のための1つのインターフェイスで業務を合理化します。
+
+Dify で作成できるアプリケーションは以下の通りです:
+
+- フォームモードとチャット会話モードをサポートする、すぐに使える Web サイト
+- プラグイン機能やコンテキスト強化などを網羅する単一の API。バックエンドのコーディングの手間を省けます
+- アプリケーションの視覚的なデータ分析、ログレビュー、アノテーション
+
+Dify は LangChain と互換性があり、複数の LLM を順次サポートしていきます:
+
+- GPT 3 (text-davinci-003)
+- GPT 3.5 Turbo(ChatGPT)
+- GPT-4
+
+## クラウドサービスの利用
+
+[Dify.ai](https://dify.ai) をご覧ください。
+
+## Community Edition のインストール
+
+### システム要件
+
+Dify をインストールする前に、お使いのマシンが以下の最低システム要件を満たしていることを確認してください:
+
+- CPU >= 1 Core
+- RAM >= 4GB
+
+### クイックスタート
+
+Dify サーバーを起動する最も簡単な方法は、[docker-compose.yml](docker/docker-compose.yaml) ファイルを実行することです。インストールコマンドを実行する前に、[Docker](https://docs.docker.com/get-docker/) と [Docker Compose](https://docs.docker.com/compose/install/) がお使いのマシンにインストールされていることを確認してください:
+
+```bash
+cd docker
+docker-compose up -d
+```
+
+実行後、ブラウザで [http://localhost/install](http://localhost/install) にアクセスし、初期化インストール作業を開始できます。
+
+### 構成
+
+カスタマイズが必要な場合は、[docker-compose.yml](docker/docker-compose.yaml) ファイルのコメントを参照し、手動で環境設定を行ってください。変更後、再度 `docker-compose up -d` を実行してください。
+
+## ロードマップ
+
+開発中の機能:
+
+- **データセット**:Notion や Web ページからのコンテンツ同期など、より多くのデータセットをサポートします。テキスト、Web ページ、さらには Notion コンテンツをデータソースとして、ユーザーが自分のデータをもとに AI アプリケーションを構築できるようにする予定です。
+- **プラグイン**:ChatGPT プラグイン標準に準拠したプラグインや Dify 独自のプラグインをアプリケーションに導入し、より多くの機能を実現できるようにします。
+- **オープンソースモデル**:Llama のような優れたオープンソースモデルをモデルプロバイダーとして採用したり、さらなるファインチューニングに使用できるようにしていきます。
+
+## Q&A
+
+**Q: Dify で何ができるのか?**
+
+A: Dify はシンプルでパワフルな LLM 開発・運用ツールです。商用グレードのアプリケーションやパーソナルアシスタントの構築に使用できます。独自のアプリケーションを開発したい場合、Dify は OpenAI と統合する際のバックエンド作業を省き、視覚的な操作機能を提供します。GPT モデルの継続的な改善・訓練も可能です。
+
+**Q: Dify を使って、自分のモデルを「トレーニング」するにはどうすればいいのか?**
+
+A: 価値あるアプリケーションは、プロンプトエンジニアリング、コンテキスト拡張、ファインチューニングから成り立ちます。プロンプトとプログラミング言語を組み合わせたハイブリッドプログラミングアプローチ(テンプレートエンジンのようなもの)により、長文の埋め込みや、ユーザーが入力した YouTube 動画からの字幕の取り込みなどを簡単に実現でき、これらはすべて LLM が処理するコンテキストとして渡されます(後述の最小スケッチも参照)。また、アプリケーションの運用性を重視し、ユーザーがアプリケーションを使用する際に生成したデータを、分析、アノテーション、継続的なトレーニングに利用できるようにしました。適切なツールがなければ、これらのステップには時間がかかります。
+
+**Q: 自分でアプリケーションを作りたい場合、何を準備すればよいのか?**
+
+A: すでに OpenAI API Key をお持ちだと思いますが、お持ちでない場合はご登録ください。すでにトレーニングのコンテキストとなるコンテンツをお持ちなら、なお良いでしょう!
+
+**Q: インターフェイスにはどの言語が使えるのか?**
+
+A: 現在、英語と中国語に対応しています。言語パックの寄贈も歓迎します。
+
+## Star ヒストリー
+
+[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
+
+## お問合せ
+
+ご質問、ご提案、パートナーシップに関するお問い合わせは、以下のチャンネルからお気軽にご連絡ください:
+
+- GitHub Repo で Issue や PR を提出する
+- [Discord](https://discord.gg/FngNHpbcY7) コミュニティで議論に参加する
+- hello@dify.ai にメールを送る
+
+私たちは、皆様のお手伝いをしながら、より楽しく、より便利な AI アプリケーションを一緒に作っていきたいと思っています!
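上の Q&A で触れた「プロンプトテンプレート + コンテキスト」というハイブリッドアプローチのイメージを、最小の Python スケッチで補足します。`render_prompt` などの名前はこの説明のための仮のものであり、Dify 本体の実装や API ではありません:

```python
import re


def render_prompt(template: str, variables: dict[str, str], context: str = "") -> str:
    """{{key}} 形式の変数をテンプレートへ埋め込み、先頭にコンテキストを付加する(説明用の仮実装)。"""
    def replace(match: re.Match) -> str:
        # 未定義の変数はプレースホルダのまま残す
        return variables.get(match.group(1), match.group(0))

    prompt = re.sub(r"\{\{(\w+)\}\}", replace, template)
    # 長文の抜粋や字幕など、外部から取得したコンテキストを前置して LLM に渡す
    return f"{context}\n\n{prompt}" if context else prompt


# 使用例({{book}} / {{myName}} という変数名は本リポジトリのモックデータに合わせた想定)
print(render_prompt(
    template="あなたは解夢アシスタントです。{{book}} を参考に、{{myName}} さんの質問に答えてください。",
    variables={"book": "《夢の解析》", "myName": "Joel"},
    context="(ここに Notion やアップロード文書から取得した抜粋が入る)",
))
```

Dify 自体は、この種のテンプレート展開とコンテキスト注入に加えて、利用ログを分析・アノテーションして継続的な改善に回せる点がポイントです。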
+
+## コントリビュート
+
+適切なレビューを行うため、コミットへ直接アクセスできるコントリビュータを含むすべてのコードコントリビュータは、変更をプルリクエストとして提出し、マージされる前にコア開発チームの承認を受ける必要があります。
+
+私たちはすべてのプルリクエストを歓迎します!協力したい方は、[コントリビューションガイド](CONTRIBUTING.md) をチェックしてみてください。
+
+## セキュリティ
+
+プライバシー保護のため、GitHub へセキュリティ問題を投稿するのは避けてください。代わりに、質問を security@dify.ai に送ってください。より詳しい回答をお返しします。
+
+## 引用
+
+本ソフトウェアは、以下のオープンソースソフトウェアを使用しています:
+
+- Chase, H. (2022). LangChain [Computer software]. https://github.com/hwchase17/langchain
+- Liu, J. (2022). LlamaIndex [Computer software]. doi: 10.5281/zenodo.1234.
+
+詳しくは、各ソフトウェアの公式サイトまたはライセンス文をご参照ください。
+
+## ライセンス
+
+このリポジトリは、[Dify Open Source License](LICENSE) のもとで利用できます。
diff --git a/api/config.py b/api/config.py
index 1e6000c8ae..f81527da61 100644
--- a/api/config.py
+++ b/api/config.py
@@ -47,6 +47,7 @@ DEFAULTS = {
     'PDF_PREVIEW': 'True',
     'LOG_LEVEL': 'INFO',
     'DISABLE_PROVIDER_CONFIG_VALIDATION': 'False',
+    'DEFAULT_LLM_PROVIDER': 'openai'
 }
 
@@ -181,6 +182,10 @@ class Config:
         # You could disable it for compatibility with certain OpenAPI providers
         self.DISABLE_PROVIDER_CONFIG_VALIDATION = get_bool_env('DISABLE_PROVIDER_CONFIG_VALIDATION')
 
+        # For temp use only
+        # set default LLM provider, default is 'openai', support `azure_openai`
+        self.DEFAULT_LLM_PROVIDER = get_env('DEFAULT_LLM_PROVIDER')
+
 
 class CloudEditionConfig(Config):
 
     def __init__(self):
diff --git a/api/controllers/console/workspace/providers.py b/api/controllers/console/workspace/providers.py
index bc6b8320af..dc9e9c45f1 100644
--- a/api/controllers/console/workspace/providers.py
+++ b/api/controllers/console/workspace/providers.py
@@ -82,29 +82,33 @@ class ProviderTokenApi(Resource):
 
         args = parser.parse_args()
 
-        if not args['token']:
-            raise ValueError('Token is empty')
+        if args['token']:
+            try:
+                ProviderService.validate_provider_configs(
+                    tenant=current_user.current_tenant,
+                    provider_name=ProviderName(provider),
+                    configs=args['token']
+                )
+                token_is_valid = True
+            except ValidateFailedError:
+                token_is_valid = False
 
-        try:
-            ProviderService.validate_provider_configs(
+            base64_encrypted_token = ProviderService.get_encrypted_token(
                 tenant=current_user.current_tenant,
                 provider_name=ProviderName(provider),
                 configs=args['token']
             )
-            token_is_valid = True
-        except ValidateFailedError:
+        else:
+            base64_encrypted_token = None
             token_is_valid = False
 
         tenant = current_user.current_tenant
-        base64_encrypted_token = ProviderService.get_encrypted_token(
-            tenant=current_user.current_tenant,
-            provider_name=ProviderName(provider),
-            configs=args['token']
-        )
-
-        provider_model = Provider.query.filter_by(tenant_id=tenant.id, provider_name=provider,
-                                                  provider_type=ProviderType.CUSTOM.value).first()
+        provider_model = db.session.query(Provider).filter(
+            Provider.tenant_id == tenant.id,
+            Provider.provider_name == provider,
+            Provider.provider_type == ProviderType.CUSTOM.value
+        ).first()
 
         # Only allow updating token for CUSTOM provider type
         if provider_model:
@@ -117,6 +121,16 @@
                                       is_valid=token_is_valid)
             db.session.add(provider_model)
 
+        if provider_model.is_valid:
+            other_providers = db.session.query(Provider).filter(
+                Provider.tenant_id == tenant.id,
+                Provider.provider_name != provider,
+                Provider.provider_type == ProviderType.CUSTOM.value
+            ).all()
+
+            for other_provider in other_providers:
+                other_provider.is_valid = False
+
         db.session.commit()
 
         if provider in [ProviderName.ANTHROPIC.value, ProviderName.AZURE_OPENAI.value, ProviderName.COHERE.value,
diff --git a/api/core/embedding/openai_embedding.py
b/api/core/embedding/openai_embedding.py index 0938397423..0f7cb252e2 100644 --- a/api/core/embedding/openai_embedding.py +++ b/api/core/embedding/openai_embedding.py @@ -11,9 +11,10 @@ from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_except @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6)) def get_embedding( - text: str, - engine: Optional[str] = None, - openai_api_key: Optional[str] = None, + text: str, + engine: Optional[str] = None, + api_key: Optional[str] = None, + **kwargs ) -> List[float]: """Get embedding. @@ -25,11 +26,12 @@ def get_embedding( """ text = text.replace("\n", " ") - return openai.Embedding.create(input=[text], engine=engine, api_key=openai_api_key)["data"][0]["embedding"] + return openai.Embedding.create(input=[text], engine=engine, api_key=api_key, **kwargs)["data"][0]["embedding"] @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6)) -async def aget_embedding(text: str, engine: Optional[str] = None, openai_api_key: Optional[str] = None) -> List[float]: +async def aget_embedding(text: str, engine: Optional[str] = None, api_key: Optional[str] = None, **kwargs) -> List[ + float]: """Asynchronously get embedding. NOTE: Copied from OpenAI's embedding utils: @@ -42,16 +44,17 @@ async def aget_embedding(text: str, engine: Optional[str] = None, openai_api_key # replace newlines, which can negatively affect performance. text = text.replace("\n", " ") - return (await openai.Embedding.acreate(input=[text], engine=engine, api_key=openai_api_key))["data"][0][ + return (await openai.Embedding.acreate(input=[text], engine=engine, api_key=api_key, **kwargs))["data"][0][ "embedding" ] @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6)) def get_embeddings( - list_of_text: List[str], - engine: Optional[str] = None, - openai_api_key: Optional[str] = None + list_of_text: List[str], + engine: Optional[str] = None, + api_key: Optional[str] = None, + **kwargs ) -> List[List[float]]: """Get embeddings. @@ -67,14 +70,14 @@ def get_embeddings( # replace newlines, which can negatively affect performance. list_of_text = [text.replace("\n", " ") for text in list_of_text] - data = openai.Embedding.create(input=list_of_text, engine=engine, api_key=openai_api_key).data + data = openai.Embedding.create(input=list_of_text, engine=engine, api_key=api_key, **kwargs).data data = sorted(data, key=lambda x: x["index"]) # maintain the same order as input. return [d["embedding"] for d in data] @retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6)) async def aget_embeddings( - list_of_text: List[str], engine: Optional[str] = None, openai_api_key: Optional[str] = None + list_of_text: List[str], engine: Optional[str] = None, api_key: Optional[str] = None, **kwargs ) -> List[List[float]]: """Asynchronously get embeddings. @@ -90,7 +93,7 @@ async def aget_embeddings( # replace newlines, which can negatively affect performance. list_of_text = [text.replace("\n", " ") for text in list_of_text] - data = (await openai.Embedding.acreate(input=list_of_text, engine=engine, api_key=openai_api_key)).data + data = (await openai.Embedding.acreate(input=list_of_text, engine=engine, api_key=api_key, **kwargs)).data data = sorted(data, key=lambda x: x["index"]) # maintain the same order as input. 
return [d["embedding"] for d in data] @@ -98,19 +101,30 @@ async def aget_embeddings( class OpenAIEmbedding(BaseEmbedding): def __init__( - self, - mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE, - model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002, - deployment_name: Optional[str] = None, - openai_api_key: Optional[str] = None, - **kwargs: Any, + self, + mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE, + model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002, + deployment_name: Optional[str] = None, + openai_api_key: Optional[str] = None, + **kwargs: Any, ) -> None: """Init params.""" - super().__init__(**kwargs) + new_kwargs = {} + + if 'embed_batch_size' in kwargs: + new_kwargs['embed_batch_size'] = kwargs['embed_batch_size'] + + if 'tokenizer' in kwargs: + new_kwargs['tokenizer'] = kwargs['tokenizer'] + + super().__init__(**new_kwargs) self.mode = OpenAIEmbeddingMode(mode) self.model = OpenAIEmbeddingModelType(model) self.deployment_name = deployment_name self.openai_api_key = openai_api_key + self.openai_api_type = kwargs.get('openai_api_type') + self.openai_api_version = kwargs.get('openai_api_version') + self.openai_api_base = kwargs.get('openai_api_base') @handle_llm_exceptions def _get_query_embedding(self, query: str) -> List[float]: @@ -122,7 +136,9 @@ class OpenAIEmbedding(BaseEmbedding): if key not in _QUERY_MODE_MODEL_DICT: raise ValueError(f"Invalid mode, model combination: {key}") engine = _QUERY_MODE_MODEL_DICT[key] - return get_embedding(query, engine=engine, openai_api_key=self.openai_api_key) + return get_embedding(query, engine=engine, api_key=self.openai_api_key, + api_type=self.openai_api_type, api_version=self.openai_api_version, + api_base=self.openai_api_base) def _get_text_embedding(self, text: str) -> List[float]: """Get text embedding.""" @@ -133,7 +149,9 @@ class OpenAIEmbedding(BaseEmbedding): if key not in _TEXT_MODE_MODEL_DICT: raise ValueError(f"Invalid mode, model combination: {key}") engine = _TEXT_MODE_MODEL_DICT[key] - return get_embedding(text, engine=engine, openai_api_key=self.openai_api_key) + return get_embedding(text, engine=engine, api_key=self.openai_api_key, + api_type=self.openai_api_type, api_version=self.openai_api_version, + api_base=self.openai_api_base) async def _aget_text_embedding(self, text: str) -> List[float]: """Asynchronously get text embedding.""" @@ -144,7 +162,9 @@ class OpenAIEmbedding(BaseEmbedding): if key not in _TEXT_MODE_MODEL_DICT: raise ValueError(f"Invalid mode, model combination: {key}") engine = _TEXT_MODE_MODEL_DICT[key] - return await aget_embedding(text, engine=engine, openai_api_key=self.openai_api_key) + return await aget_embedding(text, engine=engine, api_key=self.openai_api_key, + api_type=self.openai_api_type, api_version=self.openai_api_version, + api_base=self.openai_api_base) def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]: """Get text embeddings. 
@@ -160,7 +180,9 @@ class OpenAIEmbedding(BaseEmbedding): if key not in _TEXT_MODE_MODEL_DICT: raise ValueError(f"Invalid mode, model combination: {key}") engine = _TEXT_MODE_MODEL_DICT[key] - embeddings = get_embeddings(texts, engine=engine, openai_api_key=self.openai_api_key) + embeddings = get_embeddings(texts, engine=engine, api_key=self.openai_api_key, + api_type=self.openai_api_type, api_version=self.openai_api_version, + api_base=self.openai_api_base) return embeddings async def _aget_text_embeddings(self, texts: List[str]) -> List[List[float]]: @@ -172,5 +194,7 @@ class OpenAIEmbedding(BaseEmbedding): if key not in _TEXT_MODE_MODEL_DICT: raise ValueError(f"Invalid mode, model combination: {key}") engine = _TEXT_MODE_MODEL_DICT[key] - embeddings = await aget_embeddings(texts, engine=engine, openai_api_key=self.openai_api_key) + embeddings = await aget_embeddings(texts, engine=engine, api_key=self.openai_api_key, + api_type=self.openai_api_type, api_version=self.openai_api_version, + api_base=self.openai_api_base) return embeddings diff --git a/api/core/index/index_builder.py b/api/core/index/index_builder.py index baf16b0f3a..7f0486546e 100644 --- a/api/core/index/index_builder.py +++ b/api/core/index/index_builder.py @@ -33,8 +33,11 @@ class IndexBuilder: max_chunk_overlap=20 ) + provider = LLMBuilder.get_default_provider(tenant_id) + model_credentials = LLMBuilder.get_model_credentials( tenant_id=tenant_id, + model_provider=provider, model_name='text-embedding-ada-002' ) diff --git a/api/core/llm/llm_builder.py b/api/core/llm/llm_builder.py index 4355593c5d..30b0a931b3 100644 --- a/api/core/llm/llm_builder.py +++ b/api/core/llm/llm_builder.py @@ -4,9 +4,14 @@ from langchain.callbacks import CallbackManager from langchain.llms.fake import FakeListLLM from core.constant import llm_constant +from core.llm.error import ProviderTokenNotInitError +from core.llm.provider.base import BaseProvider from core.llm.provider.llm_provider_service import LLMProviderService +from core.llm.streamable_azure_chat_open_ai import StreamableAzureChatOpenAI +from core.llm.streamable_azure_open_ai import StreamableAzureOpenAI from core.llm.streamable_chat_open_ai import StreamableChatOpenAI from core.llm.streamable_open_ai import StreamableOpenAI +from models.provider import ProviderType class LLMBuilder: @@ -31,16 +36,23 @@ class LLMBuilder: if model_name == 'fake': return FakeListLLM(responses=[]) + provider = cls.get_default_provider(tenant_id) + mode = cls.get_mode_by_model(model_name) if mode == 'chat': - # llm_cls = StreamableAzureChatOpenAI - llm_cls = StreamableChatOpenAI + if provider == 'openai': + llm_cls = StreamableChatOpenAI + else: + llm_cls = StreamableAzureChatOpenAI elif mode == 'completion': - llm_cls = StreamableOpenAI + if provider == 'openai': + llm_cls = StreamableOpenAI + else: + llm_cls = StreamableAzureOpenAI else: raise ValueError(f"model name {model_name} is not supported.") - model_credentials = cls.get_model_credentials(tenant_id, model_name) + model_credentials = cls.get_model_credentials(tenant_id, provider, model_name) return llm_cls( model_name=model_name, @@ -86,18 +98,31 @@ class LLMBuilder: raise ValueError(f"model name {model_name} is not supported.") @classmethod - def get_model_credentials(cls, tenant_id: str, model_name: str) -> dict: + def get_model_credentials(cls, tenant_id: str, model_provider: str, model_name: str) -> dict: """ Returns the API credentials for the given tenant_id and model_name, based on the model's provider. 
Raises an exception if the model_name is not found or if the provider is not found. """ if not model_name: raise Exception('model name not found') + # + # if model_name not in llm_constant.models: + # raise Exception('model {} not found'.format(model_name)) - if model_name not in llm_constant.models: - raise Exception('model {} not found'.format(model_name)) - - model_provider = llm_constant.models[model_name] + # model_provider = llm_constant.models[model_name] provider_service = LLMProviderService(tenant_id=tenant_id, provider_name=model_provider) return provider_service.get_credentials(model_name) + + @classmethod + def get_default_provider(cls, tenant_id: str) -> str: + provider = BaseProvider.get_valid_provider(tenant_id) + if not provider: + raise ProviderTokenNotInitError() + + if provider.provider_type == ProviderType.SYSTEM.value: + provider_name = 'openai' + else: + provider_name = provider.provider_name + + return provider_name diff --git a/api/core/llm/provider/azure_provider.py b/api/core/llm/provider/azure_provider.py index e0ba0d0734..d68ed3ccc4 100644 --- a/api/core/llm/provider/azure_provider.py +++ b/api/core/llm/provider/azure_provider.py @@ -36,10 +36,9 @@ class AzureProvider(BaseProvider): """ Returns the API credentials for Azure OpenAI as a dictionary. """ - encrypted_config = self.get_provider_api_key(model_id=model_id) - config = json.loads(encrypted_config) + config = self.get_provider_api_key(model_id=model_id) config['openai_api_type'] = 'azure' - config['deployment_name'] = model_id + config['deployment_name'] = model_id.replace('.', '') return config def get_provider_name(self): @@ -51,12 +50,11 @@ class AzureProvider(BaseProvider): """ try: config = self.get_provider_api_key() - config = json.loads(config) except: config = { 'openai_api_type': 'azure', 'openai_api_version': '2023-03-15-preview', - 'openai_api_base': 'https://foo.microsoft.com/bar', + 'openai_api_base': 'https://.openai.azure.com/', 'openai_api_key': '' } @@ -65,7 +63,7 @@ class AzureProvider(BaseProvider): config = { 'openai_api_type': 'azure', 'openai_api_version': '2023-03-15-preview', - 'openai_api_base': 'https://foo.microsoft.com/bar', + 'openai_api_base': 'https://.openai.azure.com/', 'openai_api_key': '' } diff --git a/api/core/llm/provider/base.py b/api/core/llm/provider/base.py index 89343ff62a..71bb32dca6 100644 --- a/api/core/llm/provider/base.py +++ b/api/core/llm/provider/base.py @@ -14,7 +14,7 @@ class BaseProvider(ABC): def __init__(self, tenant_id: str): self.tenant_id = tenant_id - def get_provider_api_key(self, model_id: Optional[str] = None, prefer_custom: bool = True) -> str: + def get_provider_api_key(self, model_id: Optional[str] = None, prefer_custom: bool = True) -> Union[str | dict]: """ Returns the decrypted API key for the given tenant_id and provider_name. If the provider is of type SYSTEM and the quota is exceeded, raises a QuotaExceededError. @@ -43,23 +43,35 @@ class BaseProvider(ABC): Returns the Provider instance for the given tenant_id and provider_name. If both CUSTOM and System providers exist, the preferred provider will be returned based on the prefer_custom flag. 
""" - providers = db.session.query(Provider).filter( - Provider.tenant_id == self.tenant_id, - Provider.provider_name == self.get_provider_name().value - ).order_by(Provider.provider_type.desc() if prefer_custom else Provider.provider_type).all() + return BaseProvider.get_valid_provider(self.tenant_id, self.get_provider_name().value, prefer_custom) + + @classmethod + def get_valid_provider(cls, tenant_id: str, provider_name: str = None, prefer_custom: bool = False) -> Optional[Provider]: + """ + Returns the Provider instance for the given tenant_id and provider_name. + If both CUSTOM and System providers exist, the preferred provider will be returned based on the prefer_custom flag. + """ + query = db.session.query(Provider).filter( + Provider.tenant_id == tenant_id + ) + + if provider_name: + query = query.filter(Provider.provider_name == provider_name) + + providers = query.order_by(Provider.provider_type.desc() if prefer_custom else Provider.provider_type).all() custom_provider = None system_provider = None for provider in providers: - if provider.provider_type == ProviderType.CUSTOM.value: + if provider.provider_type == ProviderType.CUSTOM.value and provider.is_valid and provider.encrypted_config: custom_provider = provider - elif provider.provider_type == ProviderType.SYSTEM.value: + elif provider.provider_type == ProviderType.SYSTEM.value and provider.is_valid: system_provider = provider - if custom_provider and custom_provider.is_valid and custom_provider.encrypted_config: + if custom_provider: return custom_provider - elif system_provider and system_provider.is_valid: + elif system_provider: return system_provider else: return None @@ -80,7 +92,7 @@ class BaseProvider(ABC): try: config = self.get_provider_api_key() except: - config = 'THIS-IS-A-MOCK-TOKEN' + config = '' if obfuscated: return self.obfuscated_token(config) diff --git a/api/core/llm/streamable_azure_chat_open_ai.py b/api/core/llm/streamable_azure_chat_open_ai.py index 539ce92774..f3d514cf58 100644 --- a/api/core/llm/streamable_azure_chat_open_ai.py +++ b/api/core/llm/streamable_azure_chat_open_ai.py @@ -1,12 +1,50 @@ -import requests from langchain.schema import BaseMessage, ChatResult, LLMResult from langchain.chat_models import AzureChatOpenAI -from typing import Optional, List +from typing import Optional, List, Dict, Any + +from pydantic import root_validator from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async class StreamableAzureChatOpenAI(AzureChatOpenAI): + @root_validator() + def validate_environment(cls, values: Dict) -> Dict: + """Validate that api key and python package exists in environment.""" + try: + import openai + except ImportError: + raise ValueError( + "Could not import openai python package. " + "Please install it with `pip install openai`." + ) + try: + values["client"] = openai.ChatCompletion + except AttributeError: + raise ValueError( + "`openai` has no `ChatCompletion` attribute, this is likely " + "due to an old version of the openai package. Try upgrading it " + "with `pip install --upgrade openai`." 
+ ) + if values["n"] < 1: + raise ValueError("n must be at least 1.") + if values["n"] > 1 and values["streaming"]: + raise ValueError("n must be 1 when streaming.") + return values + + @property + def _default_params(self) -> Dict[str, Any]: + """Get the default parameters for calling OpenAI API.""" + return { + **super()._default_params, + "engine": self.deployment_name, + "api_type": self.openai_api_type, + "api_base": self.openai_api_base, + "api_version": self.openai_api_version, + "api_key": self.openai_api_key, + "organization": self.openai_organization if self.openai_organization else None, + } + def get_messages_tokens(self, messages: List[BaseMessage]) -> int: """Get the number of tokens in a list of messages. diff --git a/api/core/llm/streamable_azure_open_ai.py b/api/core/llm/streamable_azure_open_ai.py new file mode 100644 index 0000000000..e383f8cf23 --- /dev/null +++ b/api/core/llm/streamable_azure_open_ai.py @@ -0,0 +1,64 @@ +import os + +from langchain.llms import AzureOpenAI +from langchain.schema import LLMResult +from typing import Optional, List, Dict, Mapping, Any + +from pydantic import root_validator + +from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async + + +class StreamableAzureOpenAI(AzureOpenAI): + openai_api_type: str = "azure" + openai_api_version: str = "" + + @root_validator() + def validate_environment(cls, values: Dict) -> Dict: + """Validate that api key and python package exists in environment.""" + try: + import openai + + values["client"] = openai.Completion + except ImportError: + raise ValueError( + "Could not import openai python package. " + "Please install it with `pip install openai`." + ) + if values["streaming"] and values["n"] > 1: + raise ValueError("Cannot stream results when n > 1.") + if values["streaming"] and values["best_of"] > 1: + raise ValueError("Cannot stream results when best_of > 1.") + return values + + @property + def _invocation_params(self) -> Dict[str, Any]: + return {**super()._invocation_params, **{ + "api_type": self.openai_api_type, + "api_base": self.openai_api_base, + "api_version": self.openai_api_version, + "api_key": self.openai_api_key, + "organization": self.openai_organization if self.openai_organization else None, + }} + + @property + def _identifying_params(self) -> Mapping[str, Any]: + return {**super()._identifying_params, **{ + "api_type": self.openai_api_type, + "api_base": self.openai_api_base, + "api_version": self.openai_api_version, + "api_key": self.openai_api_key, + "organization": self.openai_organization if self.openai_organization else None, + }} + + @handle_llm_exceptions + def generate( + self, prompts: List[str], stop: Optional[List[str]] = None + ) -> LLMResult: + return super().generate(prompts, stop) + + @handle_llm_exceptions_async + async def agenerate( + self, prompts: List[str], stop: Optional[List[str]] = None + ) -> LLMResult: + return await super().agenerate(prompts, stop) diff --git a/api/core/llm/streamable_chat_open_ai.py b/api/core/llm/streamable_chat_open_ai.py index 59391e4ce0..582041ba09 100644 --- a/api/core/llm/streamable_chat_open_ai.py +++ b/api/core/llm/streamable_chat_open_ai.py @@ -1,12 +1,52 @@ +import os + from langchain.schema import BaseMessage, ChatResult, LLMResult from langchain.chat_models import ChatOpenAI -from typing import Optional, List +from typing import Optional, List, Dict, Any + +from pydantic import root_validator from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async class 
StreamableChatOpenAI(ChatOpenAI): + @root_validator() + def validate_environment(cls, values: Dict) -> Dict: + """Validate that api key and python package exists in environment.""" + try: + import openai + except ImportError: + raise ValueError( + "Could not import openai python package. " + "Please install it with `pip install openai`." + ) + try: + values["client"] = openai.ChatCompletion + except AttributeError: + raise ValueError( + "`openai` has no `ChatCompletion` attribute, this is likely " + "due to an old version of the openai package. Try upgrading it " + "with `pip install --upgrade openai`." + ) + if values["n"] < 1: + raise ValueError("n must be at least 1.") + if values["n"] > 1 and values["streaming"]: + raise ValueError("n must be 1 when streaming.") + return values + + @property + def _default_params(self) -> Dict[str, Any]: + """Get the default parameters for calling OpenAI API.""" + return { + **super()._default_params, + "api_type": 'openai', + "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"), + "api_version": None, + "api_key": self.openai_api_key, + "organization": self.openai_organization if self.openai_organization else None, + } + def get_messages_tokens(self, messages: List[BaseMessage]) -> int: """Get the number of tokens in a list of messages. diff --git a/api/core/llm/streamable_open_ai.py b/api/core/llm/streamable_open_ai.py index 94754af30e..9cf1b4c4bb 100644 --- a/api/core/llm/streamable_open_ai.py +++ b/api/core/llm/streamable_open_ai.py @@ -1,12 +1,54 @@ +import os + from langchain.schema import LLMResult -from typing import Optional, List +from typing import Optional, List, Dict, Any, Mapping from langchain import OpenAI +from pydantic import root_validator from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async class StreamableOpenAI(OpenAI): + @root_validator() + def validate_environment(cls, values: Dict) -> Dict: + """Validate that api key and python package exists in environment.""" + try: + import openai + + values["client"] = openai.Completion + except ImportError: + raise ValueError( + "Could not import openai python package. " + "Please install it with `pip install openai`." 
+ ) + if values["streaming"] and values["n"] > 1: + raise ValueError("Cannot stream results when n > 1.") + if values["streaming"] and values["best_of"] > 1: + raise ValueError("Cannot stream results when best_of > 1.") + return values + + @property + def _invocation_params(self) -> Dict[str, Any]: + return {**super()._invocation_params, **{ + "api_type": 'openai', + "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"), + "api_version": None, + "api_key": self.openai_api_key, + "organization": self.openai_organization if self.openai_organization else None, + }} + + @property + def _identifying_params(self) -> Mapping[str, Any]: + return {**super()._identifying_params, **{ + "api_type": 'openai', + "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"), + "api_version": None, + "api_key": self.openai_api_key, + "organization": self.openai_organization if self.openai_organization else None, + }} + + @handle_llm_exceptions def generate( self, prompts: List[str], stop: Optional[List[str]] = None diff --git a/mock-server/.gitignore b/mock-server/.gitignore deleted file mode 100644 index 02651453d8..0000000000 --- a/mock-server/.gitignore +++ /dev/null @@ -1,117 +0,0 @@ -# Logs -logs -*.log -npm-debug.log* -yarn-debug.log* -yarn-error.log* -lerna-debug.log* - -# Diagnostic reports (https://nodejs.org/api/report.html) -report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json - -# Runtime data -pids -*.pid -*.seed -*.pid.lock - -# Directory for instrumented libs generated by jscoverage/JSCover -lib-cov - -# Coverage directory used by tools like istanbul -coverage -*.lcov - -# nyc test coverage -.nyc_output - -# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files) -.grunt - -# Bower dependency directory (https://bower.io/) -bower_components - -# node-waf configuration -.lock-wscript - -# Compiled binary addons (https://nodejs.org/api/addons.html) -build/Release - -# Dependency directories -node_modules/ -jspm_packages/ - -# TypeScript v1 declaration files -typings/ - -# TypeScript cache -*.tsbuildinfo - -# Optional npm cache directory -.npm - -# Optional eslint cache -.eslintcache - -# Microbundle cache -.rpt2_cache/ -.rts2_cache_cjs/ -.rts2_cache_es/ -.rts2_cache_umd/ - -# Optional REPL history -.node_repl_history - -# Output of 'npm pack' -*.tgz - -# Yarn Integrity file -.yarn-integrity - -# dotenv environment variables file -.env -.env.test - -# parcel-bundler cache (https://parceljs.org/) -.cache - -# Next.js build output -.next - -# Nuxt.js build / generate output -.nuxt -dist - -# Gatsby files -.cache/ -# Comment in the public line in if your project uses Gatsby and *not* Next.js -# https://nextjs.org/blog/next-9-1#public-directory-support -# public - -# vuepress build output -.vuepress/dist - -# Serverless directories -.serverless/ - -# FuseBox cache -.fusebox/ - -# DynamoDB Local files -.dynamodb/ - -# TernJS port file -.tern-port - -# npm -package-lock.json - -# yarn -.pnp.cjs -.pnp.loader.mjs -.yarn/ -yarn.lock -.yarnrc.yml - -# pmpm -pnpm-lock.yaml \ No newline at end of file diff --git a/mock-server/README.md b/mock-server/README.md deleted file mode 100644 index 7b0a621e84..0000000000 --- a/mock-server/README.md +++ /dev/null @@ -1 +0,0 @@ -# Mock Server diff --git a/mock-server/api/apps.js b/mock-server/api/apps.js deleted file mode 100644 index d704387376..0000000000 --- a/mock-server/api/apps.js +++ /dev/null @@ -1,551 +0,0 @@ -const chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-_' - -function 
randomString (length) { - let result = '' - for (let i = length; i > 0; --i) result += chars[Math.floor(Math.random() * chars.length)] - return result -} - -// https://www.notion.so/55773516a0194781ae211792a44a3663?pvs=4 -const VirtualData = new Array(10).fill().map((_, index) => { - const date = new Date(Date.now() - index * 24 * 60 * 60 * 1000) - return { - date: `${date.getFullYear()}-${date.getMonth()}-${date.getDate()}`, - conversation_count: Math.floor(Math.random() * 10) + index, - terminal_count: Math.floor(Math.random() * 10) + index, - token_count: Math.floor(Math.random() * 10) + index, - total_price: Math.floor(Math.random() * 10) + index, - } -}) - -const registerAPI = function (app) { - const apps = [{ - id: '1', - name: 'chat app', - mode: 'chat', - description: 'description01', - enable_site: true, - enable_api: true, - api_rpm: 60, - api_rph: 3600, - is_demo: false, - model_config: { - provider: 'OPENAI', - model_id: 'gpt-3.5-turbo', - configs: { - prompt_template: '你是我的解梦小助手,请参考 {{book}} 回答我有关梦境的问题。在回答前请称呼我为 {{myName}}。', - prompt_variables: [ - { - key: 'book', - name: '书', - value: '《梦境解析》', - type: 'string', - description: '请具体说下书名' - }, - { - key: 'myName', - name: 'your name', - value: 'Book', - type: 'string', - description: 'please tell me your name' - } - ], - completion_params: { - max_token: 16, - temperature: 1, // 0-2 - top_p: 1, - presence_penalty: 1, // -2-2 - frequency_penalty: 1, // -2-2 - } - } - }, - site: { - access_token: '1000', - title: 'site 01', - author: 'John', - default_language: 'zh-Hans-CN', - customize_domain: 'http://customize_domain', - theme: 'theme', - customize_token_strategy: 'must', - prompt_public: true - } - }, - { - id: '2', - name: 'completion app', - mode: 'completion', // genertation text - description: 'description 02', // genertation text - enable_site: false, - enable_api: false, - api_rpm: 60, - api_rph: 3600, - is_demo: false, - model_config: { - provider: 'OPENAI', - model_id: 'text-davinci-003', - configs: { - prompt_template: '你是我的翻译小助手,请把以下内容 {{langA}} 翻译成 {{langB}},以下的内容:', - prompt_variables: [ - { - key: 'langA', - name: '原始语音', - value: '中文', - type: 'string', - description: '这是中文格式的原始语音' - }, - { - key: 'langB', - name: '目标语言', - value: '英语', - type: 'string', - description: '这是英语格式的目标语言' - } - ], - completion_params: { - max_token: 16, - temperature: 1, // 0-2 - top_p: 1, - presence_penalty: 1, // -2-2 - frequency_penalty: 1, // -2-2 - } - } - }, - site: { - access_token: '2000', - title: 'site 02', - author: 'Mark', - default_language: 'en-US', - customize_domain: 'http://customize_domain', - theme: 'theme', - customize_token_strategy: 'must', - prompt_public: false - } - }, - ] - - const apikeys = [{ - id: '111121312313132', - token: 'sk-DEFGHJKMNPQRSTWXYZabcdefhijk1234', - last_used_at: '1679212138000', - created_at: '1673316000000' - }, { - id: '43441242131223123', - token: 'sk-EEFGHJKMNPQRSTWXYZabcdefhijk5678', - last_used_at: '1679212721000', - created_at: '1679212731000' - }] - - // create app - app.post('/apps', async (req, res) => { - apps.push({ - id: apps.length + 1 + '', - ...req.body, - - }) - res.send({ - result: 'success' - }) - }) - - // app list - app.get('/apps', async (req, res) => { - res.send({ - data: apps - }) - }) - - // app detail - app.get('/apps/:id', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) || apps[0] - res.send(item) - }) - - // update app name - app.post('/apps/:id/name', async (req, res) => { - const item = apps.find(item => item.id === 
req.params.id) - item.name = req.body.name - res.send(item || null) - }) - - // update app site-enable status - app.post('/apps/:id/site-enable', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - console.log(item) - item.enable_site = req.body.enable_site - res.send(item || null) - }) - - // update app api-enable status - app.post('/apps/:id/api-enable', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - console.log(item) - item.enable_api = req.body.enable_api - res.send(item || null) - }) - - // update app rate-limit - app.post('/apps/:id/rate-limit', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - console.log(item) - item.api_rpm = req.body.api_rpm - item.api_rph = req.body.api_rph - res.send(item || null) - }) - - // update app url including code - app.post('/apps/:id/site/access-token-reset', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - console.log(item) - item.site.access_token = randomString(12) - res.send(item || null) - }) - - // update app config - app.post('/apps/:id/site', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - console.log(item) - item.name = req.body.title - item.description = req.body.description - item.prompt_public = req.body.prompt_public - item.default_language = req.body.default_language - res.send(item || null) - }) - - // get statistics daily-conversations - app.get('/apps/:id/statistics/daily-conversations', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - if (item) { - res.send({ - data: VirtualData - }) - } else { - res.send({ - data: [] - }) - } - }) - - // get statistics daily-end-users - app.get('/apps/:id/statistics/daily-end-users', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - if (item) { - res.send({ - data: VirtualData - }) - } else { - res.send({ - data: [] - }) - } - }) - - // get statistics token-costs - app.get('/apps/:id/statistics/token-costs', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - if (item) { - res.send({ - data: VirtualData - }) - } else { - res.send({ - data: [] - }) - } - }) - - // update app model config - app.post('/apps/:id/model-config', async (req, res) => { - const item = apps.find(item => item.id === req.params.id) - console.log(item) - item.model_config = req.body - res.send(item || null) - }) - - - // get api keys list - app.get('/apps/:id/api-keys', async (req, res) => { - res.send({ - data: apikeys - }) - }) - - // del api key - app.delete('/apps/:id/api-keys/:api_key_id', async (req, res) => { - res.send({ - result: 'success' - }) - }) - - // create api key - app.post('/apps/:id/api-keys', async (req, res) => { - res.send({ - id: 'e2424241313131', - token: 'sk-GEFGHJKMNPQRSTWXYZabcdefhijk0124', - created_at: '1679216688962' - }) - }) - - // get completion-conversations - app.get('/apps/:id/completion-conversations', async (req, res) => { - const data = { - data: [{ - id: 1, - from_end_user_id: 'user 1', - summary: 'summary1', - created_at: '2023-10-11', - annotated: true, - message_count: 100, - user_feedback_stats: { - like: 4, dislike: 5 - }, - admin_feedback_stats: { - like: 1, dislike: 2 - }, - message: { - message: 'message1', - query: 'question1', - answer: 'answer1' - } - }, { - id: 12, - from_end_user_id: 'user 2', - summary: 'summary2', - created_at: '2023-10-01', - annotated: false, - message_count: 10, - user_feedback_stats: 
{ - like: 2, dislike: 20 - }, - admin_feedback_stats: { - like: 12, dislike: 21 - }, - message: { - message: 'message2', - query: 'question2', - answer: 'answer2' - } - }, { - id: 13, - from_end_user_id: 'user 3', - summary: 'summary3', - created_at: '2023-10-11', - annotated: false, - message_count: 20, - user_feedback_stats: { - like: 2, dislike: 0 - }, - admin_feedback_stats: { - like: 0, dislike: 21 - }, - message: { - message: 'message3', - query: 'question3', - answer: 'answer3' - } - }], - total: 200 - } - res.send(data) - }) - - // get chat-conversations - app.get('/apps/:id/chat-conversations', async (req, res) => { - const data = { - data: [{ - id: 1, - from_end_user_id: 'user 1', - summary: 'summary1', - created_at: '2023-10-11', - read_at: '2023-10-12', - annotated: true, - message_count: 100, - user_feedback_stats: { - like: 4, dislike: 5 - }, - admin_feedback_stats: { - like: 1, dislike: 2 - }, - message: { - message: 'message1', - query: 'question1', - answer: 'answer1' - } - }, { - id: 12, - from_end_user_id: 'user 2', - summary: 'summary2', - created_at: '2023-10-01', - annotated: false, - message_count: 10, - user_feedback_stats: { - like: 2, dislike: 20 - }, - admin_feedback_stats: { - like: 12, dislike: 21 - }, - message: { - message: 'message2', - query: 'question2', - answer: 'answer2' - } - }, { - id: 13, - from_end_user_id: 'user 3', - summary: 'summary3', - created_at: '2023-10-11', - annotated: false, - message_count: 20, - user_feedback_stats: { - like: 2, dislike: 0 - }, - admin_feedback_stats: { - like: 0, dislike: 21 - }, - message: { - message: 'message3', - query: 'question3', - answer: 'answer3' - } - }], - total: 200 - } - res.send(data) - }) - - // get completion-conversation detail - app.get('/apps/:id/completion-conversations/:cid', async (req, res) => { - const data = - { - id: 1, - from_end_user_id: 'user 1', - summary: 'summary1', - created_at: '2023-10-11', - annotated: true, - message: { - message: 'question1', - // query: 'question1', - answer: 'answer1', - annotation: { - content: '这是一段纠正的内容' - } - }, - model_config: { - provider: 'openai', - model_id: 'model_id', - configs: { - prompt_template: '你是我的翻译小助手,请把以下内容 {{langA}} 翻译成 {{langB}},以下的内容:{{content}}' - } - } - } - res.send(data) - }) - - // get chat-conversation detail - app.get('/apps/:id/chat-conversations/:cid', async (req, res) => { - const data = - { - id: 1, - from_end_user_id: 'user 1', - summary: 'summary1', - created_at: '2023-10-11', - annotated: true, - message: { - message: 'question1', - // query: 'question1', - answer: 'answer1', - created_at: '2023-08-09 13:00', - provider_response_latency: 130, - message_tokens: 230 - }, - model_config: { - provider: 'openai', - model_id: 'model_id', - configs: { - prompt_template: '你是我的翻译小助手,请把以下内容 {{langA}} 翻译成 {{langB}},以下的内容:{{content}}' - } - } - } - res.send(data) - }) - - // get chat-conversation message list - app.get('/apps/:id/chat-messages', async (req, res) => { - const data = { - data: [{ - id: 1, - created_at: '2023-10-11 07:09', - message: '请说说人为什么会做梦?' 
+ req.query.conversation_id, - answer: '梦境通常是个人内心深处的反映,很难确定每个人梦境的确切含义,因为它们可能会受到梦境者的文化背景、生活经验和情感状态等多种因素的影响。', - provider_response_latency: 450, - answer_tokens: 200, - annotation: { - content: 'string', - account: { - id: 'string', - name: 'string', - email: 'string' - } - }, - feedbacks: { - rating: 'like', - content: 'string', - from_source: 'log' - } - }, { - id: 2, - created_at: '2023-10-11 8:23', - message: '夜里经常做梦会影响次日的精神状态吗?', - answer: '总之,这个梦境可能与梦境者的个人经历和情感状态有关,但在一般情况下,它可能表示一种强烈的情感反应,包括愤怒、不满和对于正义和自由的渴望。', - provider_response_latency: 400, - answer_tokens: 250, - annotation: { - content: 'string', - account: { - id: 'string', - name: 'string', - email: 'string' - } - }, - // feedbacks: { - // rating: 'like', - // content: 'string', - // from_source: 'log' - // } - }, { - id: 3, - created_at: '2023-10-11 10:20', - message: '梦见在山上手撕鬼子,大师解解梦', - answer: '但是,一般来说,“手撕鬼子”这个场景可能是梦境者对于过去历史上的战争、侵略以及对于自己国家和族群的保护与维护的情感反应。在梦中,你可能会感到自己充满力量和勇气,去对抗那些看似强大的侵略者。', - provider_response_latency: 288, - answer_tokens: 100, - annotation: { - content: 'string', - account: { - id: 'string', - name: 'string', - email: 'string' - } - }, - feedbacks: { - rating: 'dislike', - content: 'string', - from_source: 'log' - } - }], - limit: 20, - has_more: true - } - res.send(data) - }) - - app.post('/apps/:id/annotations', async (req, res) => { - res.send({ result: 'success' }) - }) - - app.post('/apps/:id/feedbacks', async (req, res) => { - res.send({ result: 'success' }) - }) - -} - -module.exports = registerAPI \ No newline at end of file diff --git a/mock-server/api/common.js b/mock-server/api/common.js deleted file mode 100644 index 3e43ad524a..0000000000 --- a/mock-server/api/common.js +++ /dev/null @@ -1,38 +0,0 @@ - -const registerAPI = function (app) { - app.post('/login', async (req, res) => { - res.send({ - result: 'success' - }) - }) - - // get user info - app.get('/account/profile', async (req, res) => { - res.send({ - id: '11122222', - name: 'Joel', - email: 'iamjoel007@gmail.com' - }) - }) - - // logout - app.get('/logout', async (req, res) => { - res.send({ - result: 'success' - }) - }) - - // Langgenius version - app.get('/version', async (req, res) => { - res.send({ - current_version: 'v1.0.0', - latest_version: 'v1.0.0', - upgradeable: true, - compatible_upgrade: true - }) - }) - -} - -module.exports = registerAPI - diff --git a/mock-server/api/datasets.js b/mock-server/api/datasets.js deleted file mode 100644 index 0821b3786b..0000000000 --- a/mock-server/api/datasets.js +++ /dev/null @@ -1,249 +0,0 @@ -const registerAPI = function (app) { - app.get("/datasets/:id/documents", async (req, res) => { - if (req.params.id === "0") res.send({ data: [] }); - else { - res.send({ - data: [ - { - id: 1, - name: "Steve Jobs' life", - words: "70k", - word_count: 100, - updated_at: 1681801029, - indexing_status: "completed", - archived: true, - enabled: false, - data_source_info: { - upload_file: { - // id: string - // name: string - // size: number - // mime_type: string - // created_at: number - // created_by: string - extension: "pdf", - }, - }, - }, - { - id: 2, - name: "Steve Jobs' life", - word_count: "10k", - hit_count: 10, - updated_at: 1681801029, - indexing_status: "waiting", - archived: true, - enabled: false, - data_source_info: { - upload_file: { - extension: "json", - }, - }, - }, - { - id: 3, - name: "Steve Jobs' life xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", - word_count: "100k", - hit_count: 0, - updated_at: 1681801029, - indexing_status: "indexing", - archived: false, - enabled: true, - 
data_source_info: { - upload_file: { - extension: "txt", - }, - }, - }, - { - id: 4, - name: "Steve Jobs' life xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", - word_count: "100k", - hit_count: 0, - updated_at: 1681801029, - indexing_status: "splitting", - archived: false, - enabled: true, - data_source_info: { - upload_file: { - extension: "md", - }, - }, - }, - { - id: 5, - name: "Steve Jobs' life", - word_count: "100k", - hit_count: 0, - updated_at: 1681801029, - indexing_status: "error", - archived: false, - enabled: false, - data_source_info: { - upload_file: { - extension: "html", - }, - }, - }, - ], - total: 100, - id: req.params.id, - }); - } - }); - - app.get("/datasets/:id/documents/:did/segments", async (req, res) => { - if (req.params.id === "0") res.send({ data: [] }); - else { - res.send({ - data: new Array(100).fill({ - id: 1234, - content: `他的坚持让我很为难。众所周知他非常注意保护自己的隐私,而我想他应该从来没有看过我写的书。也许将来的某个时候吧,我还是这么说。但是,到了2009年,他的妻子劳伦·鲍威尔(Laurene Powell)直言不讳地对我说:“如果你真的打算写一本关于史蒂夫的书,最好现在就开始。”他当时刚刚第二次因病休假。我向劳伦坦承,当乔布斯第一次提出这个想法时,我并不知道他病了。几乎没有人知道,她说。他是在接受癌症手术之前给我打的电话,直到今天他还将此事作为一个秘密,她这么解释道。\n - 他的坚持让我很为难。众所周知他非常注意保护自己的隐私,而我想他应该从来没有看过我写的书。也许将来的某个时候吧,我还是这么说。但是,到了2009年,他的妻子劳伦·鲍威尔(Laurene Powell)直言不讳地对我说:“如果你真的打算写一本关于史蒂夫的书,最好现在就开始。”他当时刚刚第二次因病休假。我向劳伦坦承,当乔布斯第一次提出这个想法时,我并不知道他病了。几乎没有人知道,她说。他是在接受癌症手术之前给我打的电话,直到今天他还将此事作为一个秘密,她这么解释道。`, - enabled: true, - keyWords: [ - "劳伦·鲍威尔", - "劳伦·鲍威尔", - "手术", - "秘密", - "癌症", - "乔布斯", - "史蒂夫", - "书", - "休假", - "坚持", - "隐私", - ], - word_count: 120, - hit_count: 100, - status: "ok", - index_node_hash: "index_node_hash value", - }), - limit: 100, - has_more: true, - }); - } - }); - - // get doc detail - app.get("/datasets/:id/documents/:did", async (req, res) => { - const fixedParams = { - // originInfo: { - originalFilename: "Original filename", - originalFileSize: "16mb", - uploadDate: "2023-01-01", - lastUpdateDate: "2023-01-05", - source: "Source", - // }, - // technicalParameters: { - segmentSpecification: "909090", - segmentLength: 100, - avgParagraphLength: 130, - }; - const bookData = { - doc_type: "book", - doc_metadata: { - title: "机器学习实战", - language: "zh", - author: "Peter Harrington", - publisher: "人民邮电出版社", - publicationDate: "2013-01-01", - ISBN: "9787115335500", - category: "技术", - }, - }; - const webData = { - doc_type: "webPage", - doc_metadata: { - title: "深度学习入门教程", - url: "https://www.example.com/deep-learning-tutorial", - language: "zh", - publishDate: "2020-05-01", - authorPublisher: "张三", - topicsKeywords: "深度学习, 人工智能, 教程", - description: - "这是一篇详细的深度学习入门教程,适用于对人工智能和深度学习感兴趣的初学者。", - }, - }; - const postData = { - doc_type: "socialMediaPost", - doc_metadata: { - platform: "Twitter", - authorUsername: "example_user", - publishDate: "2021-08-15", - postURL: "https://twitter.com/example_user/status/1234567890", - topicsTags: - "AI, DeepLearning, Tutorial, Example, Example2, Example3, AI, DeepLearning, Tutorial, Example, Example2, Example3, AI, DeepLearning, Tutorial, Example, Example2, Example3,", - }, - }; - res.send({ - id: "550e8400-e29b-41d4-a716-446655440000", - position: 1, - dataset_id: "550e8400-e29b-41d4-a716-446655440002", - data_source_type: "upload_file", - data_source_info: { - upload_file: { - extension: "html", - id: "550e8400-e29b-41d4-a716-446655440003", - }, - }, - dataset_process_rule_id: "550e8400-e29b-41d4-a716-446655440004", - batch: "20230410123456123456", - name: "example_document", - created_from: "web", - created_by: "550e8400-e29b-41d4-a716-446655440005", - created_api_request_id: "550e8400-e29b-41d4-a716-446655440006", - created_at: 
1671269696, - processing_started_at: 1671269700, - word_count: 11, - parsing_completed_at: 1671269710, - cleaning_completed_at: 1671269720, - splitting_completed_at: 1671269730, - tokens: 10, - indexing_latency: 5.0, - completed_at: 1671269740, - paused_by: null, - paused_at: null, - error: null, - stopped_at: null, - indexing_status: "completed", - enabled: true, - disabled_at: null, - disabled_by: null, - archived: false, - archived_reason: null, - archived_by: null, - archived_at: null, - updated_at: 1671269740, - ...(req.params.did === "book" - ? bookData - : req.params.did === "web" - ? webData - : req.params.did === "post" - ? postData - : {}), - segment_count: 10, - hit_count: 9, - status: "ok", - }); - }); - - // // logout - // app.get("/logout", async (req, res) => { - // res.send({ - // result: "success", - // }); - // }); - - // // Langgenius version - // app.get("/version", async (req, res) => { - // res.send({ - // current_version: "v1.0.0", - // latest_version: "v1.0.0", - // upgradeable: true, - // compatible_upgrade: true, - // }); - // }); -}; - -module.exports = registerAPI; diff --git a/mock-server/api/debug.js b/mock-server/api/debug.js deleted file mode 100644 index 2e6f3ca0a7..0000000000 --- a/mock-server/api/debug.js +++ /dev/null @@ -1,119 +0,0 @@ -const registerAPI = function (app) { - const coversationList = [ - { - id: '1', - name: '梦的解析', - inputs: { - book: '《梦的解析》', - callMe: '大师', - }, - chats: [] - }, - { - id: '2', - name: '生命的起源', - inputs: { - book: '《x x x》', - } - }, - ] - // site info - app.get('/apps/site/info', async (req, res) => { - // const id = req.params.id - res.send({ - enable_site: true, - appId: '1', - site: { - title: 'Story Bot', - description: '这是一款解梦聊天机器人,你可以选择你喜欢的解梦人进行解梦,这句话是客户端应用说明', - }, - prompt_public: true, //id === '1', - prompt_template: '你是我的解梦小助手,请参考 {{book}} 回答我有关梦境的问题。在回答前请称呼我为 {{myName}}。', - }) - }) - - app.post('/apps/:id/chat-messages', async (req, res) => { - const conversationId = req.body.conversation_id ? 
req.body.conversation_id : Date.now() + '' - res.send({ - id: Date.now() + '', - conversation_id: Date.now() + '', - answer: 'balabababab' - }) - }) - - app.post('/apps/:id/completion-messages', async (req, res) => { - res.send({ - id: Date.now() + '', - answer: `做为一个AI助手,我可以为你提供随机生成的段落,这些段落可以用于测试、占位符、或者其他目的。以下是一个随机生成的段落: - - “随着科技的不断发展,越来越多的人开始意识到人工智能的重要性。人工智能已经成为我们生活中不可或缺的一部分,它可以帮助我们完成很多繁琐的工作,也可以为我们提供更智能、更便捷的服务。虽然人工智能带来了很多好处,但它也面临着很多挑战。例如,人工智能的算法可能会出现偏见,导致对某些人群不公平。此外,人工智能的发展也可能会导致一些工作的失业。因此,我们需要不断地研究人工智能的发展,以确保它能够为人类带来更多的好处。”` - }) - }) - - // share api - // chat list - app.get('/apps/:id/coversations', async (req, res) => { - res.send({ - data: coversationList - }) - }) - - - - app.get('/apps/:id/variables', async (req, res) => { - res.send({ - variables: [ - { - key: 'book', - name: '书', - value: '《梦境解析》', - type: 'string' - }, - { - key: 'myName', - name: '称呼', - value: '', - type: 'string' - } - ], - }) - }) - -} - -module.exports = registerAPI - -// const chatList = [ -// { -// id: 1, -// content: 'AI 开场白', -// isAnswer: true, -// }, -// { -// id: 2, -// content: '梦见在山上手撕鬼子,大师解解梦', -// more: { time: '5.6 秒' }, -// }, -// { -// id: 3, -// content: '梦境通常是个人内心深处的反映,很难确定每个人梦境的确切含义,因为它们可能会受到梦境者的文化背景、生活经验和情感状态等多种因素的影响。', -// isAnswer: true, -// more: { time: '99 秒' }, - -// }, -// { -// id: 4, -// content: '梦见在山上手撕鬼子,大师解解梦', -// more: { time: '5.6 秒' }, -// }, -// { -// id: 5, -// content: '梦见在山上手撕鬼子,大师解解梦', -// more: { time: '5.6 秒' }, -// }, -// { -// id: 6, -// content: '梦见在山上手撕鬼子,大师解解梦', -// more: { time: '5.6 秒' }, -// }, -// ] \ No newline at end of file diff --git a/mock-server/api/demo.js b/mock-server/api/demo.js deleted file mode 100644 index 8f8a35079b..0000000000 --- a/mock-server/api/demo.js +++ /dev/null @@ -1,15 +0,0 @@ -const registerAPI = function (app) { - app.get('/demo', async (req, res) => { - res.send({ - des: 'get res' - }) - }) - - app.post('/demo', async (req, res) => { - res.send({ - des: 'post res' - }) - }) -} - -module.exports = registerAPI \ No newline at end of file diff --git a/mock-server/app.js b/mock-server/app.js deleted file mode 100644 index 96eec0ab2a..0000000000 --- a/mock-server/app.js +++ /dev/null @@ -1,42 +0,0 @@ -const express = require('express') -const app = express() -const bodyParser = require('body-parser') -var cors = require('cors') - -const commonAPI = require('./api/common') -const demoAPI = require('./api/demo') -const appsApi = require('./api/apps') -const debugAPI = require('./api/debug') -const datasetsAPI = require('./api/datasets') - -const port = 3001 - -app.use(bodyParser.json()) // for parsing application/json -app.use(bodyParser.urlencoded({ extended: true })) // for parsing application/x-www-form-urlencoded - -const corsOptions = { - origin: true, - credentials: true, -} -app.use(cors(corsOptions)) // for cross origin -app.options('*', cors(corsOptions)) // include before other routes - - -demoAPI(app) -commonAPI(app) -appsApi(app) -debugAPI(app) -datasetsAPI(app) - - -app.get('/', (req, res) => { - res.send('rootpath') -}) - -app.listen(port, () => { - console.log(`Mock run on port ${port}`) -}) - -const sleep = (ms) => { - return new Promise(resolve => setTimeout(resolve, ms)) -} diff --git a/mock-server/package.json b/mock-server/package.json deleted file mode 100644 index 11a68d61e7..0000000000 --- a/mock-server/package.json +++ /dev/null @@ -1,26 +0,0 @@ -{ - "name": "server", - "version": "1.0.0", - "description": "", - "main": "index.js", - "scripts": { - "dev": "nodemon node app.js", - "start": "node app.js", - 
"tcp": "node tcp.js" - }, - "keywords": [], - "author": "", - "license": "MIT", - "engines": { - "node": ">=16.0.0" - }, - "dependencies": { - "body-parser": "^1.20.2", - "cors": "^2.8.5", - "express": "4.18.2", - "express-jwt": "8.4.1" - }, - "devDependencies": { - "nodemon": "2.0.21" - } -} diff --git a/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/layout.tsx b/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/layout.tsx index 049d908dde..97b0164ea1 100644 --- a/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/layout.tsx +++ b/web/app/(commonLayout)/app/(appDetailLayout)/[appId]/layout.tsx @@ -49,7 +49,7 @@ const AppDetailLayout: FC = (props) => { return null return (
- +
{children}
  )
diff --git a/web/app/(commonLayout)/apps/AppCard.tsx b/web/app/(commonLayout)/apps/AppCard.tsx
index eb62cd8899..aec4d6ab6e 100644
--- a/web/app/(commonLayout)/apps/AppCard.tsx
+++ b/web/app/(commonLayout)/apps/AppCard.tsx
@@ -47,7 +47,7 @@ const AppCard = ({
       <>
- +
{app.name}
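The AppCard hunk above swaps the card's hard-coded thumbnail for the emoji pair stored on the app record. The stripped JSX line is not recoverable verbatim, so the following is only a minimal sketch of the resulting render path, assuming the `icon`/`background` prop names that appear on `AppIcon` elsewhere in this diff; the `AppCardHeader` name and class names are hypothetical:

```tsx
import AppIcon from '@/app/components/base/app-icon'

// Shape of the fields this commit starts persisting on each app.
type AppSummary = {
  name: string
  icon: string            // an emoji character, e.g. '🍌'
  icon_background: string // a CSS color, e.g. '#FFEAD5'
}

// Hypothetical card header: renders the stored emoji instead of a fixed image.
const AppCardHeader = ({ app }: { app: AppSummary }) => (
  <div className='flex items-center'>
    <AppIcon icon={app.icon} background={app.icon_background} />
    <div className='ml-3 font-medium'>{app.name}</div>
  </div>
)

export default AppCardHeader
```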
diff --git a/web/app/(commonLayout)/apps/Apps.tsx b/web/app/(commonLayout)/apps/Apps.tsx
index b11b0da6a0..fd989edde4 100644
--- a/web/app/(commonLayout)/apps/Apps.tsx
+++ b/web/app/(commonLayout)/apps/Apps.tsx
@@ -17,6 +17,7 @@ const Apps = () => {
       {apps.map(app => ())}
+
     )
 }
diff --git a/web/app/(commonLayout)/apps/NewAppCard.tsx b/web/app/(commonLayout)/apps/NewAppCard.tsx
index 7fee93534e..ddbb0f03b9 100644
--- a/web/app/(commonLayout)/apps/NewAppCard.tsx
+++ b/web/app/(commonLayout)/apps/NewAppCard.tsx
@@ -9,7 +9,6 @@ import NewAppDialog from './NewAppDialog'
 const CreateAppCard = () => {
   const { t } = useTranslation()
   const [showNewAppDialog, setShowNewAppDialog] = useState(false)
-
   return ( setShowNewAppDialog(true)}>
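The NewAppDialog hunk that follows wires the picker in: the dialog keeps one boolean for visibility plus the currently chosen emoji, `onSelect` persists the choice, and `onClose` resets to the default banana icon. A minimal sketch of that flow, with the `EmojiPicker` callback signatures inferred from the visible diff lines and the `useAppEmoji` hook name being hypothetical:

```tsx
import { useState } from 'react'
import EmojiPicker from '@/app/components/base/emoji-picker'

const DEFAULT_EMOJI = { icon: '🍌', icon_background: '#FFEAD5' }

// Hypothetical extraction of the picker wiring introduced below.
const useAppEmoji = () => {
  const [showEmojiPicker, setShowEmojiPicker] = useState(false)
  const [emoji, setEmoji] = useState(DEFAULT_EMOJI)

  const picker = showEmojiPicker && (
    <EmojiPicker
      onSelect={(icon: string, icon_background: string) => {
        setEmoji({ icon, icon_background }) // keep the user's choice
        setShowEmojiPicker(false)
      }}
      onClose={() => {
        setEmoji(DEFAULT_EMOJI) // dismissing without a pick restores the default
        setShowEmojiPicker(false)
      }}
    />
  )

  return { emoji, picker, openPicker: () => setShowEmojiPicker(true) }
}

export default useAppEmoji
```

The chosen pair then travels in the `createApp` payload as `icon` and `icon_background`, which is why `emoji` joins the callback's dependency list in the hunk.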
diff --git a/web/app/(commonLayout)/apps/NewAppDialog.tsx b/web/app/(commonLayout)/apps/NewAppDialog.tsx
index 3b434fa3b2..10966ba4a4 100644
--- a/web/app/(commonLayout)/apps/NewAppDialog.tsx
+++ b/web/app/(commonLayout)/apps/NewAppDialog.tsx
@@ -17,6 +17,8 @@ import { createApp, fetchAppTemplates } from '@/service/apps'
 import AppIcon from '@/app/components/base/app-icon'
 import AppsContext from '@/context/app-context'
 
+import EmojiPicker from '@/app/components/base/emoji-picker'
+
 type NewAppDialogProps = {
   show: boolean
   onClose?: () => void
@@ -31,6 +33,11 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
   const [newAppMode, setNewAppMode] = useState()
   const [isWithTemplate, setIsWithTemplate] = useState(false)
   const [selectedTemplateIndex, setSelectedTemplateIndex] = useState(-1)
+
+  // Emoji Picker
+  const [showEmojiPicker, setShowEmojiPicker] = useState(false)
+  const [emoji, setEmoji] = useState({ icon: '🍌', icon_background: '#FFEAD5' })
+
   const mutateApps = useContextSelector(AppsContext, state => state.mutateApps)
   const { data: templates, mutate } = useSWR({ url: '/app-templates' }, fetchAppTemplates)
@@ -67,6 +74,8 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
     try {
       const app = await createApp({
         name,
+        icon: emoji.icon,
+        icon_background: emoji.icon_background,
         mode: isWithTemplate ? templates.data[selectedTemplateIndex].mode : newAppMode!,
         config: isWithTemplate ? templates.data[selectedTemplateIndex].model_config : undefined,
       })
@@ -80,9 +89,20 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
       notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
     }
     isCreatingRef.current = false
-  }, [isWithTemplate, newAppMode, notify, router, templates, selectedTemplateIndex])
+  }, [isWithTemplate, newAppMode, notify, router, templates, selectedTemplateIndex, emoji])
 
-  return (
+  return <>
+    {showEmojiPicker && <EmojiPicker
+      onSelect={(icon, icon_background) => {
+        console.log(icon, icon_background)
+        setEmoji({ icon, icon_background })
+        setShowEmojiPicker(false)
+      }}
+      onClose={() => {
+        setEmoji({ icon: '🍌', icon_background: '#FFEAD5' })
+        setShowEmojiPicker(false)
+      }}
+    />}
     {

{t('app.newApp.captionName')}

-
+          <AppIcon onClick={() => { setShowEmojiPicker(true) }} className='cursor-pointer' icon={emoji.icon} background={emoji.icon_background} />
@@ -187,7 +207,7 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
         )}
-  )
+  </>
 }
 
 export default NewAppDialog
diff --git a/web/app/(commonLayout)/datasets/(datasetDetailLayout)/[datasetId]/layout.tsx b/web/app/(commonLayout)/datasets/(datasetDetailLayout)/[datasetId]/layout.tsx
index 1dc6578977..48b0bbc9c5 100644
--- a/web/app/(commonLayout)/datasets/(datasetDetailLayout)/[datasetId]/layout.tsx
+++ b/web/app/(commonLayout)/datasets/(datasetDetailLayout)/[datasetId]/layout.tsx
@@ -155,6 +155,8 @@ const DatasetDetailLayout: FC = (props) => {
       {!hideSideBar && }
diff --git a/web/app/api/hello/route.ts b/web/app/api/hello/route.ts
deleted file mode 100644
index d3a7036df1..0000000000
--- a/web/app/api/hello/route.ts
+++ /dev/null
@@ -1,3 +0,0 @@
-export async function GET(_request: Request) {
-  return new Response('Hello, Next.js!')
-}
diff --git a/web/app/components/app-sidebar/basic.tsx b/web/app/components/app-sidebar/basic.tsx
index 4cefafa0c1..55094c6190 100644
--- a/web/app/components/app-sidebar/basic.tsx
+++ b/web/app/components/app-sidebar/basic.tsx
@@ -15,7 +15,8 @@ export function randomString(length: number) {
 export type IAppBasicProps = {
   iconType?: 'app' | 'api' | 'dataset'
-  iconUrl?: string
+  icon?: string,
+  icon_background?: string,
   name: string
   type: string | React.ReactNode
   hoverTip?: string
@@ -41,15 +42,20 @@ const ICON_MAP = {
   'dataset': 
 }
 
-export default function AppBasic({ iconUrl, name, type, hoverTip, textStyle, iconType = 'app' }: IAppBasicProps) {
+export default function AppBasic({ icon, icon_background, name, type, hoverTip, textStyle, iconType = 'app' }: IAppBasicProps) {
   return (
-      {iconUrl && (
+      {icon && icon_background && iconType === 'app' && (
-          {/* {name} */}
-          {ICON_MAP[iconType]}
+
       )}
+      {iconType !== 'app' &&
+ {ICON_MAP[iconType]} +
+
+      }
     {name}
diff --git a/web/app/components/app-sidebar/index.tsx b/web/app/components/app-sidebar/index.tsx
index cdafcce78e..eb3a444554 100644
--- a/web/app/components/app-sidebar/index.tsx
+++ b/web/app/components/app-sidebar/index.tsx
@@ -7,6 +7,8 @@ export type IAppDetailNavProps = {
   iconType?: 'app' | 'dataset'
   title: string
   desc: string
+  icon: string
+  icon_background: string
   navigation: Array<{
     name: string
     href: string
@@ -16,13 +18,12 @@ export type IAppDetailNavProps = {
   extraInfo?: React.ReactNode
 }
 
-const sampleAppIconUrl = 'https://images.unsplash.com/photo-1472099645785-5658abf4ff4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=facearea&facepad=2&w=256&h=256&q=80'
-const AppDetailNav: FC<IAppDetailNavProps> = ({ title, desc, navigation, extraInfo, iconType = 'app' }) => {
+const AppDetailNav: FC<IAppDetailNavProps> = ({ title, desc, icon, icon_background, navigation, extraInfo, iconType = 'app' }) => {
  return (
- +
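The last two hunks retire the sample Unsplash avatar and thread the same emoji pair from the detail layouts through `AppDetailNav` into `AppBasic`. Since the rendering JSX was stripped from these hunks, here is only a sketch of how the reworked props line up, assuming the `'app'` icon type gets an `AppIcon` and the other types keep their static `ICON_MAP` entries; the placeholder `ICON_MAP` contents and class names are hypothetical:

```tsx
import type { FC, ReactNode } from 'react'
import AppIcon from '@/app/components/base/app-icon'

// Placeholder stand-ins for the static SVG icons kept in ICON_MAP.
const ICON_MAP: Record<'app' | 'api' | 'dataset', ReactNode> = {
  app: <span>APP</span>,
  api: <span>API</span>,
  dataset: <span>DS</span>,
}

type IAppBasicProps = {
  iconType?: 'app' | 'api' | 'dataset'
  icon?: string            // emoji picked when the app was created
  icon_background?: string // background color saved with it
  name: string
}

const AppBasic: FC<IAppBasicProps> = ({ icon, icon_background, name, iconType = 'app' }) => (
  <div className='flex items-center'>
    {/* apps render their stored emoji; other types fall back to static icons */}
    {icon && icon_background && iconType === 'app' && (
      <AppIcon icon={icon} background={icon_background} />
    )}
    {iconType !== 'app' && (
      <div className='rounded-lg border p-1'>{ICON_MAP[iconType]}</div>
    )}
    <div className='ml-2'>{name}</div>
  </div>
)

export default AppBasic
```

`AppDetailNav` simply forwards `icon` and `icon_background` from its own new required props into this component, so both the app and dataset sidebars show the emoji chosen at creation time.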