mirror of https://github.com/langgenius/dify.git
merge
commit ce9c580e30
@@ -0,0 +1,55 @@
# Contributing

Thank you for your interest in [Dify](https://dify.ai) and for wanting to contribute! Before you start,
read the [Code of Conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md) and
check the [existing issues](https://github.com/langgenius/langgenius-gateway/issues).
This document describes how to set up a development environment to build and test [Dify](https://dify.ai).

### Install dependencies

You need to install and configure the following dependencies on your machine to build [Dify](https://dify.ai):

- [Git](http://git-scm.com/)
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
- [Python](https://www.python.org/) version 3.10.x

## Local development

To set up a working development environment, fork the project's git repository, install the backend and frontend dependencies with the appropriate package manager, and set up the docker-compose stack to run.

### Fork the repository

You need to fork the [repository](https://github.com/langgenius/dify).

### Clone the repository

Clone the repository you forked on GitHub:

```
git clone git@github.com:<github_username>/dify.git
```

### Install the backend

See the [Backend README](api/README.md) for how to install the backend application.

### Install the frontend

See the [Frontend README](web/README.md) for how to install the frontend application.

### Visit Dify in your browser

You can now view [Dify](https://dify.ai) in your local environment at [http://localhost:3000](http://localhost:3000).

## Create a pull request

After making your changes, open a pull request (PR). Once you submit a pull request, others from the Dify team/community will review it with you.

Hit a merge conflict, or not sure how to open a pull request? Check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) on how to resolve merge conflicts and other issues. Once your PR has been merged, you will be proudly listed as a contributor on the [contributor chart](https://github.com/langgenius/langgenius-gateway/graphs/contributors).

## Community channels

Stuck? Have questions? Join the [Discord Community server](https://discord.gg/AhzKf7dNgk); we are here to help!
@@ -1,7 +1,8 @@

![]()
<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_CN.md">简体中文</a>
  <a href="./README_CN.md">简体中文</a> |
  <a href="./README_JA.md">日本語</a>
</p>

[Website](https://dify.ai) • [Docs](https://docs.dify.ai) • [Twitter](https://twitter.com/dify_ai) • [Discord](https://discord.gg/FngNHpbcY7)
@@ -1,7 +1,8 @@

![]()
<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_CN.md">简体中文</a>
  <a href="./README_CN.md">简体中文</a> |
  <a href="./README_JA.md">日本語</a>
</p>
@@ -0,0 +1,116 @@

![]()
<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_CN.md">简体中文</a> |
  <a href="./README_JA.md">日本語</a>
</p>

[Website](https://dify.ai) • [Docs](https://docs.dify.ai) • [Twitter](https://twitter.com/dify_ai) • [Discord](https://discord.gg/FngNHpbcY7)

**Dify** is an easy-to-use LLMOps platform designed to empower more people to create sustainable, AI-native applications. With visual orchestration for various application types, Dify offers out-of-the-box, ready-to-use applications that can also serve as Backend-as-a-Service APIs. Unify your development process with one API for plugins and datasets integration, and streamline your operations using a single interface for prompt engineering, visual analytics, and continuous improvement.

Applications made with Dify include:

Out-of-the-box web sites supporting form mode and chat conversation mode
A single API encompassing plugin capabilities, context enhancement, and more, saving you backend coding effort
Visual data analysis, log review, and annotation for applications
Dify is compatible with LangChain and will gradually support multiple LLMs, currently:

- GPT 3 (text-davinci-003)
- GPT 3.5 Turbo (ChatGPT)
- GPT-4

## Use Cloud Services

Visit [Dify.ai](https://dify.ai)

## Install the Community Edition

### System Requirements

Before installing Dify, make sure your machine meets the following minimum system requirements:

- CPU >= 1 Core
- RAM >= 4GB

### Quick Start

The easiest way to start the Dify server is to run the [docker-compose.yml](docker/docker-compose.yaml) file. Before running the installation command, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:

```bash
cd docker
docker-compose up -d
```

After running, you can visit [http://localhost/install](http://localhost/install) in your browser and start the initialization installation process.

### Configuration

If you need to customize the configuration, please refer to the comments in the [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making the changes, run `docker-compose up -d` again.

## Roadmap

Features under development:

- **Datasets**, supporting more datasets, e.g. syncing content from Notion or webpages.
  We will support more datasets, including text, webpages, and even Notion content. Users can build AI applications based on their own data sources.
- **Plugins**, introducing ChatGPT Plugin-standard plugins for applications, or using Dify-produced plugins.
  We will release plugins complying with the ChatGPT standard, as well as Dify's own plugins, to enable more capabilities in applications.
- **Open-source models**, e.g. adopting Llama as a model provider, or for further fine-tuning.
  We will work with excellent open-source models like Llama, by providing them as model options on our platform, or using them for further fine-tuning.

## Q&A

**Q: What can I do with Dify?**

A: Dify is a simple yet powerful LLM development and operations tool. You can use it to build commercial-grade applications and personal assistants. If you want to develop your own applications, Dify saves you the backend work of integrating with OpenAI, offers visual operation capabilities, and lets you continuously improve and train your GPT model.

**Q: How do I "train" my own model with Dify?**

A: A valuable application consists of prompt engineering, context enhancement, and fine-tuning. With a hybrid programming approach combining prompts with programming languages (like a template engine), you can easily accomplish long-text embedding or capturing subtitles from a user-input YouTube video, all of which will be submitted as context for the LLM to process. We place great emphasis on application operability: data generated by users while using your application can be analyzed, annotated, and used for continuous training. Without the right tools, these steps can be time-consuming.

**Q: What do I need to prepare if I want to create my own application?**

A: We assume you already have an OpenAI API Key; if not, please register for one. If you already have content that can serve as training context, that's great!

**Q: What interface languages are available?**

A: English and Chinese are currently supported, and you can contribute language packs to us.

## Star History

[](https://star-history.com/#langgenius/dify&Date)

## Contact Us

If you have any questions, suggestions, or partnership inquiries, feel free to contact us through the following channels:

- Submit an Issue or PR on our GitHub Repo
- Join the discussion in our [Discord](https://discord.gg/FngNHpbcY7) community
- Send an email to hello@dify.ai

We're eager to assist you and together create more fun and useful AI applications!

## Contributing

To ensure proper review, all code contributions, including those from contributors with direct commit access, must be submitted as pull requests and approved by the core development team before being merged.

We welcome all pull requests! If you'd like to help, check out the [Contribution Guide](CONTRIBUTING.md) for more information on how to get started.

## Security

To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.

## Citation

This software uses the following open-source software:

- Chase, H. (2022). LangChain [Computer software]. https://github.com/hwchase17/langchain
- Liu, J. (2022). LlamaIndex [Computer software]. doi: 10.5281/zenodo.1234.

For more information, please refer to the official website or license text of the respective software.

## License

This repository is available under the [Dify Open Source License](LICENSE).
@@ -47,6 +47,7 @@ DEFAULTS = {
    'PDF_PREVIEW': 'True',
    'LOG_LEVEL': 'INFO',
    'DISABLE_PROVIDER_CONFIG_VALIDATION': 'False',
    'DEFAULT_LLM_PROVIDER': 'openai'
}
@@ -181,6 +182,10 @@ class Config:
        # You could disable it for compatibility with certain OpenAPI providers
        self.DISABLE_PROVIDER_CONFIG_VALIDATION = get_bool_env('DISABLE_PROVIDER_CONFIG_VALIDATION')

        # For temp use only
        # set default LLM provider, default is 'openai', support `azure_openai`
        self.DEFAULT_LLM_PROVIDER = get_env('DEFAULT_LLM_PROVIDER')

class CloudEditionConfig(Config):

    def __init__(self):
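The `get_env`/`get_bool_env` helpers used above are not part of this diff; a minimal sketch of how such a DEFAULTS-backed lookup typically works (the helper bodies here are assumptions for illustration, not Dify's actual implementation):

```python
import os

# Illustrative defaults table mirroring the DEFAULTS dict in the hunk above.
DEFAULTS = {
    'LOG_LEVEL': 'INFO',
    'DISABLE_PROVIDER_CONFIG_VALIDATION': 'False',
    'DEFAULT_LLM_PROVIDER': 'openai',
}

def get_env(key: str) -> str:
    """Read an environment variable, falling back to the DEFAULTS table."""
    return os.environ.get(key, DEFAULTS.get(key, ''))

def get_bool_env(key: str) -> bool:
    """Interpret an environment variable (or its default) as a boolean flag."""
    return get_env(key).lower() == 'true'
```

With this scheme, `DEFAULT_LLM_PROVIDER=azure_openai` in the environment overrides the `'openai'` default without any code change.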
@@ -82,29 +82,33 @@ class ProviderTokenApi(Resource):

        args = parser.parse_args()

        if not args['token']:
            raise ValueError('Token is empty')
        if args['token']:
            try:
                ProviderService.validate_provider_configs(
                    tenant=current_user.current_tenant,
                    provider_name=ProviderName(provider),
                    configs=args['token']
                )
                token_is_valid = True
            except ValidateFailedError:
                token_is_valid = False

        try:
            ProviderService.validate_provider_configs(
            base64_encrypted_token = ProviderService.get_encrypted_token(
                tenant=current_user.current_tenant,
                provider_name=ProviderName(provider),
                configs=args['token']
            )
            token_is_valid = True
        except ValidateFailedError:
        else:
            base64_encrypted_token = None
            token_is_valid = False

        tenant = current_user.current_tenant

        base64_encrypted_token = ProviderService.get_encrypted_token(
            tenant=current_user.current_tenant,
            provider_name=ProviderName(provider),
            configs=args['token']
        )

        provider_model = Provider.query.filter_by(tenant_id=tenant.id, provider_name=provider,
                                                  provider_type=ProviderType.CUSTOM.value).first()
        provider_model = db.session.query(Provider).filter(
            Provider.tenant_id == tenant.id,
            Provider.provider_name == provider,
            Provider.provider_type == ProviderType.CUSTOM.value
        ).first()

        # Only allow updating token for CUSTOM provider type
        if provider_model:
@@ -117,6 +121,16 @@ class ProviderTokenApi(Resource):
                                           is_valid=token_is_valid)
            db.session.add(provider_model)

        if provider_model.is_valid:
            other_providers = db.session.query(Provider).filter(
                Provider.tenant_id == tenant.id,
                Provider.provider_name != provider,
                Provider.provider_type == ProviderType.CUSTOM.value
            ).all()

            for other_provider in other_providers:
                other_provider.is_valid = False

        db.session.commit()

        if provider in [ProviderName.ANTHROPIC.value, ProviderName.AZURE_OPENAI.value, ProviderName.COHERE.value,
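The hunk above enforces that at most one custom provider stays valid per tenant: marking one provider valid invalidates the others. A sketch of that rule with plain objects standing in for the SQLAlchemy models (class and function names here are illustrative, not Dify's API):

```python
# Illustrative stand-in for the Provider model in the diff above.
class CustomProvider:
    def __init__(self, name, is_valid=False):
        self.name = name
        self.is_valid = is_valid

def activate_provider(providers, chosen_name):
    """Mark one provider valid and invalidate every other custom provider."""
    for p in providers:
        p.is_valid = (p.name == chosen_name)
    return providers
```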
@@ -11,9 +11,10 @@ from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_except

@retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
def get_embedding(
    text: str,
    engine: Optional[str] = None,
    openai_api_key: Optional[str] = None,
    text: str,
    engine: Optional[str] = None,
    api_key: Optional[str] = None,
    **kwargs
) -> List[float]:
    """Get embedding.
@@ -25,11 +26,12 @@ def get_embedding(

    """
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], engine=engine, api_key=openai_api_key)["data"][0]["embedding"]
    return openai.Embedding.create(input=[text], engine=engine, api_key=api_key, **kwargs)["data"][0]["embedding"]


@retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
async def aget_embedding(text: str, engine: Optional[str] = None, openai_api_key: Optional[str] = None) -> List[float]:
async def aget_embedding(text: str, engine: Optional[str] = None, api_key: Optional[str] = None, **kwargs) -> List[
        float]:
    """Asynchronously get embedding.

    NOTE: Copied from OpenAI's embedding utils:
@@ -42,16 +44,17 @@ async def aget_embedding(text: str, engine: Optional[str] = None, openai_api_key
    # replace newlines, which can negatively affect performance.
    text = text.replace("\n", " ")

    return (await openai.Embedding.acreate(input=[text], engine=engine, api_key=openai_api_key))["data"][0][
    return (await openai.Embedding.acreate(input=[text], engine=engine, api_key=api_key, **kwargs))["data"][0][
        "embedding"
    ]


@retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
def get_embeddings(
    list_of_text: List[str],
    engine: Optional[str] = None,
    openai_api_key: Optional[str] = None
    list_of_text: List[str],
    engine: Optional[str] = None,
    api_key: Optional[str] = None,
    **kwargs
) -> List[List[float]]:
    """Get embeddings.
@@ -67,14 +70,14 @@ def get_embeddings(
    # replace newlines, which can negatively affect performance.
    list_of_text = [text.replace("\n", " ") for text in list_of_text]

    data = openai.Embedding.create(input=list_of_text, engine=engine, api_key=openai_api_key).data
    data = openai.Embedding.create(input=list_of_text, engine=engine, api_key=api_key, **kwargs).data
    data = sorted(data, key=lambda x: x["index"])  # maintain the same order as input.
    return [d["embedding"] for d in data]


@retry(reraise=True, wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
async def aget_embeddings(
    list_of_text: List[str], engine: Optional[str] = None, openai_api_key: Optional[str] = None
    list_of_text: List[str], engine: Optional[str] = None, api_key: Optional[str] = None, **kwargs
) -> List[List[float]]:
    """Asynchronously get embeddings.
@@ -90,7 +93,7 @@ async def aget_embeddings(
    # replace newlines, which can negatively affect performance.
    list_of_text = [text.replace("\n", " ") for text in list_of_text]

    data = (await openai.Embedding.acreate(input=list_of_text, engine=engine, api_key=openai_api_key)).data
    data = (await openai.Embedding.acreate(input=list_of_text, engine=engine, api_key=api_key, **kwargs)).data
    data = sorted(data, key=lambda x: x["index"])  # maintain the same order as input.
    return [d["embedding"] for d in data]
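The common thread of the hunks above is renaming `openai_api_key` to `api_key` and adding `**kwargs`, so Azure-specific parameters (`api_type`, `api_base`, `api_version`) can flow through to the client call unchanged. A self-contained sketch of that pass-through pattern, with a stub in place of the real `openai.Embedding.create`:

```python
# Stub standing in for openai.Embedding.create; it just records its arguments
# so the kwargs pass-through is observable without a network call.
def create_embedding_stub(input, engine=None, api_key=None, **kwargs):
    return {"engine": engine, "api_key": api_key, **kwargs}

def get_embedding(text, engine=None, api_key=None, **kwargs):
    # replace newlines, which can negatively affect performance (as in the diff).
    text = text.replace("\n", " ")
    return create_embedding_stub(input=[text], engine=engine, api_key=api_key, **kwargs)
```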
@@ -98,19 +101,30 @@ async def aget_embeddings(
class OpenAIEmbedding(BaseEmbedding):

    def __init__(
        self,
        mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE,
        model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002,
        deployment_name: Optional[str] = None,
        openai_api_key: Optional[str] = None,
        **kwargs: Any,
        self,
        mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE,
        model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002,
        deployment_name: Optional[str] = None,
        openai_api_key: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        """Init params."""
        super().__init__(**kwargs)
        new_kwargs = {}

        if 'embed_batch_size' in kwargs:
            new_kwargs['embed_batch_size'] = kwargs['embed_batch_size']

        if 'tokenizer' in kwargs:
            new_kwargs['tokenizer'] = kwargs['tokenizer']

        super().__init__(**new_kwargs)
        self.mode = OpenAIEmbeddingMode(mode)
        self.model = OpenAIEmbeddingModelType(model)
        self.deployment_name = deployment_name
        self.openai_api_key = openai_api_key
        self.openai_api_type = kwargs.get('openai_api_type')
        self.openai_api_version = kwargs.get('openai_api_version')
        self.openai_api_base = kwargs.get('openai_api_base')

    @handle_llm_exceptions
    def _get_query_embedding(self, query: str) -> List[float]:
@@ -122,7 +136,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _QUERY_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _QUERY_MODE_MODEL_DICT[key]
        return get_embedding(query, engine=engine, openai_api_key=self.openai_api_key)
        return get_embedding(query, engine=engine, api_key=self.openai_api_key,
                             api_type=self.openai_api_type, api_version=self.openai_api_version,
                             api_base=self.openai_api_base)

    def _get_text_embedding(self, text: str) -> List[float]:
        """Get text embedding."""
@@ -133,7 +149,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
        return get_embedding(text, engine=engine, openai_api_key=self.openai_api_key)
        return get_embedding(text, engine=engine, api_key=self.openai_api_key,
                             api_type=self.openai_api_type, api_version=self.openai_api_version,
                             api_base=self.openai_api_base)

    async def _aget_text_embedding(self, text: str) -> List[float]:
        """Asynchronously get text embedding."""
@@ -144,7 +162,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
        return await aget_embedding(text, engine=engine, openai_api_key=self.openai_api_key)
        return await aget_embedding(text, engine=engine, api_key=self.openai_api_key,
                                    api_type=self.openai_api_type, api_version=self.openai_api_version,
                                    api_base=self.openai_api_base)

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Get text embeddings.
@@ -160,7 +180,9 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
        embeddings = get_embeddings(texts, engine=engine, openai_api_key=self.openai_api_key)
        embeddings = get_embeddings(texts, engine=engine, api_key=self.openai_api_key,
                                    api_type=self.openai_api_type, api_version=self.openai_api_version,
                                    api_base=self.openai_api_base)
        return embeddings

    async def _aget_text_embeddings(self, texts: List[str]) -> List[List[float]]:
@@ -172,5 +194,7 @@ class OpenAIEmbedding(BaseEmbedding):
        if key not in _TEXT_MODE_MODEL_DICT:
            raise ValueError(f"Invalid mode, model combination: {key}")
        engine = _TEXT_MODE_MODEL_DICT[key]
        embeddings = await aget_embeddings(texts, engine=engine, openai_api_key=self.openai_api_key)
        embeddings = await aget_embeddings(texts, engine=engine, api_key=self.openai_api_key,
                                           api_type=self.openai_api_type, api_version=self.openai_api_version,
                                           api_base=self.openai_api_base)
        return embeddings
@@ -33,8 +33,11 @@ class IndexBuilder:
            max_chunk_overlap=20
        )

        provider = LLMBuilder.get_default_provider(tenant_id)

        model_credentials = LLMBuilder.get_model_credentials(
            tenant_id=tenant_id,
            model_provider=provider,
            model_name='text-embedding-ada-002'
        )
@@ -4,9 +4,14 @@ from langchain.callbacks import CallbackManager
from langchain.llms.fake import FakeListLLM

from core.constant import llm_constant
from core.llm.error import ProviderTokenNotInitError
from core.llm.provider.base import BaseProvider
from core.llm.provider.llm_provider_service import LLMProviderService
from core.llm.streamable_azure_chat_open_ai import StreamableAzureChatOpenAI
from core.llm.streamable_azure_open_ai import StreamableAzureOpenAI
from core.llm.streamable_chat_open_ai import StreamableChatOpenAI
from core.llm.streamable_open_ai import StreamableOpenAI
from models.provider import ProviderType


class LLMBuilder:
@@ -31,16 +36,23 @@ class LLMBuilder:
        if model_name == 'fake':
            return FakeListLLM(responses=[])

        provider = cls.get_default_provider(tenant_id)

        mode = cls.get_mode_by_model(model_name)
        if mode == 'chat':
            # llm_cls = StreamableAzureChatOpenAI
            llm_cls = StreamableChatOpenAI
            if provider == 'openai':
                llm_cls = StreamableChatOpenAI
            else:
                llm_cls = StreamableAzureChatOpenAI
        elif mode == 'completion':
            llm_cls = StreamableOpenAI
            if provider == 'openai':
                llm_cls = StreamableOpenAI
            else:
                llm_cls = StreamableAzureOpenAI
        else:
            raise ValueError(f"model name {model_name} is not supported.")

        model_credentials = cls.get_model_credentials(tenant_id, model_name)
        model_credentials = cls.get_model_credentials(tenant_id, provider, model_name)

        return llm_cls(
            model_name=model_name,
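The nested if/else chain added above selects one of four streamable LLM classes from a (mode, provider) pair; in the diff, any non-`openai` provider falls through to the Azure class. The same selection can be sketched as a dispatch table. The placeholder classes below are illustrative stand-ins, and unlike the diff's `else` branch this sketch only recognizes `azure_openai` explicitly:

```python
# Placeholder stand-ins for the streamable LLM wrappers in the diff above.
class StreamableChatOpenAI: pass
class StreamableAzureChatOpenAI: pass
class StreamableOpenAI: pass
class StreamableAzureOpenAI: pass

def pick_llm_cls(mode: str, provider: str):
    """Map a (mode, provider) pair to an LLM wrapper class."""
    table = {
        ('chat', 'openai'): StreamableChatOpenAI,
        ('chat', 'azure_openai'): StreamableAzureChatOpenAI,
        ('completion', 'openai'): StreamableOpenAI,
        ('completion', 'azure_openai'): StreamableAzureOpenAI,
    }
    try:
        return table[(mode, provider)]
    except KeyError:
        raise ValueError(f"mode {mode} / provider {provider} is not supported.")
```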
@@ -86,18 +98,31 @@ class LLMBuilder:
            raise ValueError(f"model name {model_name} is not supported.")

    @classmethod
    def get_model_credentials(cls, tenant_id: str, model_name: str) -> dict:
    def get_model_credentials(cls, tenant_id: str, model_provider: str, model_name: str) -> dict:
        """
        Returns the API credentials for the given tenant_id and model_name, based on the model's provider.
        Raises an exception if the model_name is not found or if the provider is not found.
        """
        if not model_name:
            raise Exception('model name not found')
        #
        # if model_name not in llm_constant.models:
        #     raise Exception('model {} not found'.format(model_name))

        if model_name not in llm_constant.models:
            raise Exception('model {} not found'.format(model_name))

        model_provider = llm_constant.models[model_name]
        # model_provider = llm_constant.models[model_name]

        provider_service = LLMProviderService(tenant_id=tenant_id, provider_name=model_provider)
        return provider_service.get_credentials(model_name)

    @classmethod
    def get_default_provider(cls, tenant_id: str) -> str:
        provider = BaseProvider.get_valid_provider(tenant_id)
        if not provider:
            raise ProviderTokenNotInitError()

        if provider.provider_type == ProviderType.SYSTEM.value:
            provider_name = 'openai'
        else:
            provider_name = provider.provider_name

        return provider_name
@@ -36,10 +36,9 @@ class AzureProvider(BaseProvider):
        """
        Returns the API credentials for Azure OpenAI as a dictionary.
        """
        encrypted_config = self.get_provider_api_key(model_id=model_id)
        config = json.loads(encrypted_config)
        config = self.get_provider_api_key(model_id=model_id)
        config['openai_api_type'] = 'azure'
        config['deployment_name'] = model_id
        config['deployment_name'] = model_id.replace('.', '')
        return config

    def get_provider_name(self):
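The deployment-name change above strips dots from the model id, presumably because Azure OpenAI deployment names do not allow the `.` character (so `gpt-3.5-turbo` becomes `gpt-35-turbo`). Isolated as a one-line helper for illustration:

```python
def azure_deployment_name(model_id: str) -> str:
    """Sanitize a model id into an Azure-style deployment name by removing dots."""
    return model_id.replace('.', '')
```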
@@ -51,12 +50,11 @@ class AzureProvider(BaseProvider):
        """
        try:
            config = self.get_provider_api_key()
            config = json.loads(config)
        except:
            config = {
                'openai_api_type': 'azure',
                'openai_api_version': '2023-03-15-preview',
                'openai_api_base': 'https://foo.microsoft.com/bar',
                'openai_api_base': 'https://<your-domain-prefix>.openai.azure.com/',
                'openai_api_key': ''
            }
@@ -65,7 +63,7 @@ class AzureProvider(BaseProvider):
            config = {
                'openai_api_type': 'azure',
                'openai_api_version': '2023-03-15-preview',
                'openai_api_base': 'https://foo.microsoft.com/bar',
                'openai_api_base': 'https://<your-domain-prefix>.openai.azure.com/',
                'openai_api_key': ''
            }
@@ -14,7 +14,7 @@ class BaseProvider(ABC):
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id

    def get_provider_api_key(self, model_id: Optional[str] = None, prefer_custom: bool = True) -> str:
    def get_provider_api_key(self, model_id: Optional[str] = None, prefer_custom: bool = True) -> Union[str | dict]:
        """
        Returns the decrypted API key for the given tenant_id and provider_name.
        If the provider is of type SYSTEM and the quota is exceeded, raises a QuotaExceededError.
@@ -43,23 +43,35 @@ class BaseProvider(ABC):
        Returns the Provider instance for the given tenant_id and provider_name.
        If both CUSTOM and System providers exist, the preferred provider will be returned based on the prefer_custom flag.
        """
        providers = db.session.query(Provider).filter(
            Provider.tenant_id == self.tenant_id,
            Provider.provider_name == self.get_provider_name().value
        ).order_by(Provider.provider_type.desc() if prefer_custom else Provider.provider_type).all()
        return BaseProvider.get_valid_provider(self.tenant_id, self.get_provider_name().value, prefer_custom)

    @classmethod
    def get_valid_provider(cls, tenant_id: str, provider_name: str = None, prefer_custom: bool = False) -> Optional[Provider]:
        """
        Returns the Provider instance for the given tenant_id and provider_name.
        If both CUSTOM and System providers exist, the preferred provider will be returned based on the prefer_custom flag.
        """
        query = db.session.query(Provider).filter(
            Provider.tenant_id == tenant_id
        )

        if provider_name:
            query = query.filter(Provider.provider_name == provider_name)

        providers = query.order_by(Provider.provider_type.desc() if prefer_custom else Provider.provider_type).all()

        custom_provider = None
        system_provider = None

        for provider in providers:
            if provider.provider_type == ProviderType.CUSTOM.value:
            if provider.provider_type == ProviderType.CUSTOM.value and provider.is_valid and provider.encrypted_config:
                custom_provider = provider
            elif provider.provider_type == ProviderType.SYSTEM.value:
            elif provider.provider_type == ProviderType.SYSTEM.value and provider.is_valid:
                system_provider = provider

        if custom_provider and custom_provider.is_valid and custom_provider.encrypted_config:
        if custom_provider:
            return custom_provider
        elif system_provider and system_provider.is_valid:
        elif system_provider:
            return system_provider
        else:
            return None
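The selection rule in the hunk above reduces to: a valid CUSTOM provider with stored credentials beats a valid SYSTEM provider, and anything else yields nothing. A self-contained sketch with plain objects standing in for the SQLAlchemy `Provider` model (names here are illustrative):

```python
# Illustrative stand-in for the Provider model used in the diff above.
class Provider:
    def __init__(self, provider_type, is_valid=True, encrypted_config=None):
        self.provider_type = provider_type
        self.is_valid = is_valid
        self.encrypted_config = encrypted_config

def get_valid_provider(providers):
    """Prefer a valid custom provider with credentials over a valid system one."""
    custom = system = None
    for p in providers:
        if p.provider_type == 'custom' and p.is_valid and p.encrypted_config:
            custom = p
        elif p.provider_type == 'system' and p.is_valid:
            system = p
    return custom or system  # None when neither qualifies
```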
@@ -80,7 +92,7 @@ class BaseProvider(ABC):
        try:
            config = self.get_provider_api_key()
        except:
            config = 'THIS-IS-A-MOCK-TOKEN'
            config = ''

        if obfuscated:
            return self.obfuscated_token(config)
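The hunk above changes the fallback from a mock token to an empty string before the value is passed to `obfuscated_token`. That method is not shown in this diff; a plausible sketch of what such masking typically looks like (the body below is an assumption, not Dify's implementation), which also shows why an empty fallback is safe to mask:

```python
def obfuscated_token(token: str) -> str:
    """Mask a credential for display, keeping only the edges visible.

    NOTE: illustrative sketch only; the real method is not part of this diff.
    """
    if not token:
        return token  # empty fallback passes through unchanged
    if len(token) <= 8:
        return '*' * len(token)
    return token[:6] + '*' * 12 + token[-2:]
```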
@@ -1,12 +1,50 @@
import requests
from langchain.schema import BaseMessage, ChatResult, LLMResult
from langchain.chat_models import AzureChatOpenAI
from typing import Optional, List
from typing import Optional, List, Dict, Any

from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableAzureChatOpenAI(AzureChatOpenAI):
    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        try:
            import openai
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`openai` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the openai package. Try upgrading it "
                "with `pip install --upgrade openai`."
            )
        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
            raise ValueError("n must be 1 when streaming.")
        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
        return {
            **super()._default_params,
            "engine": self.deployment_name,
            "api_type": self.openai_api_type,
            "api_base": self.openai_api_base,
            "api_version": self.openai_api_version,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }

    def get_messages_tokens(self, messages: List[BaseMessage]) -> int:
        """Get the number of tokens in a list of messages.
@@ -0,0 +1,64 @@
import os

from langchain.llms import AzureOpenAI
from langchain.schema import LLMResult
from typing import Optional, List, Dict, Mapping, Any

from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableAzureOpenAI(AzureOpenAI):
    openai_api_type: str = "azure"
    openai_api_version: str = ""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        try:
            import openai

            values["client"] = openai.Completion
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        if values["streaming"] and values["n"] > 1:
            raise ValueError("Cannot stream results when n > 1.")
        if values["streaming"] and values["best_of"] > 1:
            raise ValueError("Cannot stream results when best_of > 1.")
        return values

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**super()._invocation_params, **{
            "api_type": self.openai_api_type,
            "api_base": self.openai_api_base,
            "api_version": self.openai_api_version,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }}

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {**super()._identifying_params, **{
            "api_type": self.openai_api_type,
            "api_base": self.openai_api_base,
            "api_version": self.openai_api_version,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }}

    @handle_llm_exceptions
    def generate(
            self, prompts: List[str], stop: Optional[List[str]] = None
    ) -> LLMResult:
        return super().generate(prompts, stop)

    @handle_llm_exceptions_async
    async def agenerate(
            self, prompts: List[str], stop: Optional[List[str]] = None
    ) -> LLMResult:
        return await super().agenerate(prompts, stop)
@@ -1,12 +1,52 @@
import os

from langchain.schema import BaseMessage, ChatResult, LLMResult
from langchain.chat_models import ChatOpenAI
from typing import Optional, List
from typing import Optional, List, Dict, Any

from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableChatOpenAI(ChatOpenAI):

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        try:
            import openai
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`openai` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the openai package. Try upgrading it "
                "with `pip install --upgrade openai`."
            )
        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
            raise ValueError("n must be 1 when streaming.")
        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
        return {
            **super()._default_params,
            "api_type": 'openai',
            "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
            "api_version": None,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }

    def get_messages_tokens(self, messages: List[BaseMessage]) -> int:
        """Get the number of tokens in a list of messages.
@@ -1,12 +1,54 @@
import os

from langchain.schema import LLMResult
from typing import Optional, List
from typing import Optional, List, Dict, Any, Mapping
from langchain import OpenAI
from pydantic import root_validator

from core.llm.error_handle_wraps import handle_llm_exceptions, handle_llm_exceptions_async


class StreamableOpenAI(OpenAI):

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        try:
            import openai

            values["client"] = openai.Completion
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        if values["streaming"] and values["n"] > 1:
            raise ValueError("Cannot stream results when n > 1.")
        if values["streaming"] and values["best_of"] > 1:
            raise ValueError("Cannot stream results when best_of > 1.")
        return values

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**super()._invocation_params, **{
            "api_type": 'openai',
            "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
            "api_version": None,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }}

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {**super()._identifying_params, **{
            "api_type": 'openai',
            "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
            "api_version": None,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization if self.openai_organization else None,
        }}

    @handle_llm_exceptions
    def generate(
            self, prompts: List[str], stop: Optional[List[str]] = None
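Both classes use the same override pattern: merge the parent's invocation parameters with pinned OpenAI connection settings, letting the later keys win. The pattern can be sketched standalone as below; `BaseLLM`, `StreamableLLM`, and the placeholder key are illustrative stand-ins, not names from the Dify codebase.

```python
import os
from typing import Any, Dict


class BaseLLM:
    """Minimal stand-in for the langchain base class (assumed shape)."""

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {"model": "text-davinci-003", "temperature": 0.7}


class StreamableLLM(BaseLLM):
    """Pins OpenAI connection settings by overriding _invocation_params."""

    openai_api_key = "sk-placeholder"  # illustrative only; never hard-code real keys

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        # Merge the parent's params with fixed connection settings.
        # In a dict literal, keys listed after the unpacking override it.
        return {
            **super()._invocation_params,
            "api_type": "openai",
            "api_base": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
            "api_version": None,
            "api_key": self.openai_api_key,
        }


params = StreamableLLM()._invocation_params
```

The design choice here is that subclasses never mutate the parent's dict; each property call builds a fresh merged dict, so connection settings cannot leak between instances.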
@@ -1,117 +0,0 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*

# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage
*.lcov

# nyc test coverage
.nyc_output

# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# Bower dependency directory (https://bower.io/)
bower_components

# node-waf configuration
.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release

# Dependency directories
node_modules/
jspm_packages/

# TypeScript v1 declaration files
typings/

# TypeScript cache
*.tsbuildinfo

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variables file
.env
.env.test

# parcel-bundler cache (https://parceljs.org/)
.cache

# Next.js build output
.next

# Nuxt.js build / generate output
.nuxt
dist

# Gatsby files
.cache/
# Comment in the public line in if your project uses Gatsby and *not* Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public

# vuepress build output
.vuepress/dist

# Serverless directories
.serverless/

# FuseBox cache
.fusebox/

# DynamoDB Local files
.dynamodb/

# TernJS port file
.tern-port

# npm
package-lock.json

# yarn
.pnp.cjs
.pnp.loader.mjs
.yarn/
yarn.lock
.yarnrc.yml

# pnpm
pnpm-lock.yaml
@@ -1 +0,0 @@
# Mock Server
@@ -1,551 +0,0 @@
const chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-_'

function randomString (length) {
  let result = ''
  for (let i = length; i > 0; --i) result += chars[Math.floor(Math.random() * chars.length)]
  return result
}

// https://www.notion.so/55773516a0194781ae211792a44a3663?pvs=4
const VirtualData = new Array(10).fill().map((_, index) => {
  const date = new Date(Date.now() - index * 24 * 60 * 60 * 1000)
  return {
    date: `${date.getFullYear()}-${date.getMonth()}-${date.getDate()}`,
    conversation_count: Math.floor(Math.random() * 10) + index,
    terminal_count: Math.floor(Math.random() * 10) + index,
    token_count: Math.floor(Math.random() * 10) + index,
    total_price: Math.floor(Math.random() * 10) + index,
  }
})

const registerAPI = function (app) {
  const apps = [{
    id: '1',
    name: 'chat app',
    mode: 'chat',
    description: 'description01',
    enable_site: true,
    enable_api: true,
    api_rpm: 60,
    api_rph: 3600,
    is_demo: false,
    model_config: {
      provider: 'OPENAI',
      model_id: 'gpt-3.5-turbo',
      configs: {
        prompt_template: '你是我的解梦小助手,请参考 {{book}} 回答我有关梦境的问题。在回答前请称呼我为 {{myName}}。',
        prompt_variables: [
          {
            key: 'book',
            name: '书',
            value: '《梦境解析》',
            type: 'string',
            description: '请具体说下书名'
          },
          {
            key: 'myName',
            name: 'your name',
            value: 'Book',
            type: 'string',
            description: 'please tell me your name'
          }
        ],
        completion_params: {
          max_token: 16,
          temperature: 1, // 0-2
          top_p: 1,
          presence_penalty: 1, // -2-2
          frequency_penalty: 1, // -2-2
        }
      }
    },
    site: {
      access_token: '1000',
      title: 'site 01',
      author: 'John',
      default_language: 'zh-Hans-CN',
      customize_domain: 'http://customize_domain',
      theme: 'theme',
      customize_token_strategy: 'must',
      prompt_public: true
    }
  },
  {
    id: '2',
    name: 'completion app',
    mode: 'completion', // generation text
    description: 'description 02', // generation text
    enable_site: false,
    enable_api: false,
    api_rpm: 60,
    api_rph: 3600,
    is_demo: false,
    model_config: {
      provider: 'OPENAI',
      model_id: 'text-davinci-003',
      configs: {
        prompt_template: '你是我的翻译小助手,请把以下内容 {{langA}} 翻译成 {{langB}},以下的内容:',
        prompt_variables: [
          {
            key: 'langA',
            name: '原始语音',
            value: '中文',
            type: 'string',
            description: '这是中文格式的原始语音'
          },
          {
            key: 'langB',
            name: '目标语言',
            value: '英语',
            type: 'string',
            description: '这是英语格式的目标语言'
          }
        ],
        completion_params: {
          max_token: 16,
          temperature: 1, // 0-2
          top_p: 1,
          presence_penalty: 1, // -2-2
          frequency_penalty: 1, // -2-2
        }
      }
    },
    site: {
      access_token: '2000',
      title: 'site 02',
      author: 'Mark',
      default_language: 'en-US',
      customize_domain: 'http://customize_domain',
      theme: 'theme',
      customize_token_strategy: 'must',
      prompt_public: false
    }
  },
  ]

  const apikeys = [{
    id: '111121312313132',
    token: 'sk-DEFGHJKMNPQRSTWXYZabcdefhijk1234',
    last_used_at: '1679212138000',
    created_at: '1673316000000'
  }, {
    id: '43441242131223123',
    token: 'sk-EEFGHJKMNPQRSTWXYZabcdefhijk5678',
    last_used_at: '1679212721000',
    created_at: '1679212731000'
  }]

  // create app
  app.post('/apps', async (req, res) => {
    apps.push({
      id: apps.length + 1 + '',
      ...req.body,
    })
    res.send({
      result: 'success'
    })
  })

  // app list
  app.get('/apps', async (req, res) => {
    res.send({
      data: apps
    })
  })

  // app detail
  app.get('/apps/:id', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id) || apps[0]
    res.send(item)
  })

  // update app name
  app.post('/apps/:id/name', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    item.name = req.body.name
    res.send(item || null)
  })

  // update app site-enable status
  app.post('/apps/:id/site-enable', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    console.log(item)
    item.enable_site = req.body.enable_site
    res.send(item || null)
  })

  // update app api-enable status
  app.post('/apps/:id/api-enable', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    console.log(item)
    item.enable_api = req.body.enable_api
    res.send(item || null)
  })

  // update app rate-limit
  app.post('/apps/:id/rate-limit', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    console.log(item)
    item.api_rpm = req.body.api_rpm
    item.api_rph = req.body.api_rph
    res.send(item || null)
  })

  // update app url including code
  app.post('/apps/:id/site/access-token-reset', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    console.log(item)
    item.site.access_token = randomString(12)
    res.send(item || null)
  })

  // update app config
  app.post('/apps/:id/site', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    console.log(item)
    item.name = req.body.title
    item.description = req.body.description
    item.prompt_public = req.body.prompt_public
    item.default_language = req.body.default_language
    res.send(item || null)
  })

  // get statistics daily-conversations
  app.get('/apps/:id/statistics/daily-conversations', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    if (item) {
      res.send({
        data: VirtualData
      })
    } else {
      res.send({
        data: []
      })
    }
  })

  // get statistics daily-end-users
  app.get('/apps/:id/statistics/daily-end-users', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    if (item) {
      res.send({
        data: VirtualData
      })
    } else {
      res.send({
        data: []
      })
    }
  })

  // get statistics token-costs
  app.get('/apps/:id/statistics/token-costs', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    if (item) {
      res.send({
        data: VirtualData
      })
    } else {
      res.send({
        data: []
      })
    }
  })

  // update app model config
  app.post('/apps/:id/model-config', async (req, res) => {
    const item = apps.find(item => item.id === req.params.id)
    console.log(item)
    item.model_config = req.body
    res.send(item || null)
  })

  // get api keys list
  app.get('/apps/:id/api-keys', async (req, res) => {
    res.send({
      data: apikeys
    })
  })

  // del api key
  app.delete('/apps/:id/api-keys/:api_key_id', async (req, res) => {
    res.send({
      result: 'success'
    })
  })

  // create api key
  app.post('/apps/:id/api-keys', async (req, res) => {
    res.send({
      id: 'e2424241313131',
      token: 'sk-GEFGHJKMNPQRSTWXYZabcdefhijk0124',
      created_at: '1679216688962'
    })
  })

  // get completion-conversations
  app.get('/apps/:id/completion-conversations', async (req, res) => {
    const data = {
      data: [{
        id: 1,
        from_end_user_id: 'user 1',
        summary: 'summary1',
        created_at: '2023-10-11',
        annotated: true,
        message_count: 100,
        user_feedback_stats: {
          like: 4, dislike: 5
        },
        admin_feedback_stats: {
          like: 1, dislike: 2
        },
        message: {
          message: 'message1',
          query: 'question1',
          answer: 'answer1'
        }
      }, {
        id: 12,
        from_end_user_id: 'user 2',
        summary: 'summary2',
        created_at: '2023-10-01',
        annotated: false,
        message_count: 10,
        user_feedback_stats: {
          like: 2, dislike: 20
        },
        admin_feedback_stats: {
          like: 12, dislike: 21
        },
        message: {
          message: 'message2',
          query: 'question2',
          answer: 'answer2'
        }
      }, {
        id: 13,
        from_end_user_id: 'user 3',
        summary: 'summary3',
        created_at: '2023-10-11',
        annotated: false,
        message_count: 20,
        user_feedback_stats: {
          like: 2, dislike: 0
        },
        admin_feedback_stats: {
          like: 0, dislike: 21
        },
        message: {
          message: 'message3',
          query: 'question3',
          answer: 'answer3'
        }
      }],
      total: 200
    }
    res.send(data)
  })

  // get chat-conversations
  app.get('/apps/:id/chat-conversations', async (req, res) => {
    const data = {
      data: [{
        id: 1,
        from_end_user_id: 'user 1',
        summary: 'summary1',
        created_at: '2023-10-11',
        read_at: '2023-10-12',
        annotated: true,
        message_count: 100,
        user_feedback_stats: {
          like: 4, dislike: 5
        },
        admin_feedback_stats: {
          like: 1, dislike: 2
        },
        message: {
          message: 'message1',
          query: 'question1',
          answer: 'answer1'
        }
      }, {
        id: 12,
        from_end_user_id: 'user 2',
        summary: 'summary2',
        created_at: '2023-10-01',
        annotated: false,
        message_count: 10,
        user_feedback_stats: {
          like: 2, dislike: 20
        },
        admin_feedback_stats: {
          like: 12, dislike: 21
        },
        message: {
          message: 'message2',
          query: 'question2',
          answer: 'answer2'
        }
      }, {
        id: 13,
        from_end_user_id: 'user 3',
        summary: 'summary3',
        created_at: '2023-10-11',
        annotated: false,
        message_count: 20,
        user_feedback_stats: {
          like: 2, dislike: 0
        },
        admin_feedback_stats: {
          like: 0, dislike: 21
        },
        message: {
          message: 'message3',
          query: 'question3',
          answer: 'answer3'
        }
      }],
      total: 200
    }
    res.send(data)
  })

  // get completion-conversation detail
  app.get('/apps/:id/completion-conversations/:cid', async (req, res) => {
    const data =
    {
      id: 1,
      from_end_user_id: 'user 1',
      summary: 'summary1',
      created_at: '2023-10-11',
      annotated: true,
      message: {
        message: 'question1',
        // query: 'question1',
        answer: 'answer1',
        annotation: {
          content: '这是一段纠正的内容'
        }
      },
      model_config: {
        provider: 'openai',
        model_id: 'model_id',
        configs: {
          prompt_template: '你是我的翻译小助手,请把以下内容 {{langA}} 翻译成 {{langB}},以下的内容:{{content}}'
        }
      }
    }
    res.send(data)
  })

  // get chat-conversation detail
  app.get('/apps/:id/chat-conversations/:cid', async (req, res) => {
    const data =
    {
      id: 1,
      from_end_user_id: 'user 1',
      summary: 'summary1',
      created_at: '2023-10-11',
      annotated: true,
      message: {
        message: 'question1',
        // query: 'question1',
        answer: 'answer1',
        created_at: '2023-08-09 13:00',
        provider_response_latency: 130,
        message_tokens: 230
      },
      model_config: {
        provider: 'openai',
        model_id: 'model_id',
        configs: {
          prompt_template: '你是我的翻译小助手,请把以下内容 {{langA}} 翻译成 {{langB}},以下的内容:{{content}}'
        }
      }
    }
    res.send(data)
  })

  // get chat-conversation message list
  app.get('/apps/:id/chat-messages', async (req, res) => {
    const data = {
      data: [{
        id: 1,
        created_at: '2023-10-11 07:09',
        message: '请说说人为什么会做梦?' + req.query.conversation_id,
        answer: '梦境通常是个人内心深处的反映,很难确定每个人梦境的确切含义,因为它们可能会受到梦境者的文化背景、生活经验和情感状态等多种因素的影响。',
        provider_response_latency: 450,
        answer_tokens: 200,
        annotation: {
          content: 'string',
          account: {
            id: 'string',
            name: 'string',
            email: 'string'
          }
        },
        feedbacks: {
          rating: 'like',
          content: 'string',
          from_source: 'log'
        }
      }, {
        id: 2,
        created_at: '2023-10-11 8:23',
        message: '夜里经常做梦会影响次日的精神状态吗?',
        answer: '总之,这个梦境可能与梦境者的个人经历和情感状态有关,但在一般情况下,它可能表示一种强烈的情感反应,包括愤怒、不满和对于正义和自由的渴望。',
        provider_response_latency: 400,
        answer_tokens: 250,
        annotation: {
          content: 'string',
          account: {
            id: 'string',
            name: 'string',
            email: 'string'
          }
        },
        // feedbacks: {
        //   rating: 'like',
        //   content: 'string',
        //   from_source: 'log'
        // }
      }, {
        id: 3,
        created_at: '2023-10-11 10:20',
        message: '梦见在山上手撕鬼子,大师解解梦',
        answer: '但是,一般来说,"手撕鬼子"这个场景可能是梦境者对于过去历史上的战争、侵略以及对于自己国家和族群的保护与维护的情感反应。在梦中,你可能会感到自己充满力量和勇气,去对抗那些看似强大的侵略者。',
        provider_response_latency: 288,
        answer_tokens: 100,
        annotation: {
          content: 'string',
          account: {
            id: 'string',
            name: 'string',
            email: 'string'
          }
        },
        feedbacks: {
          rating: 'dislike',
          content: 'string',
          from_source: 'log'
        }
      }],
      limit: 20,
      has_more: true
    }
    res.send(data)
  })

  app.post('/apps/:id/annotations', async (req, res) => {
    res.send({ result: 'success' })
  })

  app.post('/apps/:id/feedbacks', async (req, res) => {
    res.send({ result: 'success' })
  })

}

module.exports = registerAPI
@@ -1,38 +0,0 @@
const registerAPI = function (app) {
  app.post('/login', async (req, res) => {
    res.send({
      result: 'success'
    })
  })

  // get user info
  app.get('/account/profile', async (req, res) => {
    res.send({
      id: '11122222',
      name: 'Joel',
      email: 'iamjoel007@gmail.com'
    })
  })

  // logout
  app.get('/logout', async (req, res) => {
    res.send({
      result: 'success'
    })
  })

  // Langgenius version
  app.get('/version', async (req, res) => {
    res.send({
      current_version: 'v1.0.0',
      latest_version: 'v1.0.0',
      upgradeable: true,
      compatible_upgrade: true
    })
  })

}

module.exports = registerAPI
@@ -1,249 +0,0 @@
const registerAPI = function (app) {
  app.get("/datasets/:id/documents", async (req, res) => {
    if (req.params.id === "0") res.send({ data: [] });
    else {
      res.send({
        data: [
          {
            id: 1,
            name: "Steve Jobs' life",
            words: "70k",
            word_count: 100,
            updated_at: 1681801029,
            indexing_status: "completed",
            archived: true,
            enabled: false,
            data_source_info: {
              upload_file: {
                // id: string
                // name: string
                // size: number
                // mime_type: string
                // created_at: number
                // created_by: string
                extension: "pdf",
              },
            },
          },
          {
            id: 2,
            name: "Steve Jobs' life",
            word_count: "10k",
            hit_count: 10,
            updated_at: 1681801029,
            indexing_status: "waiting",
            archived: true,
            enabled: false,
            data_source_info: {
              upload_file: {
                extension: "json",
              },
            },
          },
          {
            id: 3,
            name: "Steve Jobs' life xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            word_count: "100k",
            hit_count: 0,
            updated_at: 1681801029,
            indexing_status: "indexing",
            archived: false,
            enabled: true,
            data_source_info: {
              upload_file: {
                extension: "txt",
              },
            },
          },
          {
            id: 4,
            name: "Steve Jobs' life xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            word_count: "100k",
            hit_count: 0,
            updated_at: 1681801029,
            indexing_status: "splitting",
            archived: false,
            enabled: true,
            data_source_info: {
              upload_file: {
                extension: "md",
              },
            },
          },
          {
            id: 5,
            name: "Steve Jobs' life",
            word_count: "100k",
            hit_count: 0,
            updated_at: 1681801029,
            indexing_status: "error",
            archived: false,
            enabled: false,
            data_source_info: {
              upload_file: {
                extension: "html",
              },
            },
          },
        ],
        total: 100,
        id: req.params.id,
      });
    }
  });

  app.get("/datasets/:id/documents/:did/segments", async (req, res) => {
    if (req.params.id === "0") res.send({ data: [] });
    else {
      res.send({
        data: new Array(100).fill({
          id: 1234,
          content: `他的坚持让我很为难。众所周知他非常注意保护自己的隐私,而我想他应该从来没有看过我写的书。也许将来的某个时候吧,我还是这么说。但是,到了2009年,他的妻子劳伦·鲍威尔(Laurene Powell)直言不讳地对我说:"如果你真的打算写一本关于史蒂夫的书,最好现在就开始。"他当时刚刚第二次因病休假。我向劳伦坦承,当乔布斯第一次提出这个想法时,我并不知道他病了。几乎没有人知道,她说。他是在接受癌症手术之前给我打的电话,直到今天他还将此事作为一个秘密,她这么解释道。\n
他的坚持让我很为难。众所周知他非常注意保护自己的隐私,而我想他应该从来没有看过我写的书。也许将来的某个时候吧,我还是这么说。但是,到了2009年,他的妻子劳伦·鲍威尔(Laurene Powell)直言不讳地对我说:"如果你真的打算写一本关于史蒂夫的书,最好现在就开始。"他当时刚刚第二次因病休假。我向劳伦坦承,当乔布斯第一次提出这个想法时,我并不知道他病了。几乎没有人知道,她说。他是在接受癌症手术之前给我打的电话,直到今天他还将此事作为一个秘密,她这么解释道。`,
          enabled: true,
          keyWords: [
            "劳伦·鲍威尔",
            "劳伦·鲍威尔",
            "手术",
            "秘密",
            "癌症",
            "乔布斯",
            "史蒂夫",
            "书",
            "休假",
            "坚持",
            "隐私",
          ],
          word_count: 120,
          hit_count: 100,
          status: "ok",
          index_node_hash: "index_node_hash value",
        }),
        limit: 100,
        has_more: true,
      });
    }
  });

  // get doc detail
  app.get("/datasets/:id/documents/:did", async (req, res) => {
    const fixedParams = {
      // originInfo: {
      originalFilename: "Original filename",
      originalFileSize: "16mb",
      uploadDate: "2023-01-01",
      lastUpdateDate: "2023-01-05",
      source: "Source",
      // },
      // technicalParameters: {
      segmentSpecification: "909090",
      segmentLength: 100,
      avgParagraphLength: 130,
    };
    const bookData = {
      doc_type: "book",
      doc_metadata: {
        title: "机器学习实战",
        language: "zh",
        author: "Peter Harrington",
        publisher: "人民邮电出版社",
        publicationDate: "2013-01-01",
        ISBN: "9787115335500",
        category: "技术",
      },
    };
    const webData = {
      doc_type: "webPage",
      doc_metadata: {
        title: "深度学习入门教程",
        url: "https://www.example.com/deep-learning-tutorial",
        language: "zh",
        publishDate: "2020-05-01",
        authorPublisher: "张三",
        topicsKeywords: "深度学习, 人工智能, 教程",
        description:
          "这是一篇详细的深度学习入门教程,适用于对人工智能和深度学习感兴趣的初学者。",
      },
    };
    const postData = {
      doc_type: "socialMediaPost",
      doc_metadata: {
        platform: "Twitter",
        authorUsername: "example_user",
        publishDate: "2021-08-15",
        postURL: "https://twitter.com/example_user/status/1234567890",
        topicsTags:
          "AI, DeepLearning, Tutorial, Example, Example2, Example3, AI, DeepLearning, Tutorial, Example, Example2, Example3, AI, DeepLearning, Tutorial, Example, Example2, Example3,",
      },
    };
    res.send({
      id: "550e8400-e29b-41d4-a716-446655440000",
      position: 1,
      dataset_id: "550e8400-e29b-41d4-a716-446655440002",
      data_source_type: "upload_file",
      data_source_info: {
        upload_file: {
          extension: "html",
          id: "550e8400-e29b-41d4-a716-446655440003",
        },
      },
      dataset_process_rule_id: "550e8400-e29b-41d4-a716-446655440004",
      batch: "20230410123456123456",
      name: "example_document",
      created_from: "web",
      created_by: "550e8400-e29b-41d4-a716-446655440005",
      created_api_request_id: "550e8400-e29b-41d4-a716-446655440006",
      created_at: 1671269696,
      processing_started_at: 1671269700,
      word_count: 11,
      parsing_completed_at: 1671269710,
      cleaning_completed_at: 1671269720,
      splitting_completed_at: 1671269730,
      tokens: 10,
      indexing_latency: 5.0,
      completed_at: 1671269740,
      paused_by: null,
      paused_at: null,
      error: null,
      stopped_at: null,
      indexing_status: "completed",
      enabled: true,
      disabled_at: null,
      disabled_by: null,
      archived: false,
      archived_reason: null,
      archived_by: null,
      archived_at: null,
      updated_at: 1671269740,
      ...(req.params.did === "book"
        ? bookData
        : req.params.did === "web"
          ? webData
          : req.params.did === "post"
            ? postData
            : {}),
      segment_count: 10,
      hit_count: 9,
      status: "ok",
    });
  });

  // // logout
  // app.get("/logout", async (req, res) => {
  //   res.send({
  //     result: "success",
  //   });
  // });

  // // Langgenius version
  // app.get("/version", async (req, res) => {
  //   res.send({
  //     current_version: "v1.0.0",
  //     latest_version: "v1.0.0",
  //     upgradeable: true,
  //     compatible_upgrade: true,
  //   });
  // });
};

module.exports = registerAPI;
@@ -1,119 +0,0 @@
const registerAPI = function (app) {
  const coversationList = [
    {
      id: '1',
      name: '梦的解析',
      inputs: {
        book: '《梦的解析》',
        callMe: '大师',
      },
      chats: []
    },
    {
      id: '2',
      name: '生命的起源',
      inputs: {
        book: '《x x x》',
      }
    },
  ]
  // site info
  app.get('/apps/site/info', async (req, res) => {
    // const id = req.params.id
    res.send({
      enable_site: true,
      appId: '1',
      site: {
        title: 'Story Bot',
        description: '这是一款解梦聊天机器人,你可以选择你喜欢的解梦人进行解梦,这句话是客户端应用说明',
      },
      prompt_public: true, // id === '1',
      prompt_template: '你是我的解梦小助手,请参考 {{book}} 回答我有关梦境的问题。在回答前请称呼我为 {{myName}}。',
    })
  })

  app.post('/apps/:id/chat-messages', async (req, res) => {
    const conversationId = req.body.conversation_id ? req.body.conversation_id : Date.now() + ''
    res.send({
      id: Date.now() + '',
      conversation_id: conversationId,
      answer: 'balabababab'
    })
  })

  app.post('/apps/:id/completion-messages', async (req, res) => {
    res.send({
      id: Date.now() + '',
      answer: `做为一个AI助手,我可以为你提供随机生成的段落,这些段落可以用于测试、占位符、或者其他目的。以下是一个随机生成的段落:

"随着科技的不断发展,越来越多的人开始意识到人工智能的重要性。人工智能已经成为我们生活中不可或缺的一部分,它可以帮助我们完成很多繁琐的工作,也可以为我们提供更智能、更便捷的服务。虽然人工智能带来了很多好处,但它也面临着很多挑战。例如,人工智能的算法可能会出现偏见,导致对某些人群不公平。此外,人工智能的发展也可能会导致一些工作的失业。因此,我们需要不断地研究人工智能的发展,以确保它能够为人类带来更多的好处。"`
    })
  })

  // share api
  // chat list
  app.get('/apps/:id/coversations', async (req, res) => {
    res.send({
      data: coversationList
    })
  })

  app.get('/apps/:id/variables', async (req, res) => {
    res.send({
      variables: [
        {
          key: 'book',
          name: '书',
          value: '《梦境解析》',
          type: 'string'
        },
        {
          key: 'myName',
          name: '称呼',
          value: '',
          type: 'string'
        }
      ],
    })
  })

}

module.exports = registerAPI

// const chatList = [
//   {
//     id: 1,
//     content: 'AI 开场白',
//     isAnswer: true,
//   },
//   {
//     id: 2,
//     content: '梦见在山上手撕鬼子,大师解解梦',
//     more: { time: '5.6 秒' },
//   },
//   {
//     id: 3,
//     content: '梦境通常是个人内心深处的反映,很难确定每个人梦境的确切含义,因为它们可能会受到梦境者的文化背景、生活经验和情感状态等多种因素的影响。',
//     isAnswer: true,
//     more: { time: '99 秒' },
//   },
//   {
//     id: 4,
//     content: '梦见在山上手撕鬼子,大师解解梦',
//     more: { time: '5.6 秒' },
//   },
//   {
//     id: 5,
//     content: '梦见在山上手撕鬼子,大师解解梦',
//     more: { time: '5.6 秒' },
//   },
//   {
//     id: 6,
//     content: '梦见在山上手撕鬼子,大师解解梦',
//     more: { time: '5.6 秒' },
//   },
// ]
@@ -1,15 +0,0 @@
const registerAPI = function (app) {
  app.get('/demo', async (req, res) => {
    res.send({
      des: 'get res'
    })
  })

  app.post('/demo', async (req, res) => {
    res.send({
      des: 'post res'
    })
  })
}

module.exports = registerAPI
@@ -1,42 +0,0 @@
-const express = require('express')
-const app = express()
-const bodyParser = require('body-parser')
-var cors = require('cors')
-
-const commonAPI = require('./api/common')
-const demoAPI = require('./api/demo')
-const appsApi = require('./api/apps')
-const debugAPI = require('./api/debug')
-const datasetsAPI = require('./api/datasets')
-
-const port = 3001
-
-app.use(bodyParser.json()) // for parsing application/json
-app.use(bodyParser.urlencoded({ extended: true })) // for parsing application/x-www-form-urlencoded
-
-const corsOptions = {
-  origin: true,
-  credentials: true,
-}
-app.use(cors(corsOptions)) // for cross origin
-app.options('*', cors(corsOptions)) // include before other routes
-
-
-demoAPI(app)
-commonAPI(app)
-appsApi(app)
-debugAPI(app)
-datasetsAPI(app)
-
-
-app.get('/', (req, res) => {
-  res.send('rootpath')
-})
-
-app.listen(port, () => {
-  console.log(`Mock run on port ${port}`)
-})
-
-const sleep = (ms) => {
-  return new Promise(resolve => setTimeout(resolve, ms))
-}
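The `sleep` helper at the bottom of the removed `app.js` is a standard promise-wrapped `setTimeout`, useful for simulating latency in mock handlers. A minimal, self-contained sketch (the `slowHandler` example is hypothetical, added here only to show the intended usage):

```javascript
// Promise-based delay, as defined in the removed mock server.
const sleep = (ms) => {
  return new Promise(resolve => setTimeout(resolve, ms))
}

// Hypothetical usage: simulate a slow mock endpoint.
async function slowHandler() {
  await sleep(50)
  return { des: 'delayed res' }
}

slowHandler().then(r => console.log(r.des)) // logs 'delayed res' after ~50 ms
```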
@@ -1,26 +0,0 @@
-{
-  "name": "server",
-  "version": "1.0.0",
-  "description": "",
-  "main": "index.js",
-  "scripts": {
-    "dev": "nodemon node app.js",
-    "start": "node app.js",
-    "tcp": "node tcp.js"
-  },
-  "keywords": [],
-  "author": "",
-  "license": "MIT",
-  "engines": {
-    "node": ">=16.0.0"
-  },
-  "dependencies": {
-    "body-parser": "^1.20.2",
-    "cors": "^2.8.5",
-    "express": "4.18.2",
-    "express-jwt": "8.4.1"
-  },
-  "devDependencies": {
-    "nodemon": "2.0.21"
-  }
-}
@@ -49,7 +49,7 @@ const AppDetailLayout: FC<IAppDetailLayoutProps> = (props) => {
     return null
   return (
     <div className={cn(s.app, 'flex', 'overflow-hidden')}>
-      <AppSideBar title={response.name} desc={appModeName} navigation={navigation} />
+      <AppSideBar title={response.name} icon={response.icon} icon_background={response.icon_background} desc={appModeName} navigation={navigation} />
       <div className="bg-white grow">{children}</div>
     </div>
   )
@@ -47,7 +47,7 @@ const AppCard = ({
   <>
     <Link href={`/app/${app.id}/overview`} className={style.listItem}>
       <div className={style.listItemTitle}>
-        <AppIcon size='small' />
+        <AppIcon size='small' icon={app.icon} background={app.icon_background}/>
         <div className={style.listItemHeading}>
           <div className={style.listItemHeadingContent}>{app.name}</div>
         </div>
@@ -17,6 +17,7 @@ const Apps = () => {
      {apps.map(app => (<AppCard key={app.id} app={app} />))}
      <NewAppCard />
    </nav>

  )
}
@@ -9,7 +9,6 @@ import NewAppDialog from './NewAppDialog'
const CreateAppCard = () => {
  const { t } = useTranslation()
  const [showNewAppDialog, setShowNewAppDialog] = useState(false)

  return (
    <a className={classNames(style.listItem, style.newItemCard)} onClick={() => setShowNewAppDialog(true)}>
      <div className={style.listItemTitle}>
@@ -17,6 +17,8 @@ import { createApp, fetchAppTemplates } from '@/service/apps'
 import AppIcon from '@/app/components/base/app-icon'
 import AppsContext from '@/context/app-context'
+
+import EmojiPicker from '@/app/components/base/emoji-picker'
 
 type NewAppDialogProps = {
   show: boolean
   onClose?: () => void
@@ -31,6 +33,11 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
   const [newAppMode, setNewAppMode] = useState<AppMode>()
   const [isWithTemplate, setIsWithTemplate] = useState(false)
   const [selectedTemplateIndex, setSelectedTemplateIndex] = useState<number>(-1)
+
+  // Emoji Picker
+  const [showEmojiPicker, setShowEmojiPicker] = useState(false)
+  const [emoji, setEmoji] = useState({ icon: '🍌', icon_background: '#FFEAD5' })
+
   const mutateApps = useContextSelector(AppsContext, state => state.mutateApps)
 
   const { data: templates, mutate } = useSWR({ url: '/app-templates' }, fetchAppTemplates)
@@ -67,6 +74,8 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
   try {
     const app = await createApp({
       name,
+      icon: emoji.icon,
+      icon_background: emoji.icon_background,
       mode: isWithTemplate ? templates.data[selectedTemplateIndex].mode : newAppMode!,
       config: isWithTemplate ? templates.data[selectedTemplateIndex].model_config : undefined,
     })
@@ -80,9 +89,20 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
     notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
   }
   isCreatingRef.current = false
-  }, [isWithTemplate, newAppMode, notify, router, templates, selectedTemplateIndex])
+  }, [isWithTemplate, newAppMode, notify, router, templates, selectedTemplateIndex, emoji])
 
-  return (
+  return <>
+    {showEmojiPicker && <EmojiPicker
+      onSelect={(icon, icon_background) => {
+        console.log(icon, icon_background)
+        setEmoji({ icon, icon_background })
+        setShowEmojiPicker(false)
+      }}
+      onClose={() => {
+        setEmoji({ icon: '🍌', icon_background: '#FFEAD5' })
+        setShowEmojiPicker(false)
+      }}
+    />}
     <Dialog
       show={show}
       title={t('app.newApp.startToCreate')}
@@ -96,7 +116,7 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
   <h3 className={style.newItemCaption}>{t('app.newApp.captionName')}</h3>
 
   <div className='flex items-center justify-between gap-3 mb-8'>
-    <AppIcon size='large' />
+    <AppIcon size='large' onClick={() => { setShowEmojiPicker(true) }} className='cursor-pointer' icon={emoji.icon} background={emoji.icon_background} />
     <input ref={nameInputRef} className='h-10 px-3 text-sm font-normal bg-gray-100 rounded-lg grow' />
   </div>
 
@@ -187,7 +207,7 @@ const NewAppDialog = ({ show, onClose }: NewAppDialogProps) => {
       )}
     </div>
   </Dialog>
-  )
+  </>
 }
 
 export default NewAppDialog
@@ -155,6 +155,8 @@ const DatasetDetailLayout: FC<IAppDetailLayoutProps> = (props) => {
   <div className='flex' style={{ height: 'calc(100vh - 56px)' }}>
     {!hideSideBar && <AppSideBar
       title={datasetRes?.name || '--'}
+      icon={datasetRes?.icon || 'https://static.dify.ai/images/dataset-default-icon.png'}
+      icon_background={datasetRes?.icon_background || '#F5F5F5'}
       desc={datasetRes?.description || '--'}
       navigation={navigation}
       extraInfo={<ExtraInfo />}
@@ -1,3 +0,0 @@
-export async function GET(_request: Request) {
-  return new Response('Hello, Next.js!')
-}
@@ -15,7 +15,8 @@ export function randomString(length: number) {
 
 export type IAppBasicProps = {
   iconType?: 'app' | 'api' | 'dataset'
-  iconUrl?: string
+  icon?: string,
+  icon_background?: string,
   name: string
   type: string | React.ReactNode
   hoverTip?: string
@@ -41,15 +42,20 @@ const ICON_MAP = {
   'dataset': <AppIcon innerIcon={DatasetSvg} className='!border-[0.5px] !border-indigo-100 !bg-indigo-25' />
 }
 
-export default function AppBasic({ iconUrl, name, type, hoverTip, textStyle, iconType = 'app' }: IAppBasicProps) {
+export default function AppBasic({ icon, icon_background, name, type, hoverTip, textStyle, iconType = 'app' }: IAppBasicProps) {
   return (
     <div className="flex items-start">
-      {iconUrl && (
+      {icon && icon_background && iconType === 'app' && (
         <div className='flex-shrink-0 mr-3'>
           {/* <img className="inline-block rounded-lg h-9 w-9" src={iconUrl} alt={name} /> */}
-          {ICON_MAP[iconType]}
+          <AppIcon icon={icon} background={icon_background} />
         </div>
       )}
+      {iconType !== 'app' &&
+        <div className='flex-shrink-0 mr-3'>
+          {ICON_MAP[iconType]}
+        </div>
+      }
       <div className="group">
         <div className={`flex flex-row items-center text-sm font-semibold text-gray-700 group-hover:text-gray-900 ${textStyle?.main}`}>
           {name}
@@ -7,6 +7,8 @@ export type IAppDetailNavProps = {
   iconType?: 'app' | 'dataset'
   title: string
   desc: string
+  icon: string
+  icon_background: string
   navigation: Array<{
     name: string
     href: string
@@ -16,13 +18,12 @@ export type IAppDetailNavProps = {
   extraInfo?: React.ReactNode
 }
 
-const sampleAppIconUrl = 'https://images.unsplash.com/photo-1472099645785-5658abf4ff4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=facearea&facepad=2&w=256&h=256&q=80'
-
-const AppDetailNav: FC<IAppDetailNavProps> = ({ title, desc, navigation, extraInfo, iconType = 'app' }) => {
+const AppDetailNav: FC<IAppDetailNavProps> = ({ title, desc, icon, icon_background, navigation, extraInfo, iconType = 'app' }) => {
   return (
     <div className="flex flex-col w-56 overflow-y-auto bg-white border-r border-gray-200 shrink-0">
       <div className="flex flex-shrink-0 p-4">
-        <AppBasic iconType={iconType} iconUrl={sampleAppIconUrl} name={title} type={desc} />
+        <AppBasic iconType={iconType} icon={icon} icon_background={icon_background} name={title} type={desc} />
       </div>
       <nav className="flex-1 p-4 space-y-1 bg-white">
         {navigation.map((item, index) => {
@@ -29,9 +29,6 @@ export type IAppCardProps = {
   onGenerateCode?: () => Promise<any>
 }
 
-// todo: get image url from appInfo
-const defaultUrl = 'https://images.unsplash.com/photo-1472099645785-5658abf4ff4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=facearea&facepad=2&w=256&h=256&q=80'
-
 function AppCard({
   appInfo,
   cardType = 'app',
@@ -104,7 +101,8 @@ function AppCard({
   <div className="mb-2.5 flex flex-row items-start justify-between">
     <AppBasic
       iconType={isApp ? 'app' : 'api'}
-      iconUrl={defaultUrl}
+      icon={appInfo.icon}
+      icon_background={appInfo.icon_background}
       name={basicName}
       type={
         isApp
@@ -2,6 +2,11 @@ import type { FC } from 'react'
 import classNames from 'classnames'
 import style from './style.module.css'
 
+import data from '@emoji-mart/data'
+import { init } from 'emoji-mart'
+
+init({ data })
+
 export type AppIconProps = {
   size?: 'tiny' | 'small' | 'medium' | 'large'
   rounded?: boolean
@@ -9,14 +14,17 @@ export type AppIconProps = {
   background?: string
   className?: string
   innerIcon?: React.ReactNode
+  onClick?: () => void
 }
 
 const AppIcon: FC<AppIconProps> = ({
   size = 'medium',
   rounded = false,
   icon,
   background,
   className,
   innerIcon,
+  onClick,
 }) => {
   return (
     <span
@@ -29,8 +37,9 @@ const AppIcon: FC<AppIconProps> = ({
       style={{
        background,
       }}
+      onClick={onClick}
     >
-      {innerIcon ? innerIcon : <>🤖</>}
+      {innerIcon ? innerIcon : icon && icon !== '' ? <em-emoji id={icon} /> : <em-emoji id={'banana'} />}
     </span>
   )
 }
@@ -0,0 +1,204 @@
+'use client'
+import data from '@emoji-mart/data'
+import { init, SearchIndex } from 'emoji-mart'
+// import AppIcon from '@/app/components/base/app-icon'
+import cn from 'classnames'
+import Divider from '@/app/components/base/divider'
+
+import Button from '@/app/components/base/button'
+import s from './style.module.css'
+import { useState, FC, ChangeEvent } from 'react'
+import {
+  MagnifyingGlassIcon
+} from '@heroicons/react/24/outline'
+import React from 'react'
+import Modal from '@/app/components/base/modal'
+
+declare global {
+  namespace JSX {
+    interface IntrinsicElements {
+      'em-emoji': React.DetailedHTMLProps<
+        React.HTMLAttributes<HTMLElement>,
+        HTMLElement
+      >;
+    }
+  }
+}
+
+init({ data })
+
+async function search(value: string) {
+  const emojis = await SearchIndex.search(value) || []
+
+  const results = emojis.map((emoji: any) => {
+    return emoji.skins[0].native
+  })
+  return results
+}
+
+const backgroundColors = [
+  '#FFEAD5',
+  '#E4FBCC',
+  '#D3F8DF',
+  '#E0F2FE',
+
+  '#E0EAFF',
+  '#EFF1F5',
+  '#FBE8FF',
+  '#FCE7F6',
+
+  '#FEF7C3',
+  '#E6F4D7',
+  '#D5F5F6',
+  '#D1E9FF',
+
+  '#D1E0FF',
+  '#D5D9EB',
+  '#ECE9FE',
+  '#FFE4E8',
+]
+interface IEmojiPickerProps {
+  isModal?: boolean
+  onSelect?: (emoji: string, background: string) => void
+  onClose?: () => void
+}
+
+const EmojiPicker: FC<IEmojiPickerProps> = ({
+  isModal = true,
+  onSelect,
+  onClose
+
+}) => {
+  const { categories } = data as any
+  const [selectedEmoji, setSelectedEmoji] = useState('')
+  const [selectedBackground, setSelectedBackground] = useState(backgroundColors[0])
+
+  const [searchedEmojis, setSearchedEmojis] = useState([])
+  const [isSearching, setIsSearching] = useState(false)
+
+  return isModal ? <Modal
+    onClose={() => { }}
+    isShow
+    closable={false}
+    className={cn(s.container, '!w-[362px] !p-0')}
+  >
+    <div className='flex flex-col items-center w-full p-3'>
+      <div className="relative w-full">
+        <div className="absolute inset-y-0 left-0 flex items-center pl-3 pointer-events-none">
+          <MagnifyingGlassIcon className="w-5 h-5 text-gray-400" aria-hidden="true" />
+        </div>
+        <input
+          type="search"
+          id="search"
+          className='block w-full h-10 px-3 pl-10 text-sm font-normal bg-gray-100 rounded-lg'
+          placeholder="Search emojis..."
+          onChange={async (e: ChangeEvent<HTMLInputElement>) => {
+            if (e.target.value === '') {
+              setIsSearching(false)
+              return
+            } else {
+              setIsSearching(true)
+              const emojis = await search(e.target.value)
+              setSearchedEmojis(emojis)
+            }
+          }}
+        />
+      </div>
+    </div>
+    <Divider className='m-0 mb-3' />
+
+    <div className="w-full max-h-[200px] overflow-x-hidden overflow-y-auto px-3">
+      {isSearching && <>
+        <div key={`category-search`} className='flex flex-col'>
+          <p className='font-medium uppercase text-xs text-[#101828] mb-1'>Search</p>
+          <div className='w-full h-full grid grid-cols-8 gap-1'>
+            {searchedEmojis.map((emoji: string, index: number) => {
+              return <div
+                key={`emoji-search-${index}`}
+                className='inline-flex w-10 h-10 rounded-lg items-center justify-center'
+                onClick={() => {
+                  setSelectedEmoji(emoji)
+                }}
+              >
+                <div className='cursor-pointer w-8 h-8 p-1 flex items-center justify-center rounded-lg hover:ring-1 ring-offset-1 ring-gray-300'>
+                  <em-emoji id={emoji} />
+                </div>
+              </div>
+            })}
+          </div>
+        </div>
+      </>}
+
+
+      {categories.map((category: any, index: number) => {
+        return <div key={`category-${index}`} className='flex flex-col'>
+          <p className='font-medium uppercase text-xs text-[#101828] mb-1'>{category.id}</p>
+          <div className='w-full h-full grid grid-cols-8 gap-1'>
+            {category.emojis.map((emoji: string, index: number) => {
+              return <div
+                key={`emoji-${index}`}
+                className='inline-flex w-10 h-10 rounded-lg items-center justify-center'
+                onClick={() => {
+                  setSelectedEmoji(emoji)
+                }}
+              >
+                <div className='cursor-pointer w-8 h-8 p-1 flex items-center justify-center rounded-lg hover:ring-1 ring-offset-1 ring-gray-300'>
+                  <em-emoji id={emoji} />
+                </div>
+              </div>
+            })}
+
+          </div>
+        </div>
+      })}
+    </div>
+
+    {/* Color Select */}
+    <div className={cn('flex flex-col p-3 ', selectedEmoji == '' ? 'opacity-25' : '')}>
+      <p className='font-medium uppercase text-xs text-[#101828] mb-2'>Choose Style</p>
+      <div className='w-full h-full grid grid-cols-8 gap-1'>
+        {backgroundColors.map((color) => {
+          return <div
+            key={color}
+            className={
+              cn(
+                'cursor-pointer',
+                `hover:ring-1 ring-offset-1`,
+                'inline-flex w-10 h-10 rounded-lg items-center justify-center',
+                color === selectedBackground ? `ring-1 ring-gray-300` : '',
+              )}
+            onClick={() => {
+              setSelectedBackground(color)
+            }}
+          >
+            <div className={cn(
+              'w-8 h-8 p-1 flex items-center justify-center rounded-lg',
+            )
+            } style={{ background: color }}>
+              {selectedEmoji !== '' && <em-emoji id={selectedEmoji} />}
+            </div>
+          </div>
+        })}
+      </div>
+    </div>
+    <Divider className='m-0' />
+    <div className='w-full flex items-center justify-center p-3 gap-2'>
+      <Button type="default" className='w-full' onClick={() => {
+        onClose && onClose()
+      }}>
+        Cancel
+      </Button>
+      <Button
+        disabled={selectedEmoji == ''}
+        type="primary"
+        className='w-full'
+        onClick={() => {
+          onSelect && onSelect(selectedEmoji, selectedBackground)
+        }}>
+        OK
+      </Button>
+    </div>
+  </Modal> : <>
+  </>
+}
+export default EmojiPicker
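The new EmojiPicker's `search()` maps emoji-mart `SearchIndex` results to native glyphs via `emoji.skins[0].native`. The sketch below reproduces that mapping with a stubbed index so it runs standalone; the two-entry `SearchIndex` stub is illustrative only (the real one comes from the `emoji-mart` package and is far larger).

```javascript
// Stub shaped like the records the component's search() reads.
const SearchIndex = {
  async search(value) {
    const all = [
      { id: 'banana', skins: [{ native: '🍌' }] },
      { id: 'robot', skins: [{ native: '🤖' }] },
    ]
    return all.filter(e => e.id.includes(value))
  },
}

// Same shape as the component's search(): await the index, map to glyphs.
async function search(value) {
  const emojis = await SearchIndex.search(value) || []
  return emojis.map(emoji => emoji.skins[0].native)
}

search('ban').then(results => console.log(results.length)) // logs 1
```

The `|| []` fallback is what lets the component render an empty search grid rather than crash when the index returns nothing.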
@@ -0,0 +1,12 @@
+.container {
+  display: flex;
+  flex-direction: column;
+  align-items: flex-start;
+  width: 362px;
+  max-height: 552px;
+
+  border: 0.5px solid #EAECF0;
+  box-shadow: 0px 12px 16px -4px rgba(16, 24, 40, 0.08), 0px 4px 6px -2px rgba(16, 24, 40, 0.03);
+  border-radius: 12px;
+  background: #fff;
+}
@@ -25,51 +25,51 @@ export default function Modal({
   closable = false,
 }: IModal) {
   return (
-    <Transition appear show={isShow} as={Fragment}>
-      <Dialog as="div" className="relative z-10" onClose={onClose}>
-        <Transition.Child
-          as={Fragment}
-          enter="ease-out duration-300"
-          enterFrom="opacity-0"
-          enterTo="opacity-100"
-          leave="ease-in duration-200"
-          leaveFrom="opacity-100"
-          leaveTo="opacity-0"
-        >
-          <div className="fixed inset-0 bg-black bg-opacity-25" />
-        </Transition.Child>
-
-        <div className="fixed inset-0 overflow-y-auto">
-          <div className="flex min-h-full items-center justify-center p-4 text-center">
-            <Transition.Child
-              as={Fragment}
-              enter="ease-out duration-300"
-              enterFrom="opacity-0 scale-95"
-              enterTo="opacity-100 scale-100"
-              leave="ease-in duration-200"
-              leaveFrom="opacity-100 scale-100"
-              leaveTo="opacity-0 scale-95"
-            >
-              <Dialog.Panel className={`w-full max-w-md transform overflow-hidden rounded-2xl bg-white p-6 text-left align-middle shadow-xl transition-all ${className}`}>
-                {title && <Dialog.Title
-                  as="h3"
-                  className="text-lg font-medium leading-6 text-gray-900"
-                >
-                  {title}
-                </Dialog.Title>}
-                {description && <Dialog.Description className='text-gray-500 text-xs font-normal mt-2'>
-                  {description}
-                </Dialog.Description>}
-                {closable
-                  && <div className='absolute top-6 right-6 w-5 h-5 rounded-2xl flex items-center justify-center hover:cursor-pointer hover:bg-gray-100'>
-                    <XMarkIcon className='w-4 h-4 text-gray-500' onClick={onClose} />
-                  </div>}
-                {children}
-              </Dialog.Panel>
-            </Transition.Child>
-          </div>
-        </div>
-      </Dialog>
-    </Transition>
+    <Transition appear show={isShow} as={Fragment}>
+      <Dialog as="div" className="relative z-10" onClose={onClose}>
+        <Transition.Child
+          as={Fragment}
+          enter="ease-out duration-300"
+          enterFrom="opacity-0"
+          enterTo="opacity-100"
+          leave="ease-in duration-200"
+          leaveFrom="opacity-100"
+          leaveTo="opacity-0"
+        >
+          <div className="fixed inset-0 bg-black bg-opacity-25" />
+        </Transition.Child>
+
+        <div className="fixed inset-0 overflow-y-auto">
+          <div className={`flex min-h-full items-center justify-center p-4 text-center ${wrapperClassName}`}>
+            <Transition.Child
+              as={Fragment}
+              enter="ease-out duration-300"
+              enterFrom="opacity-0 scale-95"
+              enterTo="opacity-100 scale-100"
+              leave="ease-in duration-200"
+              leaveFrom="opacity-100 scale-100"
+              leaveTo="opacity-0 scale-95"
+            >
+              <Dialog.Panel className={`w-full max-w-md transform overflow-hidden rounded-2xl bg-white p-6 text-left align-middle shadow-xl transition-all ${className}`}>
+                {title && <Dialog.Title
+                  as="h3"
+                  className="text-lg font-medium leading-6 text-gray-900"
+                >
+                  {title}
+                </Dialog.Title>}
+                {description && <Dialog.Description className='text-gray-500 text-xs font-normal mt-2'>
+                  {description}
+                </Dialog.Description>}
+                {closable
+                  && <div className='absolute top-6 right-6 w-5 h-5 rounded-2xl flex items-center justify-center hover:cursor-pointer hover:bg-gray-100'>
+                    <XMarkIcon className='w-4 h-4 text-gray-500' onClick={onClose} />
+                  </div>}
+                {children}
+              </Dialog.Panel>
+            </Transition.Child>
+          </div>
+        </div>
+      </Dialog>
+    </Transition>
   )
 }
@@ -69,7 +69,7 @@ type IDocumentsProps = {
   datasetId: string
 }
 
-export const fetcher = (url: string) => get(url, {}, { isMock: true })
+export const fetcher = (url: string) => get(url, {}, {})
 
 const Documents: FC<IDocumentsProps> = ({ datasetId }) => {
   const { t } = useTranslation()
@@ -20,7 +20,7 @@ const AzureProvider = ({
   const [token, setToken] = useState(provider.token as ProviderAzureToken || {})
   const handleFocus = () => {
     if (token === provider.token) {
-      token.azure_api_key = ''
+      token.openai_api_key = ''
       setToken({...token})
       onTokenChange({...token})
     }
@@ -35,31 +35,17 @@ const AzureProvider = ({
   <div className='px-4 py-3'>
-    <ProviderInput
-      className='mb-4'
-      name={t('common.provider.azure.resourceName')}
-      placeholder={t('common.provider.azure.resourceNamePlaceholder')}
-      value={token.azure_api_base}
-      onChange={(v) => handleChange('azure_api_base', v)}
-    />
-    <ProviderInput
-      className='mb-4'
-      name={t('common.provider.azure.deploymentId')}
-      placeholder={t('common.provider.azure.deploymentIdPlaceholder')}
-      value={token.azure_api_type}
-      onChange={v => handleChange('azure_api_type', v)}
-    />
     <ProviderInput
       className='mb-4'
-      name={t('common.provider.azure.apiVersion')}
-      placeholder={t('common.provider.azure.apiVersionPlaceholder')}
-      value={token.azure_api_version}
-      onChange={v => handleChange('azure_api_version', v)}
+      name={t('common.provider.azure.apiBase')}
+      placeholder={t('common.provider.azure.apiBasePlaceholder')}
+      value={token.openai_api_base}
+      onChange={(v) => handleChange('openai_api_base', v)}
     />
     <ProviderValidateTokenInput
       className='mb-4'
       name={t('common.provider.azure.apiKey')}
       placeholder={t('common.provider.azure.apiKeyPlaceholder')}
-      value={token.azure_api_key}
-      onChange={v => handleChange('azure_api_key', v)}
+      value={token.openai_api_key}
+      onChange={v => handleChange('openai_api_key', v)}
       onFocus={handleFocus}
       onValidatedStatus={onValidatedStatus}
       providerName={provider.provider_name}
@@ -72,4 +58,4 @@ const AzureProvider = ({
   )
 }
 
-export default AzureProvider
+export default AzureProvider
@@ -33,12 +33,12 @@ const ProviderItem = ({
   const { notify } = useContext(ToastContext)
   const [token, setToken] = useState<ProviderAzureToken | string>(
     provider.provider_name === 'azure_openai'
-      ? { azure_api_base: '', azure_api_type: '', azure_api_version: '', azure_api_key: '' }
+      ? { openai_api_base: '', openai_api_key: '' }
       : ''
   )
   const id = `${provider.provider_name}-${provider.provider_type}`
   const isOpen = id === activeId
-  const providerKey = provider.provider_name === 'azure_openai' ? (provider.token as ProviderAzureToken)?.azure_api_key : provider.token
+  const providerKey = provider.provider_name === 'azure_openai' ? (provider.token as ProviderAzureToken)?.openai_api_key : provider.token
   const comingSoon = false
   const isValid = provider.is_valid
@@ -135,4 +135,4 @@ const ProviderItem = ({
   )
 }
 
-export default ProviderItem
+export default ProviderItem
@@ -84,11 +84,13 @@ const Header: FC<IHeaderProps> = ({ appItems, curApp, userProfile, onLogout, lan
   text={t('common.menus.apps')}
   activeSegment={['apps', 'app']}
   link='/apps'
-  curNav={curApp && { id: curApp.id, name: curApp.name }}
+  curNav={curApp && { id: curApp.id, name: curApp.name ,icon: curApp.icon, icon_background: curApp.icon_background}}
   navs={appItems.map(item => ({
     id: item.id,
     name: item.name,
-    link: `/app/${item.id}/overview`
+    link: `/app/${item.id}/overview`,
+    icon: item.icon,
+    icon_background: item.icon_background
   }))}
   createText={t('common.menus.newApp')}
   onCreate={() => setShowNewAppDialog(true)}
@@ -106,11 +108,13 @@ const Header: FC<IHeaderProps> = ({ appItems, curApp, userProfile, onLogout, lan
   text={t('common.menus.datasets')}
   activeSegment='datasets'
   link='/datasets'
-  curNav={currentDataset && { id: currentDataset.id, name: currentDataset.name }}
+  curNav={currentDataset && { id: currentDataset.id, name: currentDataset.name, icon: currentDataset.icon, icon_background: currentDataset.icon_background }}
   navs={datasets.map(dataset => ({
     id: dataset.id,
     name: dataset.name,
-    link: `/datasets/${dataset.id}/documents`
+    link: `/datasets/${dataset.id}/documents`,
+    icon: dataset.icon,
+    icon_background: dataset.icon_background
   }))}
   createText={t('common.menus.newDataset')}
   onCreate={() => router.push('/datasets/create')}
@@ -10,6 +10,8 @@ type NavItem = {
   id: string
   name: string
   link: string
+  icon: string
+  icon_background: string
 }
 export interface INavSelectorProps {
   navs: NavItem[]
@@ -66,7 +68,7 @@ const NavSelector = ({ curNav, navs, createText, onCreate }: INavSelectorProps)
   <Menu.Item key={nav.id}>
     <div className={itemClassName} onClick={() => router.push(nav.link)}>
       <div className='relative w-6 h-6 mr-2 bg-[#D5F5F6] rounded-[6px]'>
-        <AppIcon size='tiny' />
+        <AppIcon size='tiny' icon={nav.icon} background={nav.icon_background}/>
         <div className='flex justify-center items-center absolute -right-0.5 -bottom-0.5 w-2.5 h-2.5 bg-white rounded'>
           <Indicator />
         </div>
@@ -102,4 +104,4 @@ const NavSelector = ({ curNav, navs, createText, onCreate }: INavSelectorProps)
   )
 }
 
-export default NavSelector
+export default NavSelector
@@ -458,6 +458,8 @@ const Main: FC<IMainProps> = ({
   <div className='bg-gray-100'>
     <Header
       title={siteInfo.title}
+      icon={siteInfo.icon || ''}
+      icon_background={siteInfo.icon_background || '#FFEAD5'}
       isMobile={isMobile}
       onShowSideBar={showSidebar}
       onCreateNewChat={() => handleConversationIdChange('-1')}
@@ -7,6 +7,8 @@ import {
 } from '@heroicons/react/24/solid'
 export type IHeaderProps = {
   title: string
+  icon: string
+  icon_background: string
   isMobile?: boolean
   onShowSideBar?: () => void
   onCreateNewChat?: () => void
@@ -14,6 +16,8 @@ export type IHeaderProps = {
 const Header: FC<IHeaderProps> = ({
   title,
   isMobile,
+  icon,
+  icon_background,
   onShowSideBar,
   onCreateNewChat,
 }) => {
@@ -28,7 +32,7 @@ const Header: FC<IHeaderProps> = ({
   </div>
 ) : <div></div>}
 <div className='flex items-center space-x-2'>
-  <AppIcon size="small" />
+  <AppIcon size="small" icon={icon} background={icon_background} />
   <div className=" text-sm text-gray-800 font-bold">{title}</div>
 </div>
 {isMobile ? (
@@ -31,9 +31,6 @@ if (process.env.NEXT_PUBLIC_API_PREFIX && process.env.NEXT_PUBLIC_PUBLIC_API_PRE
 export const API_PREFIX: string = apiPrefix;
 export const PUBLIC_API_PREFIX: string = publicApiPrefix;
 
-// mock server
-export const MOCK_API_PREFIX = 'http://127.0.0.1:3001'
-
 const EDITION = process.env.NEXT_PUBLIC_EDITION || globalThis.document?.body?.getAttribute('data-public-edition')
 export const IS_CE_EDITION = EDITION === 'SELF_HOSTED'
@@ -150,12 +150,8 @@ const translation = {
   editKey: 'Edit',
   invalidApiKey: 'Invalid API key',
   azure: {
-    resourceName: 'Resource Name',
-    resourceNamePlaceholder: 'The name of your Azure OpenAI Resource.',
-    deploymentId: 'Deployment ID',
-    deploymentIdPlaceholder: 'The deployment name you chose when you deployed the model.',
-    apiVersion: 'API Version',
-    apiVersionPlaceholder: 'The API version to use for this operation.',
+    apiBase: 'API Base',
+    apiBasePlaceholder: 'The API Base URL of your Azure OpenAI Resource.',
     apiKey: 'API Key',
     apiKeyPlaceholder: 'Enter your API key here',
     helpTip: 'Learn Azure OpenAI Service',
@@ -151,14 +151,10 @@ const translation = {
   editKey: '编辑',
   invalidApiKey: '无效的 API 密钥',
   azure: {
-    resourceName: 'Resource Name',
-    resourceNamePlaceholder: 'The name of your Azure OpenAI Resource.',
-    deploymentId: 'Deployment ID',
-    deploymentIdPlaceholder: 'The deployment name you chose when you deployed the model.',
-    apiVersion: 'API Version',
-    apiVersionPlaceholder: 'The API version to use for this operation.',
+    apiBase: 'API Base',
+    apiBasePlaceholder: '输入您的 Azure OpenAI API Base 地址',
     apiKey: 'API Key',
-    apiKeyPlaceholder: 'Enter your API key here',
+    apiKeyPlaceholder: '输入你的 API 密钥',
     helpTip: '了解 Azure OpenAI Service',
   },
   openaiHosted: {
@@ -55,10 +55,8 @@ export type Member = Pick<UserProfileResponse, 'id' | 'name' | 'email' | 'last_l
 }
 
 export type ProviderAzureToken = {
-  azure_api_base: string
-  azure_api_key: string
-  azure_api_type: string
-  azure_api_version: string
+  openai_api_base: string
+  openai_api_key: string
 }
 export type Provider = {
   provider_name: string
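The hunk above narrows `ProviderAzureToken` from four `azure_*` fields to `{ openai_api_base, openai_api_key }`. A hedged sketch of how a previously stored token could be mapped to the new shape — `migrateAzureToken` is hypothetical and not part of this change; only the field names come from the diff:

```javascript
// Maps the removed four-field azure_* token to the new two-field shape,
// dropping azure_api_type and azure_api_version, which no longer exist.
function migrateAzureToken(oldToken) {
  return {
    openai_api_base: oldToken.azure_api_base ?? '',
    openai_api_key: oldToken.azure_api_key ?? '',
  }
}

const migrated = migrateAzureToken({
  azure_api_base: 'https://example-resource.openai.azure.com',
  azure_api_key: 'azure-key-123',
  azure_api_type: 'azure',
  azure_api_version: '2023-03-15-preview',
})
console.log(migrated.openai_api_key) // logs 'azure-key-123'
```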
@@ -3,6 +3,8 @@ import { AppMode } from './app'
 export type DataSet = {
   id: string
   name: string
+  icon: string
+  icon_background: string
   description: string
   permission: 'only_me' | 'all_team_members'
   data_source_type: 'upload_file'
@@ -11,6 +11,8 @@ export type ConversationItem = {
 
 export type SiteInfo = {
   title: string
+  icon: string
+  icon_background: string
   description: string
   default_language: Locale
   prompt_public: boolean
@@ -10,6 +10,7 @@
   "fix": "next lint --fix"
 },
 "dependencies": {
+  "@emoji-mart/data": "^1.1.2",
   "@formatjs/intl-localematcher": "^0.2.32",
   "@headlessui/react": "^1.7.13",
   "@heroicons/react": "^2.0.16",
@@ -33,6 +34,7 @@
   "dayjs": "^1.11.7",
   "echarts": "^5.4.1",
   "echarts-for-react": "^3.0.2",
+  "emoji-mart": "^5.5.2",
   "eslint": "8.36.0",
   "eslint-config-next": "13.2.4",
   "i18next": "^22.4.13",
```diff
@@ -16,8 +16,8 @@ export const fetchAppTemplates: Fetcher<AppTemplatesResponse, { url: string }> =
   return get(url) as Promise<AppTemplatesResponse>
 }
 
-export const createApp: Fetcher<AppDetailResponse, { name: string; mode: AppMode; icon?: string, icon_background?: string, config?: ModelConfig }> = ({ name, icon, icon_background, mode, config }) => {
-  return post('apps', { body: { name, mode, icon, icon_background, model_config: config } }) as Promise<AppDetailResponse>
+export const createApp: Fetcher<AppDetailResponse, { name: string; icon: string, icon_background: string, mode: AppMode; config?: ModelConfig }> = ({ name, icon, icon_background, mode, config }) => {
+  return post('apps', { body: { name, icon, icon_background, mode, model_config: config } }) as Promise<AppDetailResponse>
 }
 
 export const deleteApp: Fetcher<CommonResponse, string> = (appID) => {
```
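The revised `createApp` makes `icon` and `icon_background` required and sends the `config` parameter under the `model_config` key. A minimal sketch of just that body mapping — the `Fetcher`/`post` plumbing is omitted, and the narrowing of `AppMode` to a string union here is an assumption:

```typescript
// Assumed narrowing of AppMode for this sketch.
type AppMode = 'chat' | 'completion'

type CreateAppParams = {
  name: string
  icon: string
  icon_background: string
  mode: AppMode
  config?: object
}

// Mirrors the object passed to post('apps', { body: ... }) in the diff:
// `config` is renamed to `model_config` in the request body.
function buildCreateAppBody({ name, icon, icon_background, mode, config }: CreateAppParams) {
  return { name, icon, icon_background, mode, model_config: config }
}
```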
```diff
@@ -1,4 +1,4 @@
-import { API_PREFIX, MOCK_API_PREFIX, PUBLIC_API_PREFIX, IS_CE_EDITION } from '@/config'
+import { API_PREFIX, PUBLIC_API_PREFIX, IS_CE_EDITION } from '@/config'
 import Toast from '@/app/components/base/toast'
 
 const TIME_OUT = 100000
@@ -33,7 +33,6 @@ export type IOnError = (msg: string) => void
 
 type IOtherOptions = {
   isPublicAPI?: boolean
-  isMock?: boolean
   needAllResponseContent?: boolean
   onData?: IOnData // for stream
   onError?: IOnError
@@ -116,7 +115,14 @@ const handleStream = (response: any, onData: IOnData, onCompleted?: IOnCompleted
   read()
 }
 
-const baseFetch = (url: string, fetchOptions: any, { isPublicAPI = false, isMock = false, needAllResponseContent }: IOtherOptions) => {
+const baseFetch = (
+  url: string,
+  fetchOptions: any,
+  {
+    isPublicAPI = false,
+    needAllResponseContent
+  }: IOtherOptions
+) => {
   const options = Object.assign({}, baseOptions, fetchOptions)
   if (isPublicAPI) {
     const sharedToken = globalThis.location.pathname.split('/').slice(-1)[0]
@@ -124,9 +130,6 @@ const baseFetch = (url: string, fetchOptions: any, { isPublicAPI = false, isMock
   }
 
   let urlPrefix = isPublicAPI ? PUBLIC_API_PREFIX : API_PREFIX
-  if (isMock)
-    urlPrefix = MOCK_API_PREFIX
-
   let urlWithPrefix = `${urlPrefix}${url.startsWith('/') ? url : `/${url}`}`
 
   const { method, params, body } = options
```
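After this change, `baseFetch` picks the URL prefix solely from `isPublicAPI` (the mock branch is gone) and normalizes the leading slash when joining. A sketch of that prefix/join logic and of the shared-token extraction, with placeholder prefix values — the real `API_PREFIX`/`PUBLIC_API_PREFIX` come from `@/config` and may differ:

```typescript
// Placeholder values for illustration; the real ones are imported from '@/config'.
const API_PREFIX = '/console/api'
const PUBLIC_API_PREFIX = '/api'

// Same prefix choice and slash handling as the rewritten baseFetch.
function buildUrl(url: string, isPublicAPI: boolean): string {
  const urlPrefix = isPublicAPI ? PUBLIC_API_PREFIX : API_PREFIX
  return `${urlPrefix}${url.startsWith('/') ? url : `/${url}`}`
}

// For public pages the shared token is the last pathname segment,
// as in globalThis.location.pathname.split('/').slice(-1)[0].
function sharedTokenFromPathname(pathname: string): string {
  return pathname.split('/').slice(-1)[0]
}
```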
```diff
@@ -190,6 +190,12 @@ export type App = {
   id: string
   /** Name */
   name: string
+
+  /** Icon */
+  icon: string
+  /** Icon Background */
+  icon_background: string
+
   /** Mode */
   mode: AppMode
   /** Enable web app */
```