Commit Graph

51 Commits

Author SHA1 Message Date
-LAN- 559ab46ee1
fix: Removes redundant token calculations and updates dependencies
Eliminates unnecessary pre-calculation of token limits and recalculation of max tokens
across multiple app runners, simplifying the logic for prompt handling.

Updates tiktoken library from version 0.8.0 to 0.9.0 for improved tokenization performance.

Increases default token limit in TokenBufferMemory to accommodate larger prompt messages.

These changes streamline the token management process and leverage the latest
improvements in the tiktoken library.

Fixes potential token overflow issues and prepares the system for handling larger
inputs more efficiently.

Relates to internal optimization tasks.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 15:39:12 +08:00
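
The entry above concerns token counting (tiktoken 0.8.0 to 0.9.0) and the default limit in TokenBufferMemory. As a rough illustration of the kind of calculation involved, here is a minimal token-counting sketch using tiktoken; the function names, the encoding name, and the 2000-token budget are assumptions made for the example, not values taken from this repository.

```python
# Minimal sketch: counting prompt tokens with tiktoken (illustrative only).
# The encoding name and the token budget below are assumptions, not repo values.
import tiktoken


def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return the number of tokens tiktoken produces for `text`."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))


def fits_in_budget(prompt: str, max_tokens: int = 2000) -> bool:
    """Check whether a prompt stays within an assumed token budget."""
    return count_tokens(prompt) <= max_tokens


if __name__ == "__main__":
    prompt = "Summarize the recent changes to the LLM node."
    print(count_tokens(prompt), fits_in_budget(prompt))
```

Centralizing the check in one helper like this is in the spirit of the simplification the commit describes, where runners no longer pre-compute limits individually.
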
-LAN- 413dfd5628
feat: add completion mode and context size options for LLM configuration (#13325)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 15:08:53 +08:00
-LAN- f9515901cc
fix: Azure AI Foundry model cannot be used in the workflow (#13323)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 14:52:57 +08:00
-LAN- 04d13a8116
feat(credits): Allow configuring the model-credit mapping (#13274)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 11:01:31 +08:00
-LAN- b47669b80b
fix: deduct LLM quota after processing invoke result (#13075)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-02 12:05:11 +08:00
yihong 56e15d09a9
feat: mypy for all type checks (#10921) 2024-12-24 18:38:51 +08:00
JasonVV 4b1e13e982
Fix 11979 (#11984) 2024-12-23 14:30:04 +08:00
-LAN- 996a9135f6
feat(llm_node): support order in text and files (#11837)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-12-20 14:12:50 +08:00
Novice 79a710ce98
Feat: continue on error (#11458)
Co-authored-by: Novice Lee <novicelee@NovicedeMacBook-Pro.local>
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
2024-12-11 14:22:42 +08:00
yihong 716576043d
fix: issue 11247 where Completion mode content may be a list or str (#11504)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2024-12-10 23:22:14 +08:00
-LAN- 464e6354c5
feat: correct the prompt grammar. (#11328)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-12-04 15:12:47 +08:00
-LAN- 223a30401c
fix: LLM invoke error should not be raised (#11141)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-11-26 20:56:48 +08:00
-LAN- 044e7b63c2
fix(llm_node): Ignore file if not supported. (#11114) 2024-11-26 14:14:14 +08:00
-LAN- 5b7b328193
feat: Allow files in the system prompt even when the model does not support them. (#11111) 2024-11-26 13:45:49 +08:00
-LAN- cbb4e95928
fix(llm_node): Ignore user query when memory is disabled. (#11106) 2024-11-26 13:07:32 +08:00
-LAN- 20c091a5e7
fix: user query is ignored if query_prompt_template is an empty string (#11103) 2024-11-26 12:47:59 +08:00
-LAN- 60b5dac3ab
fix: query will be None if the query_prompt_template does not exist (#11031)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2024-11-24 21:06:51 +08:00
非法操作 08ac36812b
feat: support LLM processing of document files (#10966)
Co-authored-by: -LAN- <laipz8200@outlook.com>
2024-11-22 19:32:44 +08:00
-LAN- c5f7d650b5
feat: Allow using file variables directly in the LLM node and support more file types. (#10679)
Co-authored-by: Joel <iamjoel007@gmail.com>
2024-11-22 16:30:22 +08:00
非法操作 033ab5490b
feat: support LLM understanding of video (#9828) 2024-11-08 13:22:52 +08:00
-LAN- 38bca6731c
refactor(workflow): introduce specific error handling for LLM nodes (#10221) 2024-11-04 15:22:58 +08:00
-LAN- 8b5ea39916
chore(llm_node): remove unnecessary type ignore for context assignment (#10216) 2024-11-04 15:22:31 +08:00
-LAN- 3b53e06e0d
fix(workflow): refine variable type checks in LLMNode (#10051) 2024-10-30 16:23:12 +08:00
-LAN- eb87e690ed
fix(llm-node): handle NoneSegment variables properly (#9978) 2024-10-30 08:46:11 +08:00
-LAN- d018b32d0b
fix(workflow): enhance prompt handling with vision support (#9790) 2024-10-24 17:52:11 +08:00
-LAN- 8f670f31b8
refactor(variables): replace deprecated 'get_any' with 'get' method (#9584) 2024-10-22 10:49:19 +08:00
-LAN- 5838345f48
fix(entities): add validator for `VisionConfig` to handle None values (#9598) 2024-10-22 10:49:03 +08:00
-LAN- 2e657b7b12
fix(workflow): handle NoneSegments in variable extraction (#9585) 2024-10-22 08:59:04 +08:00
-LAN- c063617553
fix(workflow): improve database session handling and variable management (#9581) 2024-10-22 00:42:40 +08:00
-LAN- e61752bd3a
feat/enhance the multi-modal support (#8818) 2024-10-21 10:43:49 +08:00
Bowen Liang 40fb4d16ef
chore: refurbish Python code by applying refurb linter rules (#8296) 2024-09-12 15:50:49 +08:00
Jyong bb3002b173
revert page column (#8217) 2024-09-10 18:21:22 +08:00
Bowen Liang 2cf1187b32
chore(api/core): apply ruff reformatting (#7624) 2024-09-10 17:00:20 +08:00
takatost dabfd74622
feat: Parallel Execution of Nodes in Workflows (#8192)
Co-authored-by: StyleZhang <jasonapring2015@outlook.com>
Co-authored-by: Yi <yxiaoisme@gmail.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2024-09-10 15:23:16 +08:00
Byeongjin Kang d489b8b3e0
feat: return page number of pdf documents upon retrieval (#7749) 2024-09-05 16:43:26 +08:00
Joe fee4d3f6ca
feat: ops trace add llm model (#7306) 2024-09-04 10:39:00 +08:00
orangeclk f53454f81d
add finish_reason to the LLM node output (#7498) 2024-08-21 17:29:30 +08:00
-LAN- 4f5f27cf2b
refactor(api/core/workflow/enums.py): Rename SystemVariable to SystemVariableKey. (#7445) 2024-08-20 17:52:06 +08:00
-LAN- 8f16165f92
chore(api/core): Improve FileVar's type hint and imports. (#7290) 2024-08-15 12:43:18 +08:00
-LAN- 32dc963556
feat(api/workflow): Add `Conversation.dialogue_count` (#7275) 2024-08-15 10:53:05 +08:00
-LAN- ad7552ea8d
fix(api/core/workflow/nodes/llm/llm_node.py): Fix LLM Node error. (#6576) 2024-07-23 17:09:16 +08:00
takatost 6b5fac3004
fix: fetch context error in llm node (#6562) 2024-07-23 15:04:51 +08:00
-LAN- 5e6fc58db3
Feat/environment variables in workflow (#6515)
Co-authored-by: JzoNg <jzongcode@gmail.com>
2024-07-22 15:29:39 +08:00
rerorero 3a423e8ce7
fix: vision model always with low quality (#5253) 2024-06-16 09:46:17 +08:00
Yeuoly 8578ee0864
feat: support LLM jinja2 template prompt (#3968)
Co-authored-by: Joel <iamjoel007@gmail.com>
2024-05-10 18:08:32 +08:00
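
Commit 8578ee0864 above adds jinja2 template prompts for the LLM node. A minimal sketch of what rendering such a template looks like with the jinja2 library follows; the template text and variable names are invented for illustration and are not from the repository.

```python
# Minimal sketch: rendering a jinja2 prompt template (illustrative only).
# The template text and variables are invented; they are not from the repository.
from jinja2 import Template

template = Template(
    "Answer the question using only the given context.\n"
    "Context: {{ context }}\n"
    "Question: {{ question }}"
)
prompt = template.render(
    context="Dify workflow documentation",
    question="What does the LLM node do?",
)
print(prompt)
```
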
takatost 12435774ca
feat: query prompt template support in chatflow (#3791)
Co-authored-by: Joel <iamjoel007@gmail.com>
2024-04-25 18:01:53 +08:00
takatost 3da179f77b
feat: add conversation_id and user_id in chatflow/workflow system vars (#3771)
Co-authored-by: Joel <iamjoel007@gmail.com>
2024-04-24 17:20:01 +08:00
takatost b890c11c14
feat: filter empty content messages in llm node (#3547) 2024-04-17 13:30:33 +08:00
takatost 1219e41d29
fix: array[string] context in llm node invalid (#3518) 2024-04-16 14:39:14 +08:00
takatost cfb5ccc7d3
fix: image was sent to an unsupported LLM when sending second message (#3268) 2024-04-10 10:29:52 +08:00