dify/api/core/model_runtime/model_providers
Latest commit: fix: handle document fetching from URL in Anthropic LLM model, solving base64 decoding error (#11858) by Kalo Chin (2681bafb76), 2024-12-20 18:23:42 +08:00
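The latest commit concerns documents supplied to the Anthropic model as a URL rather than as inline base64 data, where decoding the URL string as base64 fails. The snippet below is only a minimal sketch of that general pattern, assuming a `requests`-based fetch; the function name is hypothetical and this is not dify's actual Anthropic integration.

```python
# Hypothetical sketch of the pattern behind #11858: document content may arrive
# either as base64 data or as a URL. Decoding a URL string as base64 fails, so
# fetch remote documents first and base64-encode the raw bytes.
import base64

import requests


def to_base64_document(data: str) -> str:
    """Return base64-encoded document bytes, fetching them when given a URL."""
    if data.startswith(("http://", "https://")):
        response = requests.get(data, timeout=30)
        response.raise_for_status()
        return base64.b64encode(response.content).decode("utf-8")
    # Assume the value is already base64-encoded content; pass it through.
    return data
```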
| Name | Latest commit | Date |
| --- | --- | --- |
| __base | | |
| anthropic | fix: handle document fetching from URL in Anthropic LLM model, solving base64 decoding error (#11858) | 2024-12-20 18:23:42 +08:00 |
| azure_ai_studio | | |
| azure_openai | fix: better gard nan value from numpy for issue #11827 (#11864) | 2024-12-20 09:28:32 +08:00 |
| baichuan | fix: volcengine_maas and baichuan message error (#11625) | 2024-12-16 13:05:27 +08:00 |
| bedrock | fix: add safe dictionary access for bedrock credentials (#11860) | 2024-12-20 12:13:39 +09:00 |
| chatglm | | |
| cohere | fix: better gard nan value from numpy for issue #11827 (#11864) | 2024-12-20 09:28:32 +08:00 |
| deepseek | fix: deepseek reports an error when using Response Format #11677 (#11678) | 2024-12-16 12:58:03 +08:00 |
| fireworks | | |
| fishaudio | | |
| gitee_ai | feat: add gitee ai vl models (#11697) | 2024-12-16 18:45:26 +08:00 |
| google | feat: add gemini-2.0-flash-thinking-exp-1219 (#11863) | 2024-12-20 09:26:31 +08:00 |
| gpustack | fix: int None will cause error for context size (#11055) | 2024-11-25 21:04:16 +08:00 |
| groq | fix: name of llama-3.3-70b-specdec (#11596) | 2024-12-12 16:33:49 +08:00 |
| huggingface_hub | | |
| huggingface_tei | | |
| hunyuan | feat:add hunyuan model(hunyuan-role, hunyuan-large, hunyuan-large-rol… (#11766) | 2024-12-18 15:25:53 +08:00 |
| jina | fix: int None will cause error for context size (#11055) | 2024-11-25 21:04:16 +08:00 |
| leptonai | | |
| localai | | |
| minimax | fix: add the missing abab6.5t-chat model of Minimax (#11484) | 2024-12-09 17:59:20 +08:00 |
| mistralai | [Pixtral] Add new model ; add vision (#11231) | 2024-12-11 10:14:16 +08:00 |
| mixedbread | | |
| moonshot | fix: use `removeprefix()` instead of `lstrip()` to remove the `data:` prefix (#11272) | 2024-12-03 09:16:25 +08:00 |
| nomic | | |
| novita | | |
| nvidia | | |
| nvidia_nim | | |
| oci | | |
| ollama | feat: support json_schema for ollama models (#11449) | 2024-12-08 08:36:12 +08:00 |
| openai | fix: better gard nan value from numpy for issue #11827 (#11864) | 2024-12-20 09:28:32 +08:00 |
| openai_api_compatible | fix: better error message for stream (#11635) | 2024-12-15 17:16:04 +08:00 |
| openllm | | |
| openrouter | | |
| perfxcloud | fix: int None will cause error for context size (#11055) | 2024-11-25 21:04:16 +08:00 |
| replicate | fix: remove ruff ignore SIM300 (#11810) | 2024-12-19 18:30:51 +08:00 |
| sagemaker | | |
| siliconflow | fix: silicon change its model fix #11844 (#11847) | 2024-12-19 20:50:09 +08:00 |
| spark | | |
| stepfun | fix: use `removeprefix()` instead of `lstrip()` to remove the `data:` prefix (#11272) | 2024-12-03 09:16:25 +08:00 |
| tencent | | |
| togetherai | | |
| tongyi | chore: the consistency of MultiModalPromptMessageContent (#11721) | 2024-12-17 15:01:38 +08:00 |
| triton_inference_server | | |
| upstage | fix: better gard nan value from numpy for issue #11827 (#11864) | 2024-12-20 09:28:32 +08:00 |
| vertex_ai | fix: better memory usage from 800+ to 500+ (#11796) | 2024-12-20 14:51:43 +08:00 |
| vessl_ai | | |
| volcengine_maas | feat(ark): add doubao-pro-256k and doubao-embedding-large (#11831) | 2024-12-19 17:49:31 +08:00 |
| voyage | fix: int None will cause error for context size (#11055) | 2024-11-25 21:04:16 +08:00 |
| wenxin | fix: better wenxin rerank handler, close #11252 (#11283) | 2024-12-03 13:57:16 +08:00 |
| x | feat: add grok-2-1212 and grok-2-vision-1212 (#11672) | 2024-12-15 21:18:24 +08:00 |
| xinference | | |
| yi | | |
| zhinao | | |
| zhipuai | feat: add zhipu glm_4v_flash (#11440) | 2024-12-07 22:27:57 +08:00 |
| __init__.py | | |
| _position.yaml | | |
| model_provider_factory.py | | |
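The files at the bottom of the listing (`__init__.py`, `_position.yaml`, `model_provider_factory.py`) suggest that providers are discovered from these subdirectories and that `_position.yaml` controls their ordering. The sketch below only illustrates that idea under those assumptions; the function and its loading logic are hypothetical and do not reproduce dify's real `model_provider_factory.py`.

```python
# Hypothetical sketch of how a provider factory might order the provider
# directories listed above using a _position.yaml file. Requires PyYAML.
# This is an illustration, not dify's actual factory API.
from pathlib import Path

import yaml


def load_provider_order(providers_dir: str) -> list[str]:
    """Return provider directory names, ordered per _position.yaml when present."""
    root = Path(providers_dir)
    providers = sorted(
        p.name for p in root.iterdir() if p.is_dir() and not p.name.startswith("__")
    )
    position_file = root / "_position.yaml"
    if position_file.exists():
        # Assume the file holds a simple YAML list of provider names.
        ordered = yaml.safe_load(position_file.read_text()) or []
        ranked = {name: i for i, name in enumerate(ordered)}
        # Names listed in _position.yaml come first, in that order; the rest
        # keep alphabetical order at the end.
        providers.sort(key=lambda name: (ranked.get(name, len(ranked)), name))
    return providers
```

Under this sketch, calling `load_provider_order("api/core/model_runtime/model_providers")` would yield the provider names above in the order given by `_position.yaml`, with any unlisted providers appended alphabetically.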