dify/api/core/llm_generator
GareArc 2d60be311d
fix: extract model_provider from model_config in prompt generation trace
The model_provider field in prompt generation traces was derived by parsing
the model name for a 'provider/model' pattern, which yielded an empty string
whenever the name contained no '/' (e.g., 'deepseek-chat').

Now extracts the provider directly from the model_config parameter, with
a fallback to the old parsing logic for backward compatibility.

Changes:
- Update _emit_prompt_generation_trace to accept model_config parameter
- Extract provider from model_config.get('provider') when available
- Update all 6 call sites to pass model_config
- Maintain backward compatibility with fallback logic
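
A minimal sketch of the extraction logic this commit describes, assuming
_emit_prompt_generation_trace delegates to a small helper. The helper name
_extract_model_provider and the dict-shaped model_config are illustrative
assumptions; only _emit_prompt_generation_trace, model_config.get('provider'),
and the '/'-parsing fallback come from the change itself.

```python
from typing import Any, Optional


def _extract_model_provider(model_config: Optional[dict[str, Any]], model_name: str) -> str:
    # Hypothetical helper illustrating the described fix.
    # Preferred path: read the provider straight from model_config.
    if model_config:
        provider = model_config.get("provider")
        if provider:
            return provider
    # Backward-compatible fallback: parse a "provider/model" style name.
    # A bare name such as 'deepseek-chat' has no '/', so this still yields
    # an empty string, which is the case the direct lookup above fixes.
    if "/" in model_name:
        return model_name.split("/", 1)[0]
    return ""
```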
2026-02-05 20:15:11 -08:00
output_parser Ensure suggested questions parser returns typed sequence (#27104) 2025-10-20 13:01:09 +08:00
__init__.py FEAT: NEW WORKFLOW ENGINE (#3160) 2024-04-08 18:51:46 +08:00
entities.py refactor: rm some dict api/controllers/console/app/generator.py api/core/llm_generator/llm_generator.py (#31709) 2026-01-30 17:37:20 +09:00
llm_generator.py fix: extract model_provider from model_config in prompt generation trace 2026-02-05 20:15:11 -08:00
prompts.py fix: summary index bug (#31810) 2026-02-02 09:45:17 +08:00