dify/api/core/llm_generator
GareArc 22c8d8d772
feat(telemetry): add prompt generation telemetry to Enterprise OTEL
- Add PromptGenerationTraceInfo trace entity with operation_type field
- Implement telemetry for rule-generate, code-generate, structured-output, instruction-modify operations
- Emit metrics: tokens (total/input/output), duration histogram, requests counter, errors counter
- Emit structured logs with model info and operation context
- Content redaction controlled by ENTERPRISE_INCLUDE_CONTENT env var
- Fix user_id propagation in TraceTask kwargs
- Fix latency calculation when llm_result is None

No spans exported - metrics and logs only for lightweight observability.
2026-02-05 20:14:49 -08:00
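The commit above describes two behavioral details worth making concrete: content redaction gated by the `ENTERPRISE_INCLUDE_CONTENT` env var, and a latency fallback when `llm_result` is `None`. The sketch below is a hedged illustration of that logic only; the `PromptGenerationTraceInfo` name comes from the commit message, but its fields and the `build_trace_info` helper are assumptions, not Dify's actual implementation.

```python
import os
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptGenerationTraceInfo:
    # Hypothetical fields; only operation_type is named in the commit message.
    operation_type: str          # e.g. "rule-generate", "code-generate"
    model: str
    prompt: Optional[str]        # redacted unless content capture is enabled
    latency_ms: float

def include_content() -> bool:
    # Content redaction controlled by the ENTERPRISE_INCLUDE_CONTENT env var,
    # as stated in the commit message; the parsing here is an assumption.
    return os.environ.get("ENTERPRISE_INCLUDE_CONTENT", "false").lower() == "true"

def build_trace_info(operation_type: str, model: str, prompt: str,
                     started_at: float, llm_result: Optional[dict]) -> PromptGenerationTraceInfo:
    # The commit fixes latency calculation when llm_result is None: if the LLM
    # call failed, fall back to wall-clock elapsed time rather than reading a
    # latency off a missing result object.
    if llm_result is not None and llm_result.get("latency_ms") is not None:
        latency_ms = llm_result["latency_ms"]
    else:
        latency_ms = (time.monotonic() - started_at) * 1000.0
    return PromptGenerationTraceInfo(
        operation_type=operation_type,
        model=model,
        prompt=prompt if include_content() else "[REDACTED]",
        latency_ms=latency_ms,
    )
```

With the env var unset, the prompt field is emitted as `"[REDACTED]"`; setting `ENTERPRISE_INCLUDE_CONTENT=true` passes it through. The actual metric emission (token counters, duration histogram, error counter) would be layered on top of this via the OpenTelemetry metrics API.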
output_parser Ensure suggested questions parser returns typed sequence (#27104) 2025-10-20 13:01:09 +08:00
__init__.py FEAT: NEW WORKFLOW ENGINE (#3160) 2024-04-08 18:51:46 +08:00
entities.py refactor: rm some dict api/controllers/console/app/generator.py api/core/llm_generator/llm_generator.py (#31709) 2026-01-30 17:37:20 +09:00
llm_generator.py feat(telemetry): add prompt generation telemetry to Enterprise OTEL 2026-02-05 20:14:49 -08:00
prompts.py fix: summary index bug (#31810) 2026-02-02 09:45:17 +08:00