dify/api/core/llm_generator
GareArc c56e5a5b71
feat(telemetry): add prompt generation telemetry to Enterprise OTEL
- Add PromptGenerationTraceInfo trace entity with operation_type field (see the sketch after this list)
- Implement telemetry for rule-generate, code-generate, structured-output, instruction-modify operations
- Emit metrics: tokens (total/input/output), duration histogram, requests counter, errors counter
- Emit structured logs with model info and operation context
- Content redaction controlled by ENTERPRISE_INCLUDE_CONTENT env var
- Fix user_id propagation in TraceTask kwargs
- Fix latency calculation when llm_result is None
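
A minimal sketch of what the trace entity might look like, assuming a Pydantic model in the style of Dify's other trace entities. Only the class name and the operation_type field are confirmed by the commit message; every other field here is a hypothetical illustration inferred from the bullets above.

```python
# Hypothetical sketch of the PromptGenerationTraceInfo entity. Only the class
# name and operation_type are confirmed by the commit message; the remaining
# fields are illustrative, inferred from the telemetry described above.
from datetime import datetime
from typing import Any, Optional

from pydantic import BaseModel, Field


class PromptGenerationTraceInfo(BaseModel):
    # One of: "rule-generate", "code-generate",
    # "structured-output", "instruction-modify"
    operation_type: str
    user_id: Optional[str] = None          # propagated via TraceTask kwargs
    model_provider: Optional[str] = None   # model info for the structured logs
    model_name: Optional[str] = None
    total_tokens: int = 0                  # token metrics: total/input/output
    input_tokens: int = 0
    output_tokens: int = 0
    start_time: Optional[datetime] = None  # feeds the duration histogram
    end_time: Optional[datetime] = None    # may be unset when llm_result is None
    error: Optional[str] = None            # increments the errors counter
    metadata: dict[str, Any] = Field(default_factory=dict)
```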

No spans are exported; metrics and logs only, for lightweight observability. A sketch of that emission path follows.
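
A hedged sketch of the metrics-and-logs-only path, building on the entity sketch above. It uses the OpenTelemetry metrics API directly and creates no tracer or spans; the instrument names, attribute keys, and the emit_prompt_generation_telemetry helper are assumptions for illustration, not names taken from the actual change.

```python
# Hypothetical emission path: OTEL metrics plus a structured log record,
# deliberately creating no spans. Instrument names, attribute keys, and the
# helper name are illustrative assumptions; only the metric kinds (token
# counters, duration histogram, request/error counters), the structured log,
# and the ENTERPRISE_INCLUDE_CONTENT gate come from the commit message.
import logging
import os

from opentelemetry import metrics

meter = metrics.get_meter("dify.llm_generator")

token_counter = meter.create_counter("llm_generator.tokens")
duration_histogram = meter.create_histogram("llm_generator.duration", unit="s")
request_counter = meter.create_counter("llm_generator.requests")
error_counter = meter.create_counter("llm_generator.errors")

logger = logging.getLogger("dify.llm_generator.telemetry")

# Content redaction: prompt/completion text is included only when the
# deployment explicitly opts in via ENTERPRISE_INCLUDE_CONTENT.
INCLUDE_CONTENT = os.environ.get("ENTERPRISE_INCLUDE_CONTENT", "false").lower() == "true"


def emit_prompt_generation_telemetry(info: PromptGenerationTraceInfo) -> None:
    attrs = {"operation_type": info.operation_type}
    request_counter.add(1, attrs)
    token_counter.add(info.total_tokens, {**attrs, "kind": "total"})
    token_counter.add(info.input_tokens, {**attrs, "kind": "input"})
    token_counter.add(info.output_tokens, {**attrs, "kind": "output"})

    # Guard the latency calculation: if the LLM call failed before producing
    # a result, end_time is never set (the llm_result-is-None fix above).
    if info.start_time is not None and info.end_time is not None:
        duration_histogram.record(
            (info.end_time - info.start_time).total_seconds(), attrs
        )

    if info.error:
        error_counter.add(1, attrs)

    logger.info(
        "prompt_generation completed",
        extra={
            "operation_type": info.operation_type,
            "model_provider": info.model_provider,
            "model_name": info.model_name,
            "user_id": info.user_id,
            # Redacted unless the enterprise opt-in is set.
            "content": info.metadata.get("content") if INCLUDE_CONTENT else None,
        },
    )
```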
2026-02-04 00:38:17 -08:00
output_parser Ensure suggested questions parser returns typed sequence (#27104) 2025-10-20 13:01:09 +08:00
__init__.py FEAT: NEW WORKFLOW ENGINE (#3160) 2024-04-08 18:51:46 +08:00
llm_generator.py feat(telemetry): add prompt generation telemetry to Enterprise OTEL 2026-02-04 00:38:17 -08:00
prompts.py feat: support suggested_questions_after_answer to be configed (#29254) 2025-12-08 10:27:02 +08:00