dify/api/core/workflow/nodes/llm
GareArc d858cbdc0c
feat(telemetry): add input/output token split to enterprise OTEL traces
- Add PROMPT_TOKENS and COMPLETION_TOKENS to WorkflowNodeExecutionMetadataKey
- Store prompt/completion tokens in node execution metadata JSON (no schema change)
- Compute the workflow-level token split on the fly by summing over node executions
- Export gen_ai.usage.input_tokens and gen_ai.usage.output_tokens to enterprise telemetry
- Add semantic convention constants for token attributes
- Maintain backward compatibility (historical data shows null)

BREAKING: None
MIGRATION: None (uses JSON metadata, no schema changes)
2026-02-03 19:27:11 -08:00
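The metadata-key addition and on-the-fly summing described in the commit message could be sketched roughly as below. Only `WorkflowNodeExecutionMetadataKey`, `PROMPT_TOKENS`, and `COMPLETION_TOKENS` are named in the commit; the `(str, Enum)` base, the pre-existing members, and the `workflow_token_split` helper are illustrative assumptions, not Dify's actual code:

```python
from enum import Enum


class WorkflowNodeExecutionMetadataKey(str, Enum):
    # Pre-existing key (assumed) plus the two keys the commit adds.
    TOTAL_TOKENS = "total_tokens"
    PROMPT_TOKENS = "prompt_tokens"          # new: input-token count
    COMPLETION_TOKENS = "completion_tokens"  # new: output-token count


def workflow_token_split(node_execution_metadata):
    """Sum prompt/completion tokens across node-execution metadata dicts.

    Rows written before this change lack the new keys, so when no node
    carries a split the result is (None, None) — matching the commit's
    "historical data shows null" behavior. No schema change is needed
    because the counts live in the metadata JSON.
    """
    prompt = completion = 0
    seen = False
    for meta in node_execution_metadata:
        p = meta.get(WorkflowNodeExecutionMetadataKey.PROMPT_TOKENS)
        c = meta.get(WorkflowNodeExecutionMetadataKey.COMPLETION_TOKENS)
        if p is not None or c is not None:
            seen = True
            prompt += p or 0
            completion += c or 0
    if not seen:
        return None, None
    return prompt, completion
```

Summing at read time keeps the workflow-level split consistent with node-level data without a backfill migration, at the cost of one pass over the node executions per trace export.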
__init__.py feat/enhance the multi-modal support (#8818) 2024-10-21 10:43:49 +08:00
entities.py feat: knowledge pipeline (#25360) 2025-09-18 12:49:10 +08:00
exc.py remove bare list, dict, Sequence, None, Any (#25058) 2025-09-06 03:32:23 +08:00
file_saver.py Fix: Remove workflow/nodes from pyright exclusion (#26461) 2025-09-30 15:39:04 +08:00
llm_utils.py feat: credit pool (#30720) 2026-01-08 13:17:30 +08:00
node.py feat(telemetry): add input/output token split to enterprise OTEL traces 2026-02-03 19:27:11 -08:00
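The `gen_ai.usage.*` export that node.py's commit adds can be sketched as plain attribute-building logic. The attribute strings follow the OpenTelemetry GenAI semantic conventions the commit references; the constant and function names here are assumptions, not Dify's actual identifiers:

```python
# Semantic-convention constants for the token attributes (assumed names).
GEN_AI_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens"
GEN_AI_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens"


def token_split_attributes(prompt_tokens, completion_tokens):
    """Build the span-attribute dict for an enterprise OTEL trace.

    Keys are omitted when the split is unknown, so historical workflow
    runs simply export no input/output token attributes (null) while
    new runs carry both.
    """
    attrs = {}
    if prompt_tokens is not None:
        attrs[GEN_AI_USAGE_INPUT_TOKENS] = prompt_tokens
    if completion_tokens is not None:
        attrs[GEN_AI_USAGE_OUTPUT_TOKENS] = completion_tokens
    return attrs
```

Omitting the keys rather than writing 0 keeps old and new traces distinguishable downstream, which matches the backward-compatibility note in the commit.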