dify/api/services/errors/evaluation.py
FFXN f9b76f0f52
feat: evaluation (#35251)
Co-authored-by: jyong <718720800@qq.com>
Co-authored-by: Yansong Zhang <916125788@qq.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: hj24 <mambahj24@gmail.com>
Co-authored-by: hj24 <huangjian@dify.ai>
Co-authored-by: Joel <iamjoel007@gmail.com>
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
Co-authored-by: CodingOnStar <hanxujiang@dify.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
2026-04-15 16:09:40 +08:00


from services.errors.base import BaseServiceError


class EvaluationFrameworkNotConfiguredError(BaseServiceError):
    def __init__(self, description: str | None = None):
        super().__init__(description or "Evaluation framework is not configured. Set EVALUATION_FRAMEWORK env var.")


class EvaluationNotFoundError(BaseServiceError):
    def __init__(self, description: str | None = None):
        super().__init__(description or "Evaluation not found.")


class EvaluationDatasetInvalidError(BaseServiceError):
    def __init__(self, description: str | None = None):
        super().__init__(description or "Evaluation dataset is invalid.")


class EvaluationMaxConcurrentRunsError(BaseServiceError):
    def __init__(self, description: str | None = None):
        super().__init__(description or "Maximum number of concurrent evaluation runs reached.")