Accuracy
Fraction of correctly classified samples over all predictions.
Accuracy is the simplest classification metric: the number of correct predictions divided by the total number of samples. It is well-suited for balanced datasets but can be misleading when class distributions are skewed — a model that always predicts the majority class would still score high without learning anything useful.
Accuracy = correct predictions / total samples
Range: [0, 1], higher is better (MAXIMIZE = True).
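As a minimal sketch (not the DashAI implementation), accuracy over a probability matrix can be computed by taking the argmax of each row as the predicted class and comparing it against the true labels. The helper name `accuracy_from_probs` is hypothetical:

```python
import numpy as np

def accuracy_from_probs(true_labels: np.ndarray, probs: np.ndarray) -> float:
    """Hypothetical helper: fraction of rows whose argmax matches the true label."""
    preds = np.argmax(probs, axis=1)              # predicted class per sample
    return float(np.mean(preds == true_labels))   # correct / total

# Example: 3 of 4 predictions are correct -> accuracy 0.75
y_true = np.array([0, 1, 1, 2])
probs = np.array([
    [0.9, 0.05, 0.05],  # predicts 0 (correct)
    [0.2, 0.7, 0.1],    # predicts 1 (correct)
    [0.6, 0.3, 0.1],    # predicts 0 (wrong)
    [0.1, 0.2, 0.7],    # predicts 2 (correct)
])
print(accuracy_from_probs(y_true, probs))  # 0.75
```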
Methods
score(true_labels: 'DashAIDataset', probs_pred_labels: 'np.ndarray') -> float
Calculate the accuracy between true labels and predicted labels.
Parameters
- true_labels : DashAIDataset
- A DashAI dataset with labels.
- probs_pred_labels : np.ndarray
- A two-dimensional matrix in which each column represents a class and the row values represent the probability that an example belongs to the class associated with the column.
Returns
- float
- Accuracy score between true labels and predicted labels.
get_metadata(cls: 'BaseMetric') -> Dict[str, Any]
Get metadata values for the current metric. (Inherited from BaseMetric.)
Returns
- Dict[str, Any]
- Dictionary with the metadata
is_multiclass(true_labels: 'np.ndarray') -> bool
Determine whether the classification problem is multiclass (more than 2 classes). (Inherited from ClassificationMetric.)
Parameters
- true_labels : np.ndarray
- Array of true labels.
Returns
- bool
- True if the problem has more than 2 unique classes, False otherwise.
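A plausible re-implementation of this check (an assumption, not DashAI's actual code) just counts the distinct labels with `np.unique`:

```python
import numpy as np

def is_multiclass(true_labels: np.ndarray) -> bool:
    # More than 2 distinct classes -> multiclass problem
    return len(np.unique(true_labels)) > 2

print(is_multiclass(np.array([0, 1, 0, 1])))  # False (binary)
print(is_multiclass(np.array([0, 1, 2, 1])))  # True (3 classes)
```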
Compatible with
- TabularClassificationTask
- ImageClassificationTask
- TextClassificationTask