
F1

Metric
DashAI.back.metrics.classification.F1

Harmonic mean of precision and recall.

The F1 score balances precision (the fraction of predicted positives that are actually positive) and recall (the fraction of actual positives that were found), making it a strong choice for imbalanced classification tasks where false positives and false negatives carry equal cost.

For binary tasks the standard F1 is used. For multiclass tasks, macro averaging (the unweighted mean of the per-class F1 scores) is applied so that minority classes are not drowned out by majority classes.

F1 = 2 · (Precision · Recall) / (Precision + Recall)

Range: [0, 1], higher is better (MAXIMIZE = True).
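As a worked sketch of the formula and the macro average in plain NumPy (the helpers f1_binary and f1_macro below are illustrative, not DashAI APIs):

import numpy as np

def f1_binary(y_true, y_pred):
    # F1 = 2 * (Precision * Recall) / (Precision + Recall), for the positive class (label 1).
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def f1_macro(y_true, y_pred):
    # Macro averaging: unweighted mean of the per-class F1 scores, one-vs-rest per class.
    classes = np.unique(y_true)
    return float(np.mean([
        f1_binary((y_true == c).astype(int), (y_pred == c).astype(int))
        for c in classes
    ]))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(f1_macro(y_true, y_pred))  # (0.5 + 0.8 + 0.667) / 3 ≈ 0.656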


Methods

score(true_labels: 'DashAIDataset', probs_pred_labels: 'np.ndarray', multiclass: Optional[bool] = None) -> float

Defined on F1

Calculate the F1 score between the true and predicted labels.

Parameters

true_labels : DashAIDataset
A DashAI dataset with labels.
probs_pred_labels : np.ndarray
A two-dimensional array in which each row is an example and each column a class; entry (i, j) is the probability that example i belongs to class j.
multiclass : bool, optional
Whether the task is a multiclass classification. If None, it will be determined automatically from the number of unique labels.

Returns

float
F1 score between the true and predicted labels.
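A hedged usage sketch: the exact construction of a DashAIDataset depends on the surrounding pipeline, so the equivalent computation is shown with NumPy and scikit-learn. Each example's predicted class is the column of probs_pred_labels with the highest probability.

import numpy as np
from sklearn.metrics import f1_score  # reference implementation of the same score

# Each row is an example; each column holds the probability of one class.
probs_pred_labels = np.array([
    [0.9, 0.1],
    [0.3, 0.7],
    [0.6, 0.4],
    [0.2, 0.8],
])
true_labels = np.array([0, 1, 1, 1])

# Collapse the probability matrix to hard predictions: argmax over the columns.
pred_labels = probs_pred_labels.argmax(axis=1)

# Binary task, so the standard F1 applies; a multiclass matrix would use average="macro".
print(f1_score(true_labels, pred_labels))  # 0.8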

get_metadata(cls: 'BaseMetric') -> Dict[str, Any]

Defined on BaseMetric

Get metadata values for the current metric.

Returns

Dict[str, Any]
Dictionary with the metric's metadata.

is_multiclass(true_labels: 'np.ndarray') -> bool

Defined on ClassificationMetric

Determine if the classification problem is multiclass (more than 2 classes).

Parameters

true_labels : np.ndarray
Array of true labels.

Returns

bool
True if the problem has more than 2 unique classes, False otherwise.
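The check itself is simple enough to sketch directly (an illustrative equivalent, not the library source):

import numpy as np

def is_multiclass(true_labels: np.ndarray) -> bool:
    # More than two distinct labels means the task is multiclass.
    return len(np.unique(true_labels)) > 2

print(is_multiclass(np.array([0, 1, 0, 1])))     # False: two classes, binary
print(is_multiclass(np.array([0, 1, 2, 1, 0])))  # True: three classes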

Compatible with