Recall

Metric
DashAI.back.metrics.classification.Recall

Fraction of actual positives that are correctly identified.

Recall (also called sensitivity or true positive rate) measures the classifier's ability to find all positive samples. It is the metric of choice when false negatives are costly; in medical screening, for example, missing a disease is more harmful than a false alarm.

For binary tasks, the standard binary recall is used. For multiclass tasks, macro averaging is applied: the unweighted mean of the per-class recall scores.

Recall = TP / (TP + FN)

where TP is the number of true positives and FN the number of false negatives.

Range: [0, 1], higher is better (MAXIMIZE = True).
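
As a quick illustration of the formula and of macro averaging, the following sketch computes recall directly from label arrays with NumPy. It mirrors the behavior described above but is not DashAI's implementation:

    import numpy as np

    def recall_for_class(y_true: np.ndarray, y_pred: np.ndarray, cls: int) -> float:
        """Recall for a single class: TP / (TP + FN)."""
        tp = np.sum((y_pred == cls) & (y_true == cls))
        fn = np.sum((y_pred != cls) & (y_true == cls))
        return tp / (tp + fn)

    y_true = np.array([0, 1, 2, 2, 1, 0])
    y_pred = np.array([0, 1, 1, 2, 1, 0])

    # Per-class recalls: class 0 -> 1.0, class 1 -> 1.0, class 2 -> 0.5.
    # Macro recall is their unweighted mean.
    macro = np.mean([recall_for_class(y_true, y_pred, c) for c in np.unique(y_true)])
    print(macro)  # (1.0 + 1.0 + 0.5) / 3 = 0.833...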

Methods

score(true_labels: 'DashAIDataset', probs_pred_labels: 'np.ndarray', multiclass: Optional[bool] = None) -> float

Defined on Recall

Calculate recall between true labels and predicted labels.

Parameters

true_labels : DashAIDataset
A DashAI dataset with labels.
probs_pred_labels : np.ndarray
A two-dimensional array with one column per class; each row holds the predicted probability that the corresponding example belongs to each class.
multiclass : bool, optional
Whether the task is multiclass classification. If None, it is determined automatically from the number of unique labels.

Returns

float
Recall score between the true labels and the predicted labels.
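
The method consumes a DashAIDataset, which is not constructed here. Assuming the probability matrix behaves as described above, the hypothetical sketch below shows the core mapping, taking each row's argmax as the predicted class, and reproduces a macro-averaged result with scikit-learn's recall_score; equivalence with DashAI's internal code path is an assumption:

    import numpy as np
    from sklearn.metrics import recall_score

    # Hypothetical 3-class probabilities for four examples.
    probs = np.array([
        [0.8, 0.1, 0.1],
        [0.2, 0.7, 0.1],
        [0.1, 0.2, 0.7],
        [0.5, 0.4, 0.1],
    ])
    y_true = np.array([0, 1, 2, 1])

    # Each column is a class, so the argmax of each row is the
    # predicted label, as the parameter description implies.
    y_pred = np.argmax(probs, axis=1)  # -> [0, 1, 2, 0]

    # Macro averaging for the multiclass case.
    print(recall_score(y_true, y_pred, average="macro"))  # 0.833...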

get_metadata(cls: 'BaseMetric') -> Dict[str, Any]

Defined on BaseMetric

Get metadata values for the current metric.

Returns

Dict[str, Any]
Dictionary with the metric's metadata.

is_multiclass(true_labels: 'np.ndarray') -> bool

Defined on ClassificationMetric

Determine if the classification problem is multiclass (more than 2 classes).

Parameters

true_labels : np.ndarray
Array of true labels.

Returns

bool
True if the problem has more than 2 unique classes, False otherwise.
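
The documented rule reduces to counting unique labels. A minimal sketch of that check (not necessarily the library's exact implementation):

    import numpy as np

    def is_multiclass(true_labels: np.ndarray) -> bool:
        # More than two unique classes -> multiclass, per the description above.
        return len(np.unique(true_labels)) > 2

    print(is_multiclass(np.array([0, 1, 0, 1])))  # False (binary)
    print(is_multiclass(np.array([0, 1, 2, 1])))  # True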
