Recall
Intermediate
The fraction of actual positive cases correctly identified; sensitive to false negatives.
Why It Matters
Recall is critical in fields where identifying all positive cases is essential, such as healthcare and security. High recall ensures that important cases are not overlooked, making it a key metric for evaluating the effectiveness of classification models.
Recall, also known as the true positive rate or sensitivity, is the ratio of true positive predictions to the total number of actual positive instances in the dataset, expressed mathematically as Recall = TP / (TP + FN). This metric is crucial in scenarios where the cost of false negatives is high, because it measures the model's ability to find all relevant instances. Recall is sensitive to the number of false negatives: a high recall indicates that most actual positives are correctly identified, while a low recall means many positives are missed. In the confusion matrix, recall is derived from the counts of TP and FN, making it an essential component of classification evaluation, particularly in applications such as disease detection and fraud identification, where failing to identify a positive case can have serious consequences.
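The formula above can be sketched in a few lines of Python. The counts in the example are hypothetical, chosen only to illustrate the computation:

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN).

    Undefined when there are no actual positives (TP + FN == 0).
    """
    if tp + fn == 0:
        raise ValueError("Recall is undefined: no actual positive instances")
    return tp / (tp + fn)

# Hypothetical disease-screening counts: 90 sick patients correctly
# flagged (TP), 10 missed (FN) -> recall = 90 / (90 + 10) = 0.9
print(recall(tp=90, fn=10))  # 0.9
```

A recall of 0.9 here means 10% of sick patients were missed, which is exactly the kind of false-negative cost the metric is designed to surface.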