Recall

Intermediate

The fraction of actual positive cases that the model correctly identifies; sensitive to false negatives.


Why It Matters

Recall is critical in fields where identifying all positive cases is essential, such as healthcare and security. High recall ensures that important cases are not overlooked, making it a key metric for evaluating the effectiveness of classification models.

Recall, also known as the true positive rate or sensitivity, is the ratio of true positive predictions to the total number of actual positive instances in the dataset, mathematically expressed as Recall = TP / (TP + FN). It measures the model's ability to find all relevant instances, which makes it crucial in scenarios where the cost of a false negative is high. Recall is sensitive to the number of false negatives: a high recall indicates that most actual positives are correctly identified, while a low recall means many positives are missed. In terms of the confusion matrix, recall is derived from the TP and FN counts, making it an essential component of classification evaluation, particularly in applications such as disease detection and fraud identification, where failing to identify a positive case can have serious consequences.
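The formula above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the function names and the convention that positives are labeled 1 are assumptions for the example.

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): fraction of actual positives identified."""
    if tp + fn == 0:
        return 0.0  # no actual positives; recall is undefined, return 0 by convention
    return tp / (tp + fn)


def recall_from_labels(y_true, y_pred) -> float:
    """Compute recall directly from label lists (positives labeled 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return recall(tp, fn)


# 8 true positives, 2 false negatives -> recall = 0.8
print(recall(8, 2))
```

A disease-detection model with 2 missed cases out of 10 actual patients has recall 0.8, regardless of how many healthy patients it flags; that separate concern is measured by precision.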

