Accuracy

Intermediate

Fraction of correct predictions; can be misleading on imbalanced datasets.


Why It Matters

Accuracy is a fundamental metric in evaluating machine learning models, providing a quick overview of performance. However, its limitations in imbalanced datasets highlight the need for a more nuanced understanding of model effectiveness, which is critical in applications like fraud detection and disease diagnosis.

Accuracy is defined as the ratio of correctly predicted instances to the total number of instances in a dataset, expressed mathematically as Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives. While accuracy is a widely used metric for evaluating model performance, it can be misleading in imbalanced datasets, where one class significantly outnumbers another. In such cases, high accuracy may not reflect the model's true predictive capability: a model can score well simply by always predicting the majority class. Accuracy should therefore be considered alongside other metrics such as precision, recall, and F1 score for a more comprehensive evaluation. Its appeal lies in its simplicity and interpretability, which make it a common starting point for assessing classification models, provided its limitations are kept in mind.
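A minimal sketch of this pitfall (illustrative only; the dataset and the always-majority "model" are invented for the example): on a 95/5 class split, a classifier that never predicts the minority class reaches 95% accuracy while its recall on the positive class is zero.

```python
# Illustrative sketch: high accuracy on an imbalanced dataset can coexist
# with a model that never finds a single positive instance.

def accuracy(y_true, y_pred):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN), i.e. correct / total."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Recall = TP / (TP + FN): the share of actual positives recovered."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical imbalanced labels: 95 negatives, 5 positives
# (e.g. rare fraud cases or disease diagnoses).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # a "model" that always predicts the majority class

print(accuracy(y_true, y_pred))  # 0.95 — looks impressive
print(recall(y_true, y_pred))    # 0.0  — misses every positive case
```

This is why the entry recommends pairing accuracy with precision, recall, and F1: those metrics expose the zero-recall failure that the 0.95 accuracy hides.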

