Underfitting

Intermediate

When a model is too simple to capture the underlying structure of the data, it performs poorly on both training and test data.

Why It Matters

Understanding underfitting is essential for building effective machine learning models. By ensuring that models are appropriately complex, practitioners can improve prediction accuracy and enhance performance across various applications, from image classification to natural language processing.

Underfitting occurs when a machine learning model is too simple to capture the underlying structure of the data, resulting in poor performance on both training and test datasets. It is characterized by high bias: the model makes assumptions too strong to let it learn adequately from the training data. In practice, underfitting is observed when the model's error remains significantly higher than the error achievable by a more expressive model. Common causes include insufficient model capacity, an inappropriate choice of model architecture, or an inadequate feature representation. To address underfitting, practitioners may increase model complexity, improve feature engineering, or employ more sophisticated algorithms.
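The high-bias pattern described above can be sketched with a minimal, hypothetical example: fitting a straight line to quadratic data. The degree-1 fit cannot represent the curvature, so its error stays high on both the training and test splits, while a degree-2 fit (matching the true data-generating process) drives the error down to roughly the noise level. The specific data and thresholds here are illustrative, not drawn from the glossary entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quadratic data: y = x^2 + Gaussian noise
x = rng.uniform(-3, 3, 200)
y = x**2 + rng.normal(0, 0.5, 200)

x_train, x_test = x[:150], x[150:]
y_train, y_test = y[:150], y[150:]

def fit_and_mse(degree):
    """Least-squares polynomial fit; returns (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# Degree-1 model underfits: error is high on BOTH splits (high bias)
print("degree 1 (train MSE, test MSE):", fit_and_mse(1))

# Degree-2 model matches the underlying structure: error near noise variance
print("degree 2 (train MSE, test MSE):", fit_and_mse(2))
```

The telltale sign of underfitting is that the training error itself is large; this distinguishes it from overfitting, where training error is low but test error is high.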

