Catastrophic Forgetting

Intermediate

Loss of old knowledge when learning new tasks.


Why It Matters

Understanding catastrophic forgetting is crucial for advancing AI technologies, especially in applications requiring continual learning, such as robotics and personalized AI assistants. By addressing this issue, developers can create systems that retain knowledge over time, leading to more effective and adaptable AI solutions.

Catastrophic forgetting refers to the phenomenon where a neural network loses previously acquired knowledge upon learning new information. It occurs primarily in traditional neural network architectures trained sequentially on different tasks without any mechanism for retaining knowledge from earlier tasks. Mathematically, it manifests as a significant increase in the loss L on previously learned tasks when the model is fine-tuned on a new task, and hence a high error on earlier data.

Techniques such as Elastic Weight Consolidation (EWC) and progressive neural networks have been proposed to mitigate this issue, either by penalizing changes to weights that were important for earlier tasks or by creating separate pathways for new tasks. Catastrophic forgetting is a central challenge in continual learning and relates to the broader goal of lifelong learning, where models are expected to adapt to new information without losing prior knowledge.
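As an illustration of the EWC idea, the sketch below computes the quadratic penalty lam/2 · Σᵢ Fᵢ (θᵢ − θ*ᵢ)², where F is a diagonal approximation of the Fisher information for the old task. The function name, weights, and Fisher values here are illustrative, not from any particular library:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=100.0):
    """Quadratic EWC penalty: lam/2 * sum_i F_i * (theta_i - theta*_i)^2.

    `fisher` approximates the diagonal Fisher information, which measures
    how important each weight was for the previously learned task.
    """
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

# Hypothetical numbers: weights drift while training on a new task.
old    = np.array([1.0, -2.0, 0.5])   # weights after task A
fisher = np.array([5.0,  0.1, 0.0])   # per-weight importance for task A
new    = np.array([1.1, -1.0, 3.0])   # weights after naive task-B training

# Weights with high Fisher values are penalized heavily when they move;
# weights with Fisher near zero may change freely to fit the new task.
penalty = ewc_penalty(new, old, fisher)  # added to the new task's loss
```

During training on the new task, this penalty is added to the new task's loss, so gradient descent trades off new-task performance against drift in the weights that mattered for the old task.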

