Catastrophic Forgetting
Intermediate
Loss of old knowledge when learning new tasks.
Why It Matters
Understanding catastrophic forgetting is crucial for advancing AI technologies, especially in applications requiring continual learning, such as robotics and personalized AI assistants. By addressing this issue, developers can create systems that retain knowledge over time, leading to more effective and adaptable AI solutions.
Catastrophic forgetting refers to the phenomenon where a neural network loses previously acquired knowledge upon learning new information. This occurs primarily in traditional neural network architectures that are trained sequentially on different tasks without retaining knowledge from earlier tasks. Mathematically, it can be observed as a significant increase in the loss function L for previously learned tasks when the model is fine-tuned on new tasks, leading to high generalization error on earlier data.

Techniques such as Elastic Weight Consolidation (EWC) and progressive neural networks have been proposed to mitigate this issue, either by preserving important weights or by creating separate pathways for new tasks. Catastrophic forgetting is a critical challenge in the field of continual learning and relates to the broader concept of lifelong learning, where models are expected to adapt to new information without losing prior knowledge.
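The weight-preservation idea behind EWC can be sketched in a few lines. The snippet below is a minimal illustration, not a full implementation: it assumes the model's parameters are a flat vector, that `theta_star` holds the parameters learned on the old task, and that `fisher` is a diagonal Fisher information estimate marking which parameters mattered for that task (function and variable names here are hypothetical).

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters while training on the new task
    theta_star -- parameters found after training on the old task
    fisher     -- diagonal Fisher information estimate (per-parameter importance)
    lam        -- strength of the consolidation term
    """
    diff = theta - theta_star
    return 0.5 * lam * np.sum(fisher * diff ** 2)

def total_loss(new_task_loss, theta, theta_star, fisher, lam=1.0):
    """New-task loss plus the EWC penalty that resists forgetting."""
    return new_task_loss + ewc_penalty(theta, theta_star, fisher, lam)

# Moving a parameter the Fisher marks as important (index 0) costs far
# more than moving an unimportant one (index 1) by the same amount.
theta_star = np.array([1.0, 2.0])
fisher = np.array([10.0, 0.1])
theta = np.array([1.5, 2.5])          # both parameters shifted by 0.5
penalty = ewc_penalty(theta, theta_star, fisher, lam=2.0)
```

Because the penalty grows quadratically with distance along high-Fisher directions, the optimizer is free to adjust unimportant weights for the new task while keeping the weights that encoded the old task close to their previous values.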