Dropout

Intermediate

Randomly zeroing activations during training to reduce co-adaptation and overfitting.


Why It Matters

Dropout is a key technique for improving the generalization of neural networks, making them less likely to overfit to training data. This is especially important in real-world applications where models must perform well on new data. By incorporating dropout, developers can create more reliable AI systems across various domains, such as image recognition and natural language processing.

Dropout is a regularization technique used in neural networks to prevent overfitting by randomly setting a fraction of activations to zero during training. This stochastic approach forces the network to learn redundant representations, since it cannot rely on any single neuron being present. Mathematically, if each neuron is dropped with probability p, the surviving neurons are scaled by 1/(1-p) during training so the expected output is unchanged; this "inverted dropout" formulation means no scaling is needed at inference time, when all neurons are active. Dropout has been shown to improve generalization on unseen data and is most commonly applied to the fully connected layers of deep learning models. It is particularly effective in large networks, where the greater number of parameters raises the risk of overfitting. Dropout can also be viewed as an implicit ensemble method, since each training step effectively trains a different sub-network within the same architecture.
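The scaling rule above can be illustrated with a minimal sketch of inverted dropout in NumPy. The function name and signature here are illustrative, not from any particular library:

```python
import numpy as np

def dropout(x, p, training=True, rng=None):
    """Inverted dropout: zero each activation with probability p and
    scale the survivors by 1/(1-p) so the expected output is unchanged.
    At inference (training=False), the input passes through untouched."""
    if not training or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    # Each element is kept with probability 1-p.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# With p=0.5, roughly half the activations are zeroed and the rest
# doubled, so the mean activation stays close to its original value.
x = np.ones((1000, 100))
out = dropout(x, p=0.5, training=True, rng=np.random.default_rng(0))
print(round(float(out.mean()), 2))
```

In practice, frameworks such as PyTorch (`torch.nn.Dropout`) and Keras (`tf.keras.layers.Dropout`) implement this same inverted formulation, which is why dropout layers must be switched off (evaluation mode) at inference time.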

