Logits

Intermediate

Raw, unnormalized model outputs produced before conversion to probabilities; manipulated during decoding and calibration.


Why It Matters

Logits are fundamental to how neural networks operate, serving as the basis for the probabilities produced in classification tasks. Manipulating them during decoding is crucial for producing high-quality outputs in AI applications, affecting everything from language translation to image recognition.

Logits are the raw, unnormalized outputs produced by a neural network before they are transformed into probabilities by the softmax function. Mathematically, logits are the output of the network's final layer, typically denoted z, where z = W * x + b, with W the weight matrix, x the input vector, and b the bias vector. The softmax function converts the logit vector into a probability distribution over the output classes; this transformation is crucial for tasks such as classification and sequence generation because it lets the model make probabilistic predictions. Logits are also manipulated during decoding, for example in beam search or sampling methods (such as temperature scaling), to influence which outputs are selected based on their relative likelihoods.
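A minimal sketch of the ideas above: a toy linear layer produces logits z = W * x + b, softmax converts them into a probability distribution, and dividing logits by a temperature before softmax reshapes that distribution at decoding time. The weight, input, and bias values are illustrative placeholders, not taken from any real model.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 3-class classifier: z = W * x + b (illustrative values)
W = [[0.2, -0.5],   # weight matrix: 3 classes x 2 input features
     [1.0,  0.3],
     [-0.4, 0.8]]
x = [1.0, 2.0]      # input vector
b = [0.1, 0.0, -0.2]  # bias vector

# Logits: one raw score per class, not yet probabilities
logits = [sum(w * xi for w, xi in zip(row, x)) + bias
          for row, bias in zip(W, b)]

probs = softmax(logits)  # probabilities now sum to 1

# Decoding-time manipulation: temperature scaling. T > 1 flattens the
# distribution (more diverse sampling); T < 1 sharpens it.
def temperature_scale(logits, T):
    return softmax([z / T for z in logits])
```

The class with the largest logit also has the largest probability, since softmax is monotonic; temperature scaling changes how concentrated the distribution is without changing that ordering.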

