Results for "probabilistic latent"
Diffusion performed in latent space for efficiency.
Modeling environment evolution in latent space.
Autoencoder using probabilistic latent variables and KL regularization.
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Model that compresses input into latent space and reconstructs it.
Probabilistic energy-based neural network with hidden variables.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
A proper scoring rule measuring squared error of predicted probabilities for binary outcomes.
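As a minimal sketch (function and variable names are illustrative, not from the source), the Brier score for binary outcomes is just the mean squared error between predicted probabilities and 0/1 labels:

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# A perfectly confident, correct forecaster scores 0; always guessing 0.5 scores 0.25.
score = brier_score([0.9, 0.2, 0.8], [1, 0, 1])  # (0.01 + 0.04 + 0.04) / 3 = 0.03
```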
Probabilistic model for sequential data with latent states.
Eliminating variables by integrating over them.
Inferring human goals from behavior.
Probabilistic graphical model for structured prediction.
Diffusion model trained to remove noise step by step.
Exact likelihood generative models using invertible transforms.
Generative model that learns to reverse a gradual noise process.
Models time evolution via hidden states.
Decomposes a matrix into orthogonal factors and singular values; used in embeddings and compression.
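A short sketch with NumPy (the matrix here is an arbitrary example): the decomposition A = U diag(S) Vᵀ, and truncation to the top singular value, which gives the best low-rank approximation used for compression.

```python
import numpy as np

# Singular value decomposition: A = U @ diag(S) @ Vt, with orthonormal U and Vt.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Keeping only the largest singular value yields the best rank-1 approximation
# in the least-squares sense -- the basis of embedding compression.
A_rank1 = S[0] * np.outer(U[:, 0], Vt[0])
```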
Stored compute or algorithmic advances that enable rapid capability jumps.
The degree to which predicted probabilities match observed frequencies (e.g., predictions made with 0.8 confidence come true ~80% of the time).
Penalizes confident wrong predictions heavily; standard for classification and language modeling.
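A minimal sketch of why cross-entropy punishes confident mistakes (names are illustrative): the loss is the negative log of the probability assigned to the true class, which blows up as that probability approaches zero.

```python
import math

def cross_entropy(probs, true_idx):
    """Negative log probability assigned to the true class."""
    return -math.log(probs[true_idx])

# A confident wrong prediction costs far more than an uncertain one.
confident_wrong = cross_entropy([0.01, 0.99], 0)  # about 4.6
uncertain = cross_entropy([0.5, 0.5], 0)          # about 0.69
```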
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
Divides logits by a constant before sampling; higher values increase randomness/diversity, lower values increase determinism.
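A minimal sketch of temperature scaling (the logits are an arbitrary example): dividing logits by the temperature before the softmax sharpens the distribution when T < 1 and flattens it when T > 1.

```python
import math

def sample_probs(logits, temperature=1.0):
    """Softmax over logits divided by the temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = sample_probs(logits, temperature=0.5)  # more peaked: near-deterministic
flat = sample_probs(logits, temperature=2.0)   # flatter: more diverse samples
```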
Samples from the smallest set of tokens whose probabilities sum to at least p, adapting the set size to the context.
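A sketch of the nucleus (top-p) filtering step, assuming probabilities are already computed (names and example distributions are illustrative): tokens are ranked by probability, kept until the cumulative mass reaches p, and renormalized before sampling.

```python
def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in ranked:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}  # renormalized distribution

# A peaked distribution keeps few tokens; a flat one keeps many.
peaked = top_p_filter([0.85, 0.10, 0.03, 0.02], p=0.9)
flat = top_p_filter([0.25, 0.25, 0.25, 0.25], p=0.9)
```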
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
Exponential of average negative log-likelihood; lower means better predictive fit, not necessarily better utility.
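A minimal sketch of the definition (names are illustrative): perplexity is the exponential of the average negative log-likelihood, so a model that spreads probability uniformly over V options has perplexity exactly V.

```python
import math

def perplexity(token_probs):
    """Exp of the average negative log-likelihood of the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Uniform probability 1/50 on every token gives perplexity ~50:
# the model is "as confused as" a 50-way guess.
uniform_ppl = perplexity([1 / 50] * 10)
```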
Bayesian parameter estimation using the mode of the posterior distribution.
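A sketch of MAP estimation in the simplest conjugate case (the Beta-Bernoulli model; names and prior parameters are illustrative): the posterior over a coin's bias is Beta(a + k, b + n - k), and the MAP estimate is its mode.

```python
# MAP estimate for a Bernoulli parameter under a Beta(a, b) prior.
# Posterior: Beta(a + k, b + n - k); mode: (a + k - 1) / (a + b + n - 2).
def bernoulli_map(observations, a=2.0, b=2.0):
    k, n = sum(observations), len(observations)
    return (a + k - 1) / (a + b + n - 2)

# A weak symmetric prior pulls the estimate toward 0.5,
# relative to the raw frequency 3/5 = 0.6.
p_map = bernoulli_map([1, 1, 0, 1, 0])  # (2 + 3 - 1) / (2 + 2 + 5 - 2) = 4/7
```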
AI subfield dealing with understanding and generating human language, including syntax, semantics, and pragmatics.
Categorizing AI applications by impact and regulatory risk.
Estimating parameters by maximizing likelihood of observed data.
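A minimal worked example (names are illustrative): for a Bernoulli model, maximizing the likelihood of k successes in n trials has the closed-form solution p = k/n, the sample mean.

```python
# Closed-form MLE for a Bernoulli parameter: the sample mean maximizes
# the likelihood p^k * (1 - p)^(n - k) of k successes in n trials.
def bernoulli_mle(observations):
    return sum(observations) / len(observations)

p_hat = bernoulli_mle([1, 1, 0, 1, 0])  # 3 successes in 5 trials -> 0.6
```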
Models that define an energy landscape rather than explicit probabilities.