Results for "distribution shift"
Distribution Shift
Intermediate
Train/test environment mismatch.
Distribution shift is like practicing basketball in a gym and then having to play in a different setting, such as outdoors on a windy day. The conditions have changed, and your skills might not transfer as well. In AI, this happens when a model is trained on one type of data but then faces different data once deployed.
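The effect described above can be shown with a minimal sketch (assumptions not from the entry: synthetic 1-D Gaussian classes and a simple threshold classifier). The model's learned decision rule stays fixed while the deployment data moves, so accuracy drops:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, loc0, loc1):
    # Two 1-D Gaussian classes with unit variance; labels 0 and 1.
    x = np.concatenate([rng.normal(loc0, 1.0, n), rng.normal(loc1, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# "Training": class means at 0 and 2; the midpoint is a near-optimal threshold.
x_train, y_train = make_data(5000, 0.0, 2.0)
threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

def accuracy(x, y):
    # Predict class 1 when the feature exceeds the learned threshold.
    return ((x > threshold) == y).mean()

# Test data from the SAME distribution: the threshold still separates well.
x_iid, y_iid = make_data(5000, 0.0, 2.0)

# Shifted deployment distribution: every feature moves up by 2, so the
# learned threshold now sits below both classes and class 0 is misread.
x_shift, y_shift = make_data(5000, 2.0, 4.0)

print(f"in-distribution accuracy: {accuracy(x_iid, y_iid):.2f}")
print(f"shifted accuracy:         {accuracy(x_shift, y_shift):.2f}")
```

The shifted accuracy falls well below the in-distribution accuracy even though nothing about the model changed, which is exactly the mismatch the definition describes.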
Other results:
- Differences between training and deployed patient populations.
- A mismatch between training and deployment data distributions that can degrade model performance.
- Shift in model outputs.
- Shift in feature distribution over time.
- Sampling from an easier distribution with reweighting.
- Updated belief after observing data.
- Sum of independent variables converges to a normal distribution.
- Bayesian parameter estimation using the mode of the posterior distribution.
- Techniques that stabilize and speed up training by normalizing activations; LayerNorm is common in Transformers.
- Generates audio waveforms from spectrograms.
- Running models locally.
- Deep learning system for protein structure prediction.
- Measures divergence between true and predicted probability distributions.
- Measures how one probability distribution diverges from another.
- Models that learn to generate samples resembling training data.
- Autoencoder using probabilistic latent variables and KL regularization.
- Belief before observing data.
- How well a model performs on new data drawn from the same (or similar) distribution as training.
- Graphical model expressing factorization of a probability distribution.
- Diffusion model trained to remove noise step by step.
- Learns the score (∇ log p(x)) for generative sampling.
- Generator produces a limited variety of outputs.
- Describes likelihoods of random variable outcomes.
- Eliminating variables by integrating over them.
- Differences between training and inference conditions.
- Stochastic generation strategies that trade determinism for diversity; key knobs include temperature and nucleus sampling.
- Scales logits before sampling; higher increases randomness/diversity, lower increases determinism.
- Converts logits to probabilities by exponentiation and normalization; common in classification and LMs.
- A measure of randomness or uncertainty in a probability distribution.
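Several of the results above (temperature scaling, softmax) describe the same sampling pipeline. A minimal sketch, using hypothetical example logits that are not from any of the entries:

```python
import math

def softmax(logits, temperature=1.0):
    # Divide logits by the temperature, exponentiate, and normalize to sum to 1.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax(logits, temperature=1.0))  # moderately peaked at the largest logit
print(softmax(logits, temperature=5.0))  # flatter distribution: more random sampling
print(softmax(logits, temperature=0.2))  # sharper distribution: near-deterministic
```

Raising the temperature flattens the distribution (more randomness and higher entropy); lowering it concentrates probability on the largest logit, matching the temperature and softmax definitions above.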