Results for "uncertainty measure"
A measure of randomness or uncertainty in a probability distribution.
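For a discrete distribution this typically refers to Shannon entropy, H(p) = -Σᵢ pᵢ log pᵢ. A minimal sketch (the helper name and sample values are illustrative):

```python
import math

def entropy(probs, base=2.0):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i), in bits by default."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin is maximally uncertain (1 bit); a biased coin carries less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.47
```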
Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
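A common scoring rule is least-confidence sampling: query the points whose top predicted class probability is lowest. A sketch, assuming the model already outputs class probabilities (names and shapes are illustrative):

```python
import numpy as np

def least_confident(probs, k):
    """Return indices of the k most uncertain samples to send for labeling.

    probs: (n_samples, n_classes) predicted class probabilities.
    """
    confidence = probs.max(axis=1)      # probability of the top class
    return np.argsort(confidence)[:k]   # lowest-confidence samples first

probs = np.array([[0.95, 0.05],   # confident prediction
                  [0.55, 0.45],   # near the decision boundary -> query it
                  [0.80, 0.20]])
print(least_confident(probs, k=1))  # [1]
```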
Reduction in uncertainty achieved by observing a variable; used in decision trees and active learning.
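A worked sketch for a binary split, computing entropy over each node's empirical label distribution (the labels are made up):

```python
import math

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(parent, left, right):
    """IG = H(parent) - weighted average entropy of the child nodes."""
    n = len(parent)
    return (entropy(parent)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

# Splitting [1, 1, 0, 0] into pure halves removes all uncertainty: IG = 1 bit.
print(information_gain([1, 1, 0, 0], [1, 1], [0, 0]))  # 1.0
```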
Optimization in which the objective or constraints depend on random or unknown quantities, e.g., minimizing expected cost.
Quantifies shared information between random variables.
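For discrete variables, mutual information is I(X; Y) = Σ p(x, y) log[p(x, y) / (p(x) p(y))]. A sketch from a joint probability table (the table itself is made up):

```python
import math

def mutual_information(joint):
    """joint: dict mapping (x, y) -> p(x, y); probabilities sum to 1."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly correlated bits share one full bit of information.
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
```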
Measures how much information an observable random variable carries about unknown parameters.
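This describes Fisher information. For a single Bernoulli(p) observation it has the closed form I(p) = 1 / (p(1 - p)); the Cramer-Rao bound turns it into a floor on estimator variance. A tiny illustration:

```python
def fisher_information_bernoulli(p):
    """I(p) = 1 / (p * (1 - p)) for one Bernoulli(p) draw; by the
    Cramer-Rao bound, Var(p_hat) >= 1 / (n * I(p)) over n draws."""
    return 1.0 / (p * (1.0 - p))

print(fisher_information_bernoulli(0.5))  # 4.0: p is hardest to pin down here
print(fisher_information_bernoulli(0.9))  # ~11.1: extreme p is easier to estimate
```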
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
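One standard such measure is Cohen's kappa, which corrects raw agreement between two labelers for agreement expected by chance. A sketch (the label sequences are made up):

```python
def cohens_kappa(labels_a, labels_b):
    """(observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    chance = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                 for c in set(labels_a) | set(labels_b))
    return (observed - chance) / (1 - chance)

print(cohens_kappa([1, 1, 0, 0, 1], [1, 1, 0, 1, 1]))  # ~0.25: weak agreement
```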
Measure of spread around the mean.
Measures a model’s ability to fit random noise; used to bound generalization error.
Ability to replicate results given the same code and data; harder with distributed training and nondeterministic ops.
Updating beliefs about parameters using observed evidence and prior distributions.
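With a conjugate prior the update has a closed form. A minimal sketch for coin flips with a Beta prior (the counts are made up); the prior and posterior here are exactly the "belief before/after observing data" entries below:

```python
# Beta-Binomial conjugacy: prior Beta(a, b) plus `heads` successes in
# `n` flips gives posterior Beta(a + heads, b + n - heads).
def update_beta(a, b, heads, n):
    return a + heads, b + (n - heads)

a, b = 1, 1                            # uniform prior: no idea about the bias
a, b = update_beta(a, b, heads=7, n=10)
print(a, b)                            # posterior is Beta(8, 4)
print(a / (a + b))                     # posterior mean ~0.67
```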
Models evaluating and improving their own outputs.
Formal framework for sequential decision-making under uncertainty.
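The textbook example is a Markov decision process, solvable by dynamic programming. A value-iteration sketch on a made-up two-state MDP (all transition probabilities and rewards are illustrative):

```python
import numpy as np

# Made-up MDP: 2 states, 2 actions. P[a, s, s'] is a transition
# probability, R[s, a] an expected reward, gamma the discount factor.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
              [[0.1, 0.9], [0.8, 0.2]]])   # transitions under action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

# Value iteration: repeatedly apply the Bellman optimality backup.
V = np.zeros(2)
for _ in range(200):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
    V = Q.max(axis=1)

print(V)                  # optimal state values
print(Q.argmax(axis=1))   # greedy (optimal) policy per state
```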
Framework for identifying, measuring, and mitigating model risks.
Autoencoder using probabilistic latent variables and KL regularization.
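The KL regularizer has a closed form when the encoder outputs a diagonal Gaussian. A sketch of just that term (function and variable names are illustrative):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ) summed over latent dimensions,
    with logvar = log(sigma^2); this is the VAE's regularization term."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# A latent code that already matches the prior incurs zero penalty.
print(kl_to_standard_normal(np.zeros(8), np.zeros(8)))  # 0.0
```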
Decomposing goals into sub-tasks.
Variable whose values depend on chance.
Updated belief after observing data.
Belief before observing data.
Maintaining a model's aligned behavior when conditions shift away from those seen during training.
Train/test environment mismatch.
Control that remains stable under model uncertainty.
Differences between simulated and real physics.
Estimating robot position within a map.
Acting to minimize surprise or free energy.
Inferring and aligning with human preferences.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
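A concrete instance, assuming squared-error loss with an L2 regularization penalty (all names and numbers are illustrative):

```python
import numpy as np

def objective(theta, X, y, lam=0.01):
    """J(theta) = mean((X @ theta - y)^2) + lam * ||theta||^2."""
    residuals = X @ theta - y
    return np.mean(residuals**2) + lam * np.sum(theta**2)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([0.1, 0.7])
theta = np.array([0.5, -0.2])        # fits y exactly in this toy case
print(objective(theta, X, y))        # 0.0029: only the L2 penalty remains
```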
How well a model performs on new data drawn from the same (or a similar) distribution as the training data.
Training objective where the model predicts the next token given previous tokens (causal modeling).
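The loss is cross-entropy between each position's predicted distribution and the token one step to the right. A sketch assuming PyTorch tensors (names and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits, tokens):
    """logits: (batch, seq, vocab) model outputs; tokens: (batch, seq) ids.
    Position t's logits are scored against token t + 1."""
    shift_logits = logits[:, :-1, :]   # predictions for the next position
    shift_labels = tokens[:, 1:]       # the tokens actually observed there
    return F.cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)),
                           shift_labels.reshape(-1))

logits = torch.randn(2, 5, 100)        # batch of 2, length 5, vocab of 100
tokens = torch.randint(0, 100, (2, 5))
print(causal_lm_loss(logits, tokens))  # ~log(100) for random logits
```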
Measures divergence between true and predicted probability distributions.
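Cross-entropy is the usual such measure in classification; for a one-hot target it reduces to the negative log-likelihood of the true class. A minimal sketch (values are illustrative):

```python
import math

def cross_entropy(p_true, q_pred):
    """H(p, q) = -sum_i p_i * log(q_i); large when the model is
    confidently wrong about a likely outcome."""
    return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

print(cross_entropy([1, 0, 0], [0.7, 0.2, 0.1]))  # ~0.36 nats
```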