Results for "belief divergence"
Measures how one probability distribution diverges from another.
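A minimal sketch of this divergence for discrete distributions, using the standard definition KL(P‖Q) = Σᵢ pᵢ log(pᵢ/qᵢ); the function name `kl_divergence` is illustrative, not from the source:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions.

    Terms with p_i == 0 contribute 0 by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
# Note the asymmetry: KL(P || Q) generally differs from KL(Q || P),
# which is why it is a divergence rather than a distance.
print(kl_divergence(p, q))
print(kl_divergence(q, p))
```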
Updated belief after observing data.
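The belief update follows Bayes' rule, posterior = likelihood × prior / evidence. A small worked example with made-up numbers (a hypothetical diagnostic test; the rates below are illustrative only):

```python
# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
prior = 0.01               # P(disease): belief before the test
sensitivity = 0.99         # P(positive | disease)
false_positive = 0.05      # P(positive | no disease)

# Total probability of observing a positive result
evidence = sensitivity * prior + false_positive * (1 - prior)

# Updated belief after observing a positive result
posterior = sensitivity * prior / evidence
print(posterior)
```

Even a highly sensitive test yields a modest posterior here, because the prior is low.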
Simplified Boltzmann Machine with bipartite structure.
Gradients grow too large during backpropagation, causing training to diverge; mitigated by gradient clipping, normalization, and careful initialization.
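One of the listed mitigations, clipping by global norm, can be sketched as follows (a plain-Python illustration; the function name `clip_by_norm` is an assumption, though the technique matches e.g. PyTorch's `clip_grad_norm_`):

```python
import math

def clip_by_norm(grad, max_norm):
    """Rescale a gradient vector so its L2 norm never exceeds max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in grad]
    return grad

g = [3.0, 4.0]                    # L2 norm is 5
clipped = clip_by_norm(g, 1.0)    # rescaled to unit norm, direction preserved
small = clip_by_norm([0.1, 0.1], 1.0)  # already under the cap: unchanged
```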
Gradually increasing learning rate at training start to avoid divergence.
Differences between training and inference conditions.
Graphical model expressing factorization of a probability distribution.
Belief before observing data.
Understanding that objects continue to exist when they are unseen.
Inferring the agent’s internal state from noisy sensor data.
Iterative method that updates parameters in the direction of negative gradient to minimize loss.
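The update rule is x ← x − η ∇f(x). A self-contained sketch minimizing a simple quadratic (function names and the example objective are illustrative):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step in the direction of the negative gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# Each step multiplies the error by (1 - 2 * lr), so iterates converge to 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```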
Measures divergence between true and predicted probability distributions.
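For discrete distributions this is H(P, Q) = −Σᵢ pᵢ log qᵢ; with a one-hot label it reduces to the negative log probability assigned to the true class. A minimal sketch (the function name is an assumption):

```python
import math

def cross_entropy(true_dist, pred_dist):
    """H(P, Q) = -sum_i p_i * log(q_i); lower is better."""
    return -sum(p * math.log(q) for p, q in zip(true_dist, pred_dist) if p > 0)

# One-hot label: loss is just -log of the probability on the true class.
loss = cross_entropy([0, 1, 0], [0.1, 0.7, 0.2])
```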
Probabilistic energy-based neural network with hidden variables.
Learns the score (∇ log p(x)) for generative sampling.
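For a distribution with known density the score has a closed form, which is what such models learn to approximate from data. A sketch for the Gaussian case (the function name is illustrative):

```python
def gaussian_score(x, mu=0.0, sigma=1.0):
    """Score of a Gaussian density: grad_x log p(x) = -(x - mu) / sigma^2.

    The score points toward the mode, which is why following it
    (plus noise, as in Langevin dynamics) can generate samples.
    """
    return -(x - mu) / sigma ** 2
```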
Generative model that learns to reverse a gradual noise process.
Autoencoder using probabilistic latent variables and KL regularization.
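The KL regularizer mentioned here has a closed form when the encoder outputs a diagonal Gaussian and the prior is a standard normal: KL = ½ Σ (σ² + μ² − 1 − log σ²). A sketch (the function name is an assumption):

```python
import math

def kl_gaussian_standard(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims.

    Using log-variance keeps the encoder's output unconstrained in sign.
    """
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# Zero when the latent posterior already matches the standard-normal prior.
baseline = kl_gaussian_standard([0.0, 0.0], [0.0, 0.0])
```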
Generator produces only a limited variety of outputs, collapsing onto a few modes of the data distribution.
Shift in feature distribution over time.
Sensitivity of a function to input perturbations.
Model optimizes objectives misaligned with human values.
AI used without governance approval.
Performance drop when moving from simulation to reality.
Learning policies from expert demonstrations.
Groups adopting more extreme positions after discussion among like-minded members.
Training a smaller “student” model to mimic a larger “teacher,” often improving efficiency while retaining performance.
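A key ingredient in distillation is training the student on the teacher's temperature-softened output distribution rather than hard labels. A minimal sketch of that softening step (names and logits are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T spreads probability mass more
    evenly, exposing the teacher's 'dark knowledge' about wrong classes."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [4.0, 1.0, 0.5]
hard_targets = softmax(teacher_logits, temperature=1.0)
soft_targets = softmax(teacher_logits, temperature=4.0)  # softer distribution
```

The student is then trained to match `soft_targets` (often mixed with the true labels), which carries more information per example than a one-hot target.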