Results for "goal divergence"
Measures how one probability distribution diverges from another.
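This entry matches the Kullback–Leibler divergence. A minimal sketch of the discrete form, D_KL(P‖Q) = Σᵢ pᵢ log(pᵢ/qᵢ), with illustrative distributions:

```python
import math

def kl_divergence(p, q):
    """Discrete KL divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i).

    Assumes p and q are aligned probability vectors with q_i > 0
    wherever p_i > 0; terms with p_i == 0 contribute nothing.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions diverge by zero; unequal ones by a positive amount.
p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, p))  # 0.0
print(kl_divergence(p, q))  # positive
```

Note the asymmetry: D_KL(P‖Q) generally differs from D_KL(Q‖P), which is why it is a divergence rather than a distance.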
Finding routes from start to goal.
Optimal pathfinding algorithm.
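This entry reads like A* search. A sketch on a 4-connected grid, assuming the standard formulation with an admissible Manhattan-distance heuristic (the grid and cost model here are illustrative):

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of 0 (free) / 1 (wall) cells.

    Returns the number of steps on a shortest path from start to
    goal, or None if unreachable. With an admissible heuristic
    (Manhattan distance on a unit-cost grid), the result is optimal.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry; a cheaper path was found already
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the wall forces a detour
```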
Planning via artificial force fields.
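A minimal sketch of one planning step on an artificial potential field, assuming the common attractive-quadratic plus repulsive-barrier formulation; the gain constants and influence radius are illustrative, not tuned:

```python
import math

def potential_step(pos, goal, obstacle, k_att=1.0, k_rep=1.0, d0=1.0, lr=0.1):
    """One descent step on U = attractive + repulsive potential.

    Attractive: 0.5 * k_att * |pos - goal|^2 pulls toward the goal.
    Repulsive: active only within distance d0 of the obstacle,
    pushing away with force that blows up near it.
    """
    # Attractive force: negative gradient of the quadratic well.
    fx = -k_att * (pos[0] - goal[0])
    fy = -k_att * (pos[1] - goal[1])
    # Repulsive force: only inside the obstacle's influence radius d0.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 0 < d < d0:
        mag = k_rep * (1.0 / d - 1.0 / d0) / d**3
        fx += mag * dx
        fy += mag * dy
    return (pos[0] + lr * fx, pos[1] + lr * fy)
```

With no obstacle nearby the step moves straight toward the goal; near the obstacle the repulsive term dominates. (Local minima between competing forces are the method's classic failure mode.)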
Gradients grow too large, causing divergence; mitigated by clipping, normalization, careful init.
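Of the mitigations listed, clipping is the simplest to show. A sketch of clipping by global L2 norm (gradients flattened to a plain list for illustration):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale gradient values so their global L2 norm is at most
    max_norm; leaves them unchanged if already within the bound."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= max_norm or norm == 0.0:
        return list(grads)
    scale = max_norm / norm
    return [g * scale for g in grads]

print(clip_by_global_norm([3.0, 4.0], 1.0))  # norm 5 rescaled to norm 1
print(clip_by_global_norm([0.1, -0.1], 10.0))  # already small: unchanged
```

Rescaling the whole vector (rather than clipping each component) preserves the gradient's direction.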
Gradually increasing the learning rate at the start of training to avoid divergence.
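Warmup schedules vary; a sketch of the common linear ramp, with illustrative defaults:

```python
def warmup_lr(step, base_lr=1e-3, warmup_steps=100):
    """Linear warmup: ramp the learning rate from 0 to base_lr over
    the first warmup_steps, then hold it constant."""
    if step >= warmup_steps:
        return base_lr
    return base_lr * step / warmup_steps

print(warmup_lr(0))    # starts at zero
print(warmup_lr(50))   # halfway through the ramp
print(warmup_lr(500))  # past warmup: full base_lr
```

In practice the post-warmup phase is usually a decay schedule (cosine, step, etc.) rather than a constant.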
Differences between training and inference conditions.
Tendency of agents to pursue resources regardless of their final goal.
Optimizing continuous action sequences.
Goals that are useful regardless of the final objective.
Iterative method that updates parameters in the direction of negative gradient to minimize loss.
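The update rule is x ← x − η∇f(x). A minimal 1-D sketch on an illustrative quadratic:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a 1-D function by repeatedly stepping opposite its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2*(x - 3), minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges close to 3.0
```

Too large a learning rate makes the iterates overshoot and diverge, which is exactly the failure the warmup and clipping entries above address.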
Measures divergence between true and predicted probability distributions.
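This entry describes cross-entropy, H(P, Q) = −Σᵢ pᵢ log qᵢ. A sketch with one-hot true labels (the eps guard is a common implementation detail, not part of the definition):

```python
import math

def cross_entropy(p_true, q_pred, eps=1e-12):
    """H(P, Q) = -sum_i p_i * log(q_i); eps guards against log(0)."""
    return -sum(p * math.log(q + eps) for p, q in zip(p_true, q_pred))

# A confident correct prediction scores lower than an uncertain one.
print(cross_entropy([1.0, 0.0], [0.9, 0.1]))  # ~0.105
print(cross_entropy([1.0, 0.0], [0.5, 0.5]))  # ~0.693
```

Cross-entropy equals the true distribution's entropy plus the KL divergence from the prediction, which ties this entry to the divergence at the top of the list.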
Probabilistic energy-based neural network with hidden variables.
Simplified Boltzmann machine with a bipartite structure (no connections within the visible or hidden layer).
Generative model that learns to reverse a gradual noise process.
Learns the score (∇ log p(x)) for generative sampling.
Autoencoder using probabilistic latent variables and KL regularization.
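The KL regularizer has a closed form when the prior is standard normal and the posterior is a diagonal Gaussian. A sketch of that term alone; in a real VAE, `mu` and `logvar` would be encoder outputs:

```python
import math

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions:
    0.5 * sum(mu^2 + exp(logvar) - logvar - 1)."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0 for m, lv in zip(mu, logvar))

# A latent matching the prior (mu = 0, logvar = 0, i.e. var = 1) pays no penalty.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # 0.0
print(gaussian_kl([1.0, 0.0], [0.0, 0.0]))  # 0.5
```

The full training objective adds a reconstruction loss to this term (the evidence lower bound).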
Generator produces limited variety of outputs.
Shift in feature distribution over time.
Sensitivity of a function to input perturbations.
Model optimizes objectives misaligned with human values.
AI used without governance approval.
Performance drop when moving from simulation to reality.
Learning policies from expert demonstrations.
Groups adopting extreme positions.
Training a smaller “student” model to mimic a larger “teacher,” often improving efficiency while retaining performance.
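A sketch of the soft-target part of distillation: cross-entropy between temperature-softened teacher and student distributions. The temperature value is illustrative, and in practice this loss is usually mixed with a hard-label term, which is omitted here:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; larger T flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy from the softened teacher distribution to the
    softened student distribution."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# The loss is minimized when the student's logits match the teacher's.
print(distillation_loss([2.0, 1.0, 0.0], [2.0, 1.0, 0.0]))
print(distillation_loss([0.0, 1.0, 2.0], [2.0, 1.0, 0.0]))  # mismatched: larger
```

The softened targets carry the teacher's relative preferences over wrong classes, which is the extra signal the student learns from.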
Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Techniques that discourage overly complex solutions to improve generalization (reduce overfitting).
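L2 (ridge) penalties are the simplest such technique to show: add λ·Σw² to the data loss so large weights cost extra. The λ value here is illustrative:

```python
def l2_regularized_loss(data_loss, weights, lam=0.01):
    """Total loss = data loss + lam * sum(w^2), the L2 (ridge) penalty
    that discourages large weights."""
    return data_loss + lam * sum(w * w for w in weights)

# At equal data loss, the penalized objective grows with weight magnitude.
print(l2_regularized_loss(1.0, [0.1, 0.1]))
print(l2_regularized_loss(1.0, [10.0, 10.0]))
```

L1 penalties (λ·Σ|w|) work the same way but push weights exactly to zero, giving sparse solutions.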
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
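A sketch of the ratio and lag features the entry mentions, over two hypothetical raw series; the feature names and choices are illustrative:

```python
def engineer_features(prices, volumes):
    """Derive per-step features from two raw, aligned series.

    Starting at t = 1 so the lag exists:
      price_to_volume -- ratio feature
      price_lag_1     -- previous step's price (lag feature)
      price_change    -- first difference vs. the previous step
    """
    rows = []
    for t in range(1, len(prices)):
        rows.append({
            "price_to_volume": prices[t] / volumes[t],
            "price_lag_1": prices[t - 1],
            "price_change": prices[t] - prices[t - 1],
        })
    return rows

feats = engineer_features([10.0, 12.0, 11.0], [100.0, 120.0, 110.0])
print(feats[0])  # features for t = 1
```

Aggregations (rolling means, group statistics) follow the same pattern: derived columns computed from raw ones before modeling.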
Average of squared residuals; common regression objective.
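The definition translates directly to code, MSE = (1/n)·Σᵢ(yᵢ − ŷᵢ)²:

```python
def mse(y_true, y_pred):
    """Mean squared error: the average of squared residuals."""
    residuals = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return sum(residuals) / len(residuals)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 for a perfect fit
print(mse([1.0, 2.0, 3.0], [2.0, 2.0, 5.0]))  # (1 + 0 + 4) / 3
```

Squaring weights large residuals heavily, which makes MSE sensitive to outliers compared with mean absolute error.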