Results for "dynamics learning"
Model predictive control (MPC): optimizes future actions by planning through a model of the dynamics.
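A minimal sketch of the idea as random-shooting MPC; the dynamics function f and the reward function are stand-ins the caller supplies:

```python
import numpy as np

def plan_mpc(state, f, reward, horizon=10, n_candidates=100, rng=None):
    """Random-shooting MPC: sample candidate action sequences, roll each
    out through the dynamics model f, return the first action of the best."""
    rng = rng or np.random.default_rng(0)
    best_return, best_first = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)  # assumes a 1-D action space
        s, total = state, 0.0
        for a in actions:
            s = f(s, a)            # model predicts the next state
            total += reward(s, a)
        if total > best_return:
            best_return, best_first = total, actions[0]
    return best_first              # execute it, observe, and replan next step
```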
Model-free reinforcement learning: RL that learns values or policies without an explicit dynamics model.
Dynamics model: a learned model of the environment's dynamics.
Supervised learning: learning a function from labeled input-output pairs, optimizing predictive performance on unseen inputs.
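A toy sketch, with ordinary least squares standing in for the learned function:

```python
import numpy as np

# Toy supervised regression: fit on labeled pairs, evaluate on held-out inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # inputs
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)     # labels

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)  # learn from labeled data
mse = np.mean((X_test @ w_hat - y_test) ** 2)              # performance on unseen inputs
print(f"held-out MSE: {mse:.4f}")
```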
Unsupervised learning: learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling the data distribution.
Self-supervised learning: learning from pseudo-labels constructed from the data itself (e.g., next-token prediction, masked modeling), without manual annotation.
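A sketch of how pseudo-labels fall out of raw text under next-token prediction (whitespace tokenization is a simplification):

```python
def next_token_pairs(tokens, context=3):
    """Build (input, pseudo-label) pairs from raw text alone:
    each context window is the input; the following token is the label."""
    return [(tokens[i:i + context], tokens[i + context])
            for i in range(len(tokens) - context)]

tokens = "the model predicts the next token".split()
for x, y in next_token_pairs(tokens):
    print(x, "->", y)
```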
Reinforcement learning: a paradigm in which an agent interacts with an environment and learns to choose actions that maximize cumulative reward.
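A minimal sketch of the interaction loop using tabular Q-learning; the env object and its reset()/step() API are assumptions:

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, eps=0.1, rng=None):
    """Tabular Q-learning: act, observe reward, and nudge value estimates
    toward the reward plus the discounted best next value."""
    rng = rng or np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = env.step(a)   # assumed env API: (next_state, reward, done)
            Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
            s = s2
    return Q
```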
Online learning: data arrives sequentially and the model updates continuously, often under changing distributions.
Transfer learning: reusing knowledge from a source task or domain to improve learning on a target task or domain, typically via pretrained models.
Representation learning: automatically learning internal features (latent variables) that capture structure useful for downstream tasks.
Learning rate scheduling: adjusting the learning rate over training to improve convergence.
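A sketch of one common schedule, step decay (the constants are placeholders):

```python
def step_decay(base_lr, step, decay=0.5, every=10_000):
    """Halve the learning rate every `every` optimizer steps."""
    return base_lr * decay ** (step // every)

# e.g. step_decay(1e-3, 25_000) -> 2.5e-4
```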
Off-policy learning: learning from data generated by a policy other than the one being optimized.
On-policy learning: learning only from data generated by the current policy.
Imitation learning: learning policies from expert demonstrations.
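A sketch of the simplest variant, behavioral cloning, as least-squares regression on stand-in demonstration arrays:

```python
import numpy as np

def behavioral_cloning(expert_states, expert_actions):
    """Fit a linear policy a = s @ W by least squares on expert demonstrations."""
    W, *_ = np.linalg.lstsq(expert_states, expert_actions, rcond=None)
    return lambda s: s @ W   # the cloned policy

rng = np.random.default_rng(0)
S = rng.normal(size=(500, 4))      # expert-visited states (stand-in data)
A = S @ rng.normal(size=(4, 2))    # expert actions (stand-in expert)
policy = behavioral_cloning(S, A)
print(policy(S[0]))                # imitated action for one state
```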
Continual learning: learning new tasks over time without catastrophic forgetting.
Dynamical system: equations governing how a system's state changes over time.
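A sketch integrating one such system, a frictionless unit-length pendulum, with explicit Euler steps:

```python
import numpy as np

def euler_step(state, deriv, dt=0.01):
    """Advance a dynamical system ds/dt = deriv(s) by one Euler step."""
    return state + dt * deriv(state)

def pendulum(s):
    theta, omega = s   # angle, angular velocity
    return np.array([omega, -9.81 * np.sin(theta)])

s = np.array([0.5, 0.0])
for _ in range(1000):  # simulate 10 seconds
    s = euler_step(s, pendulum)
print(s)
```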
Dynamics (mechanics): the study of motion that accounts for forces and mass.
Environment modeling: modeling an agent's interactions with its environment.
Rigid body dynamics: the motion of solid objects under applied forces.
Transition model (forward model): predicts the next state given the current state and action.
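A sketch that fits a linear transition model to logged transitions; the data-generating dynamics here are synthetic stand-ins:

```python
import numpy as np

def fit_transition_model(states, actions, next_states):
    """Fit a linear transition model s' ~ [s, a] @ M from logged transitions."""
    X = np.hstack([states, actions])
    M, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return lambda s, a: np.concatenate([s, a]) @ M

rng = np.random.default_rng(0)
S = rng.normal(size=(1000, 2))
A = rng.normal(size=(1000, 1))
S2 = S @ np.array([[1.0, 0.1], [0.0, 0.9]]) + A @ np.array([[0.0, 0.05]])

model = fit_transition_model(S, A, S2)
print(model(np.array([1.0, 0.0]), np.array([0.5])))  # predicted next state
```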
Latent dynamics (world model): modeling environment evolution in a learned latent space.
Emergent behavior: collective behavior arising without central control.
Feature engineering: designing input features that expose useful structure (e.g., ratios, lags, aggregations); often crucial outside deep learning.
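A sketch of the three examples just named, on a toy pandas frame:

```python
import pandas as pd

df = pd.DataFrame({"price": [10, 12, 11, 15], "volume": [100, 80, 90, 120]})

# Expose structure the raw columns hide:
df["price_per_vol"] = df["price"] / df["volume"]       # ratio
df["price_lag1"] = df["price"].shift(1)                # lag
df["price_roll_mean"] = df["price"].rolling(2).mean()  # aggregation
print(df)
```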
Vanishing gradients: gradients shrink as they propagate backward through layers, slowing learning in the early layers; mitigated by ReLU activations, residual connections, and normalization.
Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.
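A sketch of the reward-model half, using the Bradley-Terry preference loss on stand-in reward scores (the policy-optimization step is omitted):

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry loss for fitting a reward model on preference pairs:
    push the chosen response's reward above the rejected one's."""
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected)))))

# Stand-in reward-model outputs for 4 preference pairs:
r_chosen   = np.array([1.2, 0.3, 2.0, 0.8])
r_rejected = np.array([0.5, 0.9, 1.1, 0.2])
print(preference_loss(r_chosen, r_rejected))  # lower when chosen > rejected
```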
Bias: systematic error introduced by the simplifying assumptions of a learning algorithm.
Information gain: the reduction in uncertainty achieved by observing a variable; used for split selection in decision trees and query selection in active learning.
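A sketch computing information gain for a discrete split:

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, feature):
    """Entropy reduction from splitting `labels` by a discrete `feature`."""
    total = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        total -= mask.mean() * entropy(labels[mask])
    return total

y = np.array([0, 0, 1, 1, 1, 0])
x = np.array([0, 0, 1, 1, 1, 1])
print(information_gain(y, x))  # ~0.459 bits
```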
Learning rate warmup: gradually increasing the learning rate at the start of training to avoid divergence.
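A sketch of linear warmup (the constants are placeholders):

```python
def warmup_lr(step, base_lr=3e-4, warmup_steps=1000):
    """Linear warmup: ramp the learning rate from 0 to base_lr, then hold
    (a decay schedule would typically take over afterward)."""
    return base_lr * min(1.0, step / warmup_steps)

# warmup_lr(100) -> 3e-5, warmup_lr(1000) -> 3e-4
```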
Inductive bias: built-in assumptions that guide how efficiently a model learns and how well it generalizes.
Actor-critic: combines value estimation (the critic) with policy learning (the actor).
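A sketch of a one-step actor-critic update with linear features and a softmax policy; the shapes, learning rates, and placement inside a training loop are illustrative:

```python
import numpy as np

def actor_critic_step(phi, a, r, phi_next, done, theta, w,
                      gamma=0.99, lr_actor=1e-2, lr_critic=1e-1):
    """One-step actor-critic with linear features `phi`:
    the critic's TD error scores the action the actor just took."""
    v = w @ phi
    v_next = 0.0 if done else w @ phi_next
    td_error = r + gamma * v_next - v            # critic's evaluation signal
    w += lr_critic * td_error * phi              # critic: move V(s) toward target

    logits = theta @ phi                         # actor: softmax policy over actions
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_log_pi = -np.outer(probs, phi)          # gradient of log pi(a|s)
    grad_log_pi[a] += phi
    theta += lr_actor * td_error * grad_log_pi   # actor: reinforce well-scored actions
    return theta, w
```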