Results for "learning like humans"
Methods like Adam that adjust learning rates dynamically during training.
Training with a small labeled dataset plus a larger unlabeled dataset, leveraging assumptions like smoothness/cluster structure.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
Humans assist or override autonomous behavior.
Ensuring robots do not harm humans.
AI capable of performing most intellectual tasks humans can.
Designing AI to cooperate with humans and each other.
A table summarizing classification outcomes, foundational for metrics like precision, recall, and specificity.
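As a worked sketch (the counts below are hypothetical), these metrics fall directly out of the four cells of a binary confusion matrix:

```python
# Minimal sketch: precision, recall, and specificity from a binary confusion matrix.
# The counts are hypothetical placeholders.
tp, fp, fn, tn = 40, 10, 5, 45  # true/false positives and negatives

precision = tp / (tp + fp)    # of predicted positives, how many were correct
recall = tp / (tp + fn)       # of actual positives, how many were found
specificity = tn / (tn + fp)  # of actual negatives, how many were correctly rejected

print(f"precision={precision:.2f} recall={recall:.2f} specificity={specificity:.2f}")
```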
Generating human-like speech from text.
Privacy risk analysis under GDPR-like laws.
Human-like understanding of physical behavior.
Learning a function from input-output pairs (labeled data), optimizing performance on predicting outputs for unseen inputs.
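A minimal sketch of that loop, assuming a linear model and synthetic data (both are illustrative choices, not implied by the entry):

```python
import numpy as np

# Supervised learning sketch: fit a function to labeled (input, output) pairs,
# then evaluate it on inputs not seen during training. Data here is synthetic.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(100, 1))
y_train = 3.0 * X_train[:, 0] + 0.5 + rng.normal(0, 0.1, size=100)  # noisy linear target

# Least-squares fit of y ~ w*x + b (the "learned function").
A = np.hstack([X_train, np.ones((100, 1))])
w, b = np.linalg.lstsq(A, y_train, rcond=None)[0]

# Predict outputs for unseen inputs and measure the error.
X_test = rng.uniform(-1, 1, size=(20, 1))
y_test = 3.0 * X_test[:, 0] + 0.5
mse = np.mean((w * X_test[:, 0] + b - y_test) ** 2)
print(f"learned w={w:.2f}, b={b:.2f}, test MSE={mse:.4f}")
```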
Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Learning from data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.
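A tiny sketch of how next-token prediction builds pseudo-labels from raw text with no manual annotation; the whitespace "tokenizer" is a placeholder for illustration:

```python
# Build pseudo-labels from raw text: each position's target is simply the next token.
text = "the cat sat on the mat"
tokens = text.split()  # toy whitespace "tokenizer"

pairs = [(tokens[:i + 1], tokens[i + 1]) for i in range(len(tokens) - 1)]
for context, target in pairs:
    print(context, "->", target)
```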
A learning paradigm where an agent interacts with an environment and learns to choose actions to maximize cumulative reward.
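A minimal sketch of that interaction loop, assuming a toy chain environment and tabular Q-learning (both are illustrative choices, not named in the entry):

```python
import random

# Toy "chain" environment: states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: estimate action values, act epsilon-greedily, and update
# each value toward reward plus the discounted best value of the next state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

def pick_action(s):
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(200):
    s, done = 0, False
    while not done:
        a = pick_action(s)
        s2, r, done = step(s, a)
        target = r + gamma * max(Q[s2]) * (not done)
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# Greedy policy for the non-terminal states (should all be "right").
print([("left", "right")[q.index(max(q))] for q in Q[:GOAL]])
```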
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Adjusting learning rate over training to improve convergence.
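One common choice is cosine decay; the sketch below uses illustrative step counts and rates:

```python
import math

def cosine_schedule(step, total_steps, lr_max=1e-3, lr_min=1e-5):
    """Cosine decay from lr_max to lr_min over total_steps (a common schedule)."""
    progress = min(step, total_steps) / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# The learning rate shrinks smoothly as training progresses.
print([round(cosine_schedule(s, 100), 6) for s in (0, 25, 50, 75, 100)])
```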
Learning from data generated by a policy other than the one currently being improved.
Learning only from data generated by the current policy.
Learning policies from expert demonstrations.
Learning without catastrophic forgetting.
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
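A small sketch of such derived features on a hypothetical daily sales table (the column names are made up for illustration):

```python
import pandas as pd

# Hypothetical daily sales data; column names are illustrative only.
df = pd.DataFrame({
    "revenue": [100, 120, 90, 150, 130, 170],
    "visits":  [50,  60,  45,  70,  65,  80],
})

# Ratio feature: revenue per visit.
df["revenue_per_visit"] = df["revenue"] / df["visits"]
# Lag feature: the previous day's revenue.
df["revenue_lag1"] = df["revenue"].shift(1)
# Aggregation feature: 3-day rolling mean of revenue.
df["revenue_roll3"] = df["revenue"].rolling(window=3).mean()

print(df)
```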
Gradients shrink as they propagate backward through layers, slowing learning in early layers; mitigated by ReLU activations, residual connections, and normalization.
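A quick numerical sketch of the effect, assuming a deep stack of sigmoid activations as an illustrative worst case:

```python
import math

# Backprop multiplies local derivatives layer by layer; a sigmoid's derivative
# is at most 0.25, so the product can shrink rapidly with depth.
def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

factor = 1.0
for _ in range(20):
    factor *= sigmoid_grad(0.0)  # 0.25, the sigmoid's steepest slope
print(f"gradient factor after 20 sigmoid layers: {factor:.2e}")  # ~9.1e-13
```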
Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.
Systematic error introduced by simplifying assumptions in a learning algorithm.
Reduction in uncertainty achieved by observing a variable; used in decision trees and active learning.
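A minimal sketch of the computation for a binary split; the labels are made up:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Reduction in entropy from splitting `parent` into `left` and `right`."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# Hypothetical labels before and after a candidate split.
parent = [1, 1, 1, 0, 0, 0, 1, 0]
left, right = [1, 1, 1, 1], [0, 0, 0, 0]      # a perfect split
print(information_gain(parent, left, right))  # -> 1.0 bit
```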
Gradually increasing learning rate at training start to avoid divergence.
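A minimal sketch of linear warmup; the step count and base rate are placeholders:

```python
def warmup_lr(step, warmup_steps=1000, base_lr=3e-4):
    """Linearly ramp the learning rate from 0 to base_lr over the first warmup_steps."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

print([warmup_lr(s) for s in (0, 499, 999, 5000)])
```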
Built-in assumptions in a model or learning algorithm that shape how efficiently it learns and how it generalizes.