Results for "learning like humans"
Humans assist or override autonomous behavior.
Designing AI systems to cooperate with humans and with each other.
Ensuring robots do not harm humans.
Human-like understanding of physical behavior.
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
Tendency to trust automated suggestions even when incorrect; mitigated by UI design, training, and checks.
AI capable of performing most intellectual tasks humans can.
Gradual, incremental growth in capabilities rather than sudden jumps.
Generating human-like speech from text.
Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Controls the size of each parameter update; a rate that is too high causes divergence, while one that is too low trains slowly or gets stuck.
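A minimal sketch of this trade-off, using plain gradient descent on f(x) = x², whose gradient is 2x (the function and starting point are illustrative):

```python
# Gradient descent on f(x) = x**2; the learning rate scales every update.
def gradient_descent(lr, steps=50, x0=5.0):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # update = learning rate * gradient
    return x

small = gradient_descent(lr=0.01)  # converges, but slowly
good = gradient_descent(lr=0.1)    # converges quickly toward 0
large = gradient_descent(lr=1.1)   # overshoots and diverges

print(abs(small), abs(good), abs(large))
```

With lr=1.1 each step multiplies x by (1 - 2.2) = -1.2, so the iterate oscillates with growing magnitude instead of settling.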
Training with a small labeled dataset plus a larger unlabeled dataset, leveraging assumptions like smoothness/cluster structure.
Optimization methods, such as Adam, that adjust per-parameter learning rates dynamically during training.
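A compact sketch of the Adam update rule with its standard default hyperparameters, again minimizing the illustrative quadratic f(x) = x²:

```python
import math

# Adam maintains running estimates of the gradient's first moment (m) and
# second moment (v); the effective step adapts to gradient magnitude.
def adam(x=5.0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = 2 * x                            # gradient of x**2
        m = beta1 * m + (1 - beta1) * g      # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g  # second-moment estimate
        m_hat = m / (1 - beta1 ** t)         # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

print(abs(adam()))  # close to the minimum at 0
```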
Deep learning system for protein structure prediction.
AI limited to specific domains.
Configuration choices that are set rather than learned from the data (or not typically learned), governing training or architecture.
Activation max(0, x); improves gradient flow and training speed in deep nets.
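The definition above is short enough to write out directly; a sketch of the activation and its subgradient:

```python
# ReLU passes positive inputs through unchanged and zeroes the rest.
def relu(x):
    return max(0.0, x)

def relu_grad(x):
    # Subgradient: 1 for positive inputs, 0 otherwise, so gradients flow
    # through active units without saturating (unlike sigmoid/tanh).
    return 1.0 if x > 0 else 0.0

print([relu(v) for v in (-2.0, 0.0, 3.0)])  # → [0.0, 0.0, 3.0]
```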
Ensuring AI systems pursue intended human goals.
Rules and controls around generation (filters, validators, structured outputs) to reduce unsafe or invalid behavior.
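A hypothetical sketch of such controls, combining a structured-output check with a simple content filter; the function name and blocklist are illustrative, not a real library's API:

```python
import json

# Illustrative blocklist; a real system would use far richer classifiers.
BLOCKLIST = {"ssn", "password"}

def validate_reply(raw: str) -> dict:
    data = json.loads(raw)  # structured-output check: must be valid JSON
    if not isinstance(data.get("answer"), str):
        raise ValueError("missing 'answer' string")
    lowered = data["answer"].lower()
    if any(term in lowered for term in BLOCKLIST):
        raise ValueError("blocked content")  # content filter
    return data

print(validate_reply('{"answer": "Paris"}'))  # → {'answer': 'Paris'}
```

Rejecting invalid or disallowed outputs before they reach downstream systems is the core pattern; real guardrail stacks layer several such validators.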
Running models locally on a device rather than on remote servers.
Control shared between a human operator and an autonomous agent.
A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
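A toy illustration of the closeness property, with hand-picked vectors (not learned embeddings) and cosine similarity as the geometric measure:

```python
import math

# Cosine similarity: dot product of the vectors divided by their norms.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

cat = [0.9, 0.8, 0.1]      # illustrative vectors: semantically similar
kitten = [0.85, 0.75, 0.2]  # items point in nearby directions
car = [0.1, 0.2, 0.9]

print(cosine(cat, kitten) > cosine(cat, car))  # → True
```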
A table summarizing classification outcomes, foundational for metrics like precision, recall, and specificity.
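How those metrics fall out of the table's four cells, with made-up counts for illustration:

```python
# Confusion-matrix counts: true/false positives and negatives (illustrative).
tp, fp, fn, tn = 40, 10, 5, 45

precision = tp / (tp + fp)    # of predicted positives, how many were right
recall = tp / (tp + fn)       # of actual positives, how many were found
specificity = tn / (tn + fp)  # of actual negatives, how many were rejected

print(precision, recall, specificity)  # 0.8, ~0.889, ~0.818
```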
Automated detection/prevention of disallowed outputs (toxicity, self-harm, illegal instructions, etc.).
An RNN variant using gates to mitigate vanishing gradients and capture longer context.
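A heavily simplified single-unit sketch of the gating idea; all gates share one weight here purely for brevity, whereas a real LSTM learns separate weight matrices per gate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One scalar LSTM step. Sigmoid gates in (0, 1) decide how much of the old
# cell state to keep, how much new candidate to write, and how much to expose,
# which lets gradients flow through the cell state over long sequences.
def lstm_step(x, h, c, w=1.0, u=1.0, b=0.0):
    f = sigmoid(w * x + u * h + b)          # forget gate
    i = sigmoid(w * x + u * h + b)          # input gate
    o = sigmoid(w * x + u * h + b)          # output gate
    c_tilde = math.tanh(w * x + u * h + b)  # candidate cell state
    c = f * c + i * c_tilde                 # gated state update
    h = o * math.tanh(c)                    # gated hidden output
    return h, c

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:  # a short illustrative input sequence
    h, c = lstm_step(x, h, c)
print(h, c)
```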
Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.
A dataset and metric suite for comparing models; can be gamed or become misaligned with real-world goals.
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.