Convex optimization: Optimization problems where any local minimum is also a global minimum.
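For reference, the defining inequality in standard textbook LaTeX notation (not taken from this entry): a function f is convex when

```latex
f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y)
\quad \text{for all } x, y \text{ and } \lambda \in [0,1].
```

The local-implies-global property follows directly: if x* were only a local minimum and f(y) < f(x*) for some y, points on the segment from x* toward y would undercut f(x*) arbitrarily close to x*, a contradiction.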
Pruning: Removing weights or neurons to shrink models and improve efficiency; can be structured (whole neurons or channels) or unstructured (individual weights).
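A minimal sketch of unstructured magnitude pruning; the function name and the 50% sparsity target are illustrative, not from any particular library:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Unstructured pruning: zero out the smallest-magnitude weights
    until roughly the requested fraction has been removed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)

# Example: prune about half the entries of a random weight matrix.
w = np.random.randn(4, 4)
print(magnitude_prune(w, 0.5))
```

Structured pruning would instead drop whole rows, columns, or channels so the remaining dense tensor stays hardware-friendly.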
Voice conversion: Changing speaker characteristics of speech while preserving the linguistic content.
Image segmentation: Assigning a label to every pixel (semantic segmentation) or to each object instance (instance segmentation) to map object boundaries.
Acoustic model: Maps audio signals to linguistic units such as phonemes.
Natural language processing (NLP): AI subfield dealing with understanding and generating human language, including syntax, semantics, and pragmatics.
Vocoder: Generates audio waveforms from spectrograms.
Automatic speech recognition (ASR): Converting spoken audio into text, often using encoder-decoder or transducer architectures.
AI center of excellence: A centralized group concentrating an organization's AI expertise.
Prognostic modeling: Predicting disease progression or patient survival.
Cognitive architecture: A system-level design for general intelligence.
Attention: Mechanism that computes context-aware mixtures of representations; it parallelizes well across positions and captures long-range dependencies.
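A minimal NumPy sketch of scaled dot-product attention, the most common concrete form (shapes and names are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a softmax-weighted mixture of the value rows,
    with weights derived from query-key similarity."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # context-aware mixture

Q, K, V = np.random.randn(3, 8), np.random.randn(5, 8), np.random.randn(5, 8)
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 8)
```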
Context window: Maximum number of tokens the model can attend to in one forward pass; constrains long-document reasoning.
Language model: A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
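A toy illustration of the chain-rule factorization such a model implements; the bigram table and its probabilities are made up for the example:

```python
import math

# Hypothetical bigram model: P(next | previous), values chosen for illustration.
bigram = {
    ("<s>", "the"): 0.5, ("the", "cat"): 0.4,
    ("cat", "sat"): 0.6, ("sat", "</s>"): 0.9,
}

def sequence_log_prob(tokens):
    """Chain rule: log P(t1..tn) = sum of log P(t_i | t_{i-1})."""
    return sum(math.log(bigram.get(pair, 1e-9))  # tiny floor for unseen pairs
               for pair in zip(tokens, tokens[1:]))

print(sequence_log_prob(["<s>", "the", "cat", "sat", "</s>"]))
```

Next-token training pushes probability mass toward the observed continuation at each position, which is exactly what these conditional terms score.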
Chunking: Breaking documents into pieces for retrieval; chunk size and overlap strongly affect RAG quality.
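A minimal chunking helper with overlap; the parameter defaults are illustrative, not recommendations:

```python
def chunk_tokens(tokens, chunk_size=256, overlap=32):
    """Split a token list into overlapping chunks; the overlap preserves
    context that a hard boundary would otherwise cut in half."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]

chunks = chunk_tokens(list(range(1000)), chunk_size=256, overlap=32)
print(len(chunks), len(chunks[0]))  # 5 chunks, the first holding 256 tokens
```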
Nucleus (top-p) sampling: Samples from the smallest set of tokens whose probabilities sum to at least p, adapting the set size to the context.
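A minimal NumPy sketch following that definition directly (names are illustrative):

```python
import numpy as np

def top_p_sample(probs, p=0.9, rng=None):
    """Nucleus sampling: keep the smallest set of tokens whose total
    probability reaches p, renormalize, and sample from that set."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                   # tokens by descending prob
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1  # smallest covering set
    keep = order[:cutoff]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

probs = np.array([0.5, 0.25, 0.125, 0.0625, 0.0625])
print(top_p_sample(probs, p=0.9))  # draws only from the first four tokens
```

A sharply peaked distribution yields a tiny nucleus and a flat one a large nucleus, which is the sense in which the set size adapts to context.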
Second-order optimization: Optimization using curvature information such as the Hessian; often expensive at scale.
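A minimal sketch of a Newton step, the simplest curvature-based update; the quadratic test problem is illustrative:

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton update: x' = x - H(x)^{-1} g(x); the Hessian rescales
    the gradient using local curvature."""
    return x - np.linalg.solve(hess(x), grad(x))

# For a quadratic f(x) = 0.5 x^T A x - b^T x, one step reaches the minimizer.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = newton_step(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x, np.allclose(A @ x, b))  # exact minimizer in a single step
```

The np.linalg.solve call hints at the cost: forming and factoring a Hessian takes O(n^2) memory and O(n^3) time, which is why approximations are preferred at scale.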
Causal masking: Prevents attention to future tokens during training and inference.
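A minimal NumPy sketch using the common additive -inf convention (assumed here):

```python
import numpy as np

def causal_mask(n):
    """-inf strictly above the diagonal: position i may attend only to
    positions j <= i, since exp(-inf) contributes zero softmax weight."""
    return np.triu(np.full((n, n), -np.inf), k=1)

scores = np.random.randn(4, 4) + causal_mask(4)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(np.round(weights, 2))  # exactly zero above the diagonal
```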
Efficient attention: Attention mechanisms that reduce the quadratic complexity of full attention.
3D reconstruction: Recovering 3D structure from images.
Train-inference mismatch: Differences between training and inference conditions.
Swarm intelligence: Distributed agents whose local interactions produce emergent collective intelligence.
Auditability: Ability to inspect and verify AI decisions.
Right to explanation: Requirement to provide explanations for automated decisions.
Data protection impact assessment (DPIA): Privacy risk analysis required under GDPR-like laws.
Robust control: Control that remains stable under model uncertainty.
Friction model: Mathematical representation of friction forces.
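One standard concrete instance is a Coulomb-plus-viscous model; the notation below is conventional rather than taken from this entry:

```latex
F_f = -\mu_c F_N \,\operatorname{sgn}(v) - \mu_v v
```

where F_N is the normal force, v the sliding velocity, and \mu_c, \mu_v the Coulomb and viscous coefficients.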
Path planning: Finding a route from a start state to a goal state.
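A minimal breadth-first-search sketch on a 4-connected grid; the encoding (0 = free, 1 = obstacle) is an assumption made for the example:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path between grid cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```

BFS gives shortest paths in unweighted grids; weighted costs or continuous spaces call for Dijkstra, A*, or sampling-based planners instead.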
Legal hallucination: Fabrication of cases or statutes by LLMs.
Interpretable credit scoring: Credit models whose decision logic can be inspected and explained.