Results for "representation learning"
Representation Learning
Intermediate
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Representation learning is like teaching a computer to understand the essence of data without needing someone to explain every detail. Imagine trying to recognize different animals in pictures. Instead of manually pointing out features like fur color or size, a representation learning model can discover such features on its own, directly from the raw pixels, and reuse them for downstream tasks.
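A classic linear instance of this idea is PCA: with no labels, it finds directions that capture the data's dominant structure. The sketch below (toy data and dimensions are illustrative assumptions, not from any specific library example) learns a 2-D representation of 5-D inputs using plain NumPy:

```python
import numpy as np

# Toy data: 200 samples in 5-D that actually lie near a 2-D subspace.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))            # hidden structure
mixing = rng.normal(size=(2, 5))              # how latents show up in raw features
x = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

# Learn a 2-D representation via SVD (PCA): no labels needed.
x_centered = x - x.mean(axis=0)
_, s, vt = np.linalg.svd(x_centered, full_matrices=False)
codes = x_centered @ vt[:2].T                 # learned 2-D features per sample

print(codes.shape)                            # (200, 2)
```

The two learned directions recover most of the variance here because the data was generated from a 2-D latent; deep representation learners apply the same principle with nonlinear encoders.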
Related entries:
Self-Model: Internal representation of the agent itself.
Latent Diffusion: Diffusion performed in latent space for efficiency.
Autoencoder: Model that compresses input into latent space and reconstructs it.
Knowledge Graph: Structured graph encoding facts as entity–relation–entity triples.
Friction Model: Mathematical representation of friction forces.
Cognitive Map: Internal representation of environment layout.
Self-Supervised Learning: Learning from data by constructing "pseudo-labels" (e.g., next-token prediction, masked modeling) without manual annotation.
Preference Learning: Inferring and aligning with human preferences.
World Model: Learned model of environment dynamics.
Loss Landscape Visualization: Visualization of the optimization landscape.
Dynamics Model: Predicts the next state given the current state and action.
Linear Algebra: Mathematical foundation for ML involving vector spaces, matrices, and linear transformations.
Legal AI: AI supporting legal research, drafting, and analysis.
Embedding: A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
Underfitting: When a model cannot capture the underlying structure, performing poorly on both training and test data.
Confusion Matrix: A table summarizing classification outcomes, foundational for metrics like precision, recall, and specificity.
Tokenization: Converting text into discrete units (tokens) for modeling; subword tokenizers balance vocabulary size and coverage.
ROC Curve: Plots true positive rate vs. false positive rate across thresholds; summarizes separability.
Loss Landscape: The shape of the loss function over parameter space.
Segmentation: Assigning labels per pixel (semantic) or per instance (instance segmentation) to map object boundaries.
Heterogeneous Graph: Graph containing multiple node or edge types with different semantics.
Natural Language Processing: AI subfield dealing with understanding and generating human language, including syntax, semantics, and pragmatics.
Multimodal Fusion: Combining signals from multiple modalities.
Acoustic Model: Maps audio signals to linguistic units.
Latent World Model: Modeling environment evolution in latent space.
Credit Scoring: Predicting borrower default risk.
Hypothesis Generation: AI proposing scientific hypotheses.
Precision–Recall Curve: Often more informative than ROC on imbalanced datasets; focuses on positive-class performance.
Grounding: Constraining outputs to retrieved or provided sources, often with citation, to improve factual reliability.
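The metric entries above (confusion matrix, precision, recall, specificity) reduce to simple ratios over the four cells of a binary confusion matrix. A minimal sketch, using assumed toy counts rather than data from any real classifier:

```python
# Toy binary-classification counts (assumed for illustration).
tp, fp, fn, tn = 40, 10, 5, 45

precision = tp / (tp + fp)      # of predicted positives, fraction that are correct
recall = tp / (tp + fn)         # of actual positives, fraction that are found
specificity = tn / (tn + fp)    # of actual negatives, fraction correctly rejected

print(round(precision, 3), round(recall, 3), round(specificity, 3))  # 0.8 0.889 0.818
```

Sweeping a decision threshold and recomputing these cells at each setting yields the ROC curve (recall vs. 1 − specificity) and the precision–recall curve.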