Results for "dimensionality reduction"
Bottleneck: a narrow hidden layer that forces the network to learn compact representations.
Unsupervised learning: learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Rank: the number of linearly independent rows or columns of a matrix.
Singular value decomposition (SVD): factorizes a matrix as A = UΣVᵀ, with orthogonal U and V and a diagonal matrix of singular values; used in embeddings and compression.
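As an illustrative sketch (not from the source), the factorization and a low-rank truncation can be checked with NumPy's `numpy.linalg.svd`; the matrix `A` below is a hypothetical example:

```python
import numpy as np

# Hypothetical example matrix; any real matrix works.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])

# Thin SVD: A = U @ diag(s) @ Vt, with orthonormal columns in U and rows in Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The factors reconstruct A exactly.
A_rec = U @ np.diag(s) @ Vt
print(np.allclose(A, A_rec))  # True

# Keeping only the largest singular value gives the best rank-1 approximation:
# this truncation is the compression step.
A_1 = s[0] * np.outer(U[:, 0], Vt[0])
```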
Feature: a measurable property or attribute used as model input (raw or engineered), such as age, pixel intensity, or token ID.
Latent space: the internal space where learned representations live; directions and operations in this space often correlate with semantics or generative factors.
State space: the set of all possible configurations an agent may encounter.
Autoencoder: a model that compresses input into a latent space and reconstructs it from that compressed code.
Eigenvector: a nonzero vector whose direction is unchanged by a linear transformation; it is only scaled, by its eigenvalue.
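A minimal sketch (assuming NumPy; the matrix `M` is a hypothetical example) verifying the defining property M v = λ v for each eigenpair:

```python
import numpy as np

# Hypothetical symmetric matrix; symmetric matrices have real eigenpairs.
M = np.array([[2.0, 1.0], [1.0, 2.0]])

vals, vecs = np.linalg.eig(M)  # columns of `vecs` are eigenvectors

# For each eigenpair, M @ v equals lambda * v: the direction is unchanged,
# only the length is scaled.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(M @ v, lam * v)
```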
World model: modeling how the environment evolves, often in a learned latent space.
Information gain: the reduction in uncertainty (entropy) achieved by observing a variable; used to choose splits in decision trees and queries in active learning.
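A small self-contained sketch (not from the source) of information gain as entropy before a split minus weighted entropy after it; the labels below are a hypothetical example:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction from partitioning `labels` into `groups`."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remainder

# Hypothetical split: a perfect separation removes all uncertainty,
# so the gain equals the full 1 bit of initial entropy.
labels = ["yes", "yes", "no", "no"]
print(information_gain(labels, [["yes", "yes"], ["no", "no"]]))  # 1.0
```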
Convolutional neural network (CNN): a network using convolution operations with weight sharing and locality, effective for images and signals.
Positional encoding: encodes each token's position explicitly, often via sinusoids, so order information reaches an otherwise permutation-invariant model.
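As a sketch (assuming NumPy and the sinusoidal scheme from the original Transformer; the sequence length and model dimension are arbitrary choices):

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    """Sinusoidal positional encodings: even dimensions use sin, odd use cos,
    with geometrically spaced wavelengths from 2*pi up to 10000*2*pi."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Hypothetical sizes: 8 positions, model dimension 16.
pe = sinusoidal_positions(seq_len=8, d_model=16)
```

Each position gets a distinct vector, and fixed-offset relationships are expressible as linear functions of these encodings, which is one motivation for the sinusoidal choice.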
Transformer: an architecture based on self-attention and feedforward layers; the foundation of modern LLMs and many multimodal models.
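A minimal sketch (not from the source, assuming NumPy) of the scaled dot-product self-attention at the core of the architecture; the shapes and random weights are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every position attends to all others."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of queries to keys
    return softmax(scores) @ V                # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, model dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```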
Marginalization: eliminating variables from a joint distribution by summing or integrating over them.
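In the discrete case this is just a sum; a tiny sketch (the joint table below is a hypothetical example):

```python
# Hypothetical joint distribution P(X, Y) over two binary variables.
joint = {
    (0, 0): 0.3, (0, 1): 0.2,
    (1, 0): 0.1, (1, 1): 0.4,
}

# Marginalize out Y by summing over its values, leaving P(X).
p_x = {}
for (x, y), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p

print(p_x)  # {0: 0.5, 1: 0.5}
```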
Attention head: a single attention mechanism within multi-head attention; heads run in parallel and their outputs are concatenated.
Configuration space: the space of all possible robot configurations, such as combinations of joint angles.
Cross-attention: attention in which queries come from one modality or sequence and keys and values come from another.
Mutual information: quantifies the information shared between two random variables; it is zero exactly when they are independent.
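A self-contained sketch (not from the source) computing I(X; Y) = Σ P(x, y) log₂ [P(x, y) / (P(x)P(y))] for a hypothetical joint table:

```python
from math import log2

# Hypothetical joint distribution P(X, Y); X and Y are correlated here.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals, obtained by summing the joint.
p_x = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in (0, 1)}

# I(X; Y) in bits; positive because the variables are dependent.
mi = sum(p * log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())
print(round(mi, 4))  # ≈ 0.2781 bits
```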
Diffusion model: a generative model that learns to reverse a gradual noising process, generating data by iterative denoising.
Monte Carlo method: approximating expectations by averaging over random samples.
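A minimal sketch (not from the source): estimating E[X²] for X ~ Uniform(0, 1), whose exact value is 1/3, by averaging squared samples:

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Sample average converges to the expectation as n grows (law of large numbers).
n = 100_000
estimate = sum(random.random() ** 2 for _ in range(n)) / n
print(abs(estimate - 1 / 3) < 0.01)  # True with high probability
```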
Importance sampling: sampling from an easier proposal distribution and reweighting each sample by the ratio of target to proposal density.
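A sketch under stated assumptions (not from the source): the target is the standard normal, f(x) = x², so the true expectation is 1; the proposal Uniform(−5, 5) is an arbitrary easy-to-sample choice:

```python
import random
from math import exp, sqrt, pi

random.seed(0)

def p(x):
    """Standard normal density (the target distribution)."""
    return exp(-x * x / 2) / sqrt(2 * pi)

q_density = 1 / 10  # Uniform(-5, 5) proposal has constant density 1/10

# Estimate E_p[x^2] by drawing from q and reweighting by p(x) / q(x).
n = 200_000
total = 0.0
for _ in range(n):
    x = random.uniform(-5, 5)
    total += (x * x) * p(x) / q_density
estimate = total / n
print(abs(estimate - 1.0) < 0.05)  # close to the true value 1 for large n
```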
Memoization: storing previously computed results so repeated calls with the same inputs avoid redundant computation.
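A minimal sketch using Python's standard-library `functools.lru_cache`; naive recursive Fibonacci is the classic demonstration:

```python
from functools import lru_cache

# Without caching, naive recursion recomputes subproblems exponentially often;
# lru_cache stores each result so every value of n is computed once.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # 23416728348467685, computed in linear time
```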
Existential risk: risk that threatens humanity's survival or permanently curtails its potential.