Results for "latent space"
Latent Space
Intermediate
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Latent space is like a hidden room in a house where all its important features are stored, but you can't see them directly. Imagine taking a picture of a house and then representing its features, like the number of rooms, the size of the yard, and the color of the walls, as ...
Diffusion performed in latent space for efficiency.
Modeling environment evolution in latent space.
Model that compresses input into latent space and reconstructs it.
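The compress-and-reconstruct model described here (an autoencoder) can be sketched in its simplest linear form. Everything below is an illustrative assumption: the data shapes, the hidden 2-D factors, and the choice to take the optimal linear encoder/decoder from an SVD rather than learn them by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
Z_true = rng.normal(size=(200, 2))       # hidden 2-D generative factors
A = rng.normal(size=(2, 5))
X = Z_true @ A                           # 5-D observations on a 2-D subspace

# For squared error, the best linear autoencoder is given by the top-k
# right singular vectors of the centered data (equivalently, PCA).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:2]                               # (2, 5) encoder weights

encode = lambda x: (x - mean) @ W.T      # 5-D input  -> 2-D latent code
decode = lambda z: z @ W + mean          # 2-D latent -> 5-D reconstruction

X_hat = decode(encode(X))
print(float(np.abs(X_hat - X).max()))    # near zero: the data is truly rank-2
```

Because the synthetic data lies exactly on a 2-D subspace, a 2-D latent code loses nothing; real data only approximately has this structure.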
Autoencoder using probabilistic latent variables and KL regularization.
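This entry describes a variational autoencoder (VAE). Its KL regularizer has a closed form when the approximate posterior and the prior are diagonal Gaussians; a minimal sketch, with made-up values for `mu` and `logvar`:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form; this is the
    # regularizer added to the reconstruction loss in a VAE's objective.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

mu, logvar = np.array([0.5, -0.2]), np.array([0.1, -0.3])
eps = np.random.default_rng(0).normal(size=2)
z = mu + np.exp(0.5 * logvar) * eps      # reparameterization trick: sampling
                                         # stays differentiable w.r.t. mu, logvar
print(kl_to_standard_normal(mu, logvar))
```

The KL term is zero exactly when the posterior already equals the standard-normal prior, and positive otherwise.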
Space of all possible robot configurations.
Set of all actions available to the agent.
Models time evolution via hidden states.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Set of vectors closed under addition and scalar multiplication.
A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
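The "semantic similarity corresponds to geometric closeness" claim can be made concrete with cosine similarity. The 3-D vectors below are made up purely for illustration; real embeddings are learned and typically have hundreds of dimensions.

```python
import numpy as np

def cosine(u, v):
    # Geometric closeness of two embeddings: cosine of the angle between them.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for three items.
king  = np.array([0.90, 0.80, 0.10])
queen = np.array([0.85, 0.75, 0.20])
apple = np.array([0.10, 0.20, 0.95])

print(cosine(king, queen) > cosine(king, apple))  # semantically closer pair wins
```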
All possible configurations an agent may encounter.
Computing collision-free trajectories.
Fast approximation of costly simulations.
Probabilistic energy-based neural network with hidden variables.
Probabilistic model for sequential data with latent states.
Generative model that learns to reverse a gradual noise process.
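The "gradual noise process" this entry reverses has a closed form in the forward direction; a sketch with an assumed linear beta schedule (the schedule shape and step count are illustrative choices, not a fixed standard):

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # assumed linear noise schedule
abar = np.cumprod(1.0 - betas)          # cumulative signal-retention factor

def noisy(x0, t):
    # Closed-form forward process: x_t is mostly signal for small t and
    # nearly pure Gaussian noise by the final step.
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

x0 = rng.normal(size=4)
print(abar[0], abar[-1])                # ~1 at the start, ~0 at the end
```

The learned model is trained to undo one small noising step at a time; sampling runs that learned reversal from pure noise back to data.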
Exact likelihood generative models using invertible transforms.
Decomposes a matrix into orthogonal components; used in embeddings and compression.
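A quick numeric illustration of the compression claim: truncating the SVD gives the best low-rank approximation, and for an exactly low-rank matrix the truncation is lossless. The matrix below is a hypothetical example (its rows are arithmetic progressions, so it has rank 2):

```python
import numpy as np

M = np.arange(12, dtype=float).reshape(3, 4)   # rank-2: row_i = row_0 + 4*i
U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 2
M_k = (U[:, :k] * s[:k]) @ Vt[:k]              # best rank-k approximation of M

print(np.allclose(M, M_k))                     # True: rank 2 was enough
print(s[2])                                    # third singular value ~ 0
```

Storing `U[:, :k]`, `s[:k]`, and `Vt[:k]` instead of `M` is the compression: for large matrices, two thin factors replace one dense one.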
Eliminating variables by integrating over them.
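For a discrete joint distribution, "integrating over" a variable reduces to summing its axis of the probability table; a minimal sketch with made-up probabilities:

```python
import numpy as np

# Joint distribution p(x, y) as a table: rows index x, columns index y.
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.40]])

p_x = p_xy.sum(axis=1)    # marginalize out y: p(x) = sum_y p(x, y)
p_y = p_xy.sum(axis=0)    # marginalize out x: p(y) = sum_x p(x, y)

print(p_x)                # [0.3 0.7]
```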
Inferring human goals from behavior.
Stored compute or algorithms enabling rapid jumps.
A parameterized mapping from inputs to outputs; includes architecture + learned parameters.
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
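The parenthetical (ratios, lags, aggregations) can be made concrete; a sketch on made-up daily data, where the column names and numbers are illustrative only:

```python
import numpy as np

sales  = np.array([100., 120., 90., 150.])   # hypothetical raw columns
visits = np.array([ 50.,  60., 45.,  50.])

conversion = sales / visits                       # ratio feature
lag1 = np.concatenate(([np.nan], sales[:-1]))     # 1-step lag feature
mean_sales = np.full_like(sales, sales.mean())    # aggregation feature

print(conversion)   # [2. 2. 2. 3.]
```

Each derived column exposes structure (efficiency, recent history, typical level) that a downstream model would otherwise have to discover on its own.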
A datastore optimized for similarity search over embeddings, enabling semantic retrieval at scale.
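A brute-force version of what a vector database accelerates with approximate-nearest-neighbor indexes: store unit-normalized embeddings, score a query by dot product, return the top-k. All data here is random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
db = rng.normal(size=(100, 8))                   # 100 stored embeddings
db /= np.linalg.norm(db, axis=1, keepdims=True)  # unit-normalize rows

query = db[42] + 0.01 * rng.normal(size=8)       # a query near item 42
query /= np.linalg.norm(query)

scores = db @ query                              # cosine similarity to all items
top3 = np.argsort(-scores)[:3]
print(top3[0])                                   # 42: the nearest neighbor
```

Real systems replace the `db @ query` scan with an index (e.g. graph- or partition-based) so retrieval stays fast at millions of vectors.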
The shape of the loss function over parameter space.
A narrow minimum often associated with poorer generalization.
A wide basin often correlated with better generalization.
Encodes positional information via rotation in embedding space.
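This rotation scheme (rotary position embedding) can be sketched directly, along with the property that motivates it: dot products between rotated queries and keys depend only on relative position. The base of 10000 follows the common convention, and the embedding dimension is assumed even:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    # Rotate consecutive dimension pairs (x[2i], x[2i+1]) by pos * base**(-2i/d).
    d = x.shape[-1]
    theta = pos * base ** (-2.0 * np.arange(d // 2) / d)
    c, s = np.cos(theta), np.sin(theta)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * c - x[1::2] * s
    out[1::2] = x[0::2] * s + x[1::2] * c
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=4), rng.normal(size=4)

# Shifting both positions by the same offset leaves the attention score unchanged.
print(np.isclose(rope(q, 3) @ rope(k, 1), rope(q, 7) @ rope(k, 5)))
```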
Formal framework for sequential decision-making under uncertainty.
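This framework (a Markov decision process) admits a compact worked example: two states, two actions, known transition probabilities and rewards (all numbers made up), solved by value iteration:

```python
import numpy as np

# P[a, s, s']: probability of moving s -> s' under action a (hypothetical MDP).
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.1, 0.9]]])
R = np.array([[0.0, 1.0],    # R[a, s]: immediate reward for action a in state s
              [0.5, 0.0]])
gamma = 0.9                  # discount factor

V = np.zeros(2)
for _ in range(500):                       # value iteration: contract toward V*
    V = (R + gamma * (P @ V)).max(axis=0)  # Bellman optimality backup

policy = (R + gamma * (P @ V)).argmax(axis=0)
print(V, policy)
```

At the fixed point, `V` satisfies the Bellman optimality equation, and `policy` reads off the greedy action per state.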