Results for "latent ability"
Diffusion performed in latent space for efficiency.
Modeling environment evolution in latent space.
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Model that compresses input into latent space and reconstructs it.
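The compress-then-reconstruct idea can be sketched with a toy one-dimensional linear autoencoder trained by gradient descent; the function name, data, and learning rate are all illustrative, not from any particular library.

```python
# Minimal sketch: a linear autoencoder with a 1-D latent space, trained by
# plain gradient descent. All names and numbers here are illustrative.

def train_linear_autoencoder(xs, lr=0.01, steps=2000):
    """Learn encoder weight w and decoder weight v minimizing (x - v*w*x)^2."""
    w, v = 0.5, 0.5                # encoder and decoder weights
    for _ in range(steps):
        for x in xs:
            z = w * x              # encode: project input into latent space
            recon = v * z          # decode: reconstruct from the latent code
            err = recon - x
            # gradients of (recon - x)^2 with respect to w and v
            w -= lr * 2 * err * v * x
            v -= lr * 2 * err * z
    return w, v

w, v = train_linear_autoencoder([1.0, -2.0, 3.0])
# After training, the product v*w approaches 1: near-perfect reconstruction.
```

Real autoencoders use multi-layer nonlinear encoders and decoders, but the objective is the same: make the round trip through the latent space lose as little as possible.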
Autoencoder using probabilistic latent variables and KL regularization.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Exact likelihood generative models using invertible transforms.
Scalar summary of ROC; measures ranking ability, not calibration.
Probabilistic energy-based neural network with hidden variables.
Probabilistic model for sequential data with latent states.
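The classic inference routine for such a model is the forward algorithm, which sums over all hidden-state paths to score an observation sequence. Below is a minimal sketch using a made-up two-state weather model (the states, observations, and probabilities are illustrative).

```python
# Minimal sketch of the HMM forward algorithm: computes P(observations)
# by summing over all hidden-state paths. The weather model is made up.

def hmm_forward(obs, start, trans, emit):
    """Return P(obs) under an HMM with the given parameters."""
    states = list(start)
    # alpha[s] = P(obs[:t+1], state_t = s)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
            for s in states
        }
    return sum(alpha.values())

start = {"rainy": 0.5, "sunny": 0.5}
trans = {"rainy": {"rainy": 0.7, "sunny": 0.3},
         "sunny": {"rainy": 0.4, "sunny": 0.6}}
emit  = {"rainy": {"umbrella": 0.9, "none": 0.1},
         "sunny": {"umbrella": 0.2, "none": 0.8}}

p = hmm_forward(["umbrella", "umbrella"], start, trans, emit)  # 0.3585
```

The dynamic-programming trick is that alpha carries everything needed from the past, so the cost is linear in sequence length rather than exponential in the number of paths.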
Generative model that learns to reverse a gradual noise process.
Models time evolution via hidden states.
Decomposes a matrix into orthogonal factors and singular values; used in embeddings and compression.
Eliminating variables by integrating over them.
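For discrete variables, "integrating out" reduces to summing a joint table over the unwanted variable. A minimal sketch with a made-up joint distribution p(x, z):

```python
# Minimal sketch: marginalizing a latent variable z out of a joint
# distribution p(x, z) to obtain p(x). The joint table is made up.

joint = {  # p(x, z)
    ("x0", "z0"): 0.10, ("x0", "z1"): 0.30,
    ("x1", "z0"): 0.25, ("x1", "z1"): 0.35,
}

def marginalize_z(joint):
    """Sum over z: p(x) = sum_z p(x, z)."""
    p_x = {}
    for (x, z), p in joint.items():
        p_x[x] = p_x.get(x, 0.0) + p
    return p_x

p_x = marginalize_z(joint)  # p(x0) = 0.4, p(x1) = 0.6
```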
Inferring human goals from behavior.
Accumulated compute or algorithmic advances that, once exploited, enable rapid jumps in capability.
The set of tokens a model can represent; impacts efficiency, multilinguality, and handling of rare strings.
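A minimal sketch of a vocabulary with an unknown-token fallback, showing how strings outside the vocabulary are handled; the `<unk>` convention and helper names are illustrative (real tokenizers use subword schemes such as BPE rather than whole words).

```python
# Minimal sketch: a word-level vocabulary with an <unk> fallback for
# out-of-vocabulary (rare) strings. Names and data are illustrative.

def build_vocab(tokens):
    vocab = {"<unk>": 0}
    for tok in tokens:
        vocab.setdefault(tok, len(vocab))
    return vocab

def encode(tokens, vocab):
    """Map tokens to ids, sending unseen tokens to <unk>."""
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

vocab = build_vocab("the cat sat on the mat".split())
ids = encode("the dog sat".split(), vocab)  # "dog" falls back to the <unk> id
```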
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
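Two of the features the entry names, a ratio and a lag, can be sketched on a tiny made-up stream of (revenue, cost) rows; the column names are hypothetical.

```python
# Minimal sketch: hand-crafted features (a ratio and a one-step lag)
# computed from made-up (revenue, cost) rows.

def engineer_features(rows):
    """rows: list of (revenue, cost) pairs, in time order."""
    feats = []
    prev_revenue = None
    for revenue, cost in rows:
        feats.append({
            "margin_ratio": revenue / cost,   # ratio feature
            "revenue_lag1": prev_revenue,     # lag feature (None for first row)
        })
        prev_revenue = revenue
    return feats

feats = engineer_features([(100.0, 80.0), (120.0, 90.0)])
```

Features like these hand the model structure it would otherwise have to discover from raw columns, which is why they matter so much for tabular methods.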
Ability to replicate results given same code/data; harder in distributed training and nondeterministic ops.
Plots true positive rate vs false positive rate across thresholds; summarizes separability.
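The area under that curve has a rank interpretation: it is the probability that a randomly chosen positive is scored above a randomly chosen negative (ties counting half), which is why it measures ranking rather than calibration. A minimal sketch with made-up labels and scores:

```python
# Minimal sketch: AUC computed directly from its rank interpretation,
# P(score of random positive > score of random negative), ties counting half.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.8, 0.7, 0.2, 0.6]
a = auc(labels, scores)  # 5 of 6 positive/negative pairs ranked correctly
```

Note that rescaling the scores monotonically leaves the AUC unchanged, which is exactly the sense in which it ignores calibration.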
Ability to correctly detect positive cases (e.g., disease); the true positive rate.
Nonlinear functions enabling networks to approximate complex mappings; ReLU variants dominate modern DL.
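The dominant ReLU family is simple enough to sketch in a few lines; the slope value below is just a common default, not a fixed standard.

```python
# Minimal sketch: ReLU and leaky ReLU, the piecewise-linear activations
# that give networks their nonlinearity.

def relu(x):
    return x if x > 0 else 0.0

def leaky_relu(x, slope=0.01):
    # Small negative slope keeps gradients flowing for x < 0.
    return x if x > 0 else slope * x

vals = [relu(2.0), relu(-3.0), leaky_relu(-3.0)]
```

Without such a nonlinearity, stacking linear layers collapses to a single linear map, so depth would buy no extra expressive power.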
A measure of a model class's expressive capacity: the largest set of points it can shatter (label in every possible way).
Measures a model’s ability to fit random noise; used to bound generalization error.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
A branch of ML using multi-layer neural networks to learn hierarchical representations, often excelling in vision, speech, and language.
Configuration choices not learned directly (or not typically learned) that govern training or architecture.
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
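A minimal sketch of the update-per-example pattern: a scalar linear model fit by SGD on a stream, so it can keep adapting as the stream drifts. The data and learning rate are illustrative.

```python
# Minimal sketch of online learning: a linear model updated one example
# at a time with SGD. Stream and hyperparameters are illustrative.

def online_sgd(stream, lr=0.1):
    """stream yields (x, y); fit y ≈ w*x, updating after every example."""
    w = 0.0
    for x, y in stream:
        pred = w * x
        w -= lr * 2 * (pred - y) * x   # gradient step on squared error
    return w

# Stream drawn from y = 2x; w should move toward 2.
stream = [(x, 2.0 * x) for x in [1.0, 2.0, 1.5, 0.5] * 50]
w = online_sgd(stream)
```

Because the model never revisits old examples, the same loop handles distribution shift: recent examples simply pull the weights toward the current regime.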
How well a model performs on new data drawn from the same (or similar) distribution as training.
Of actual positives, the fraction correctly identified; sensitive to false negatives.