Recurrent neural networks (RNNs): Networks with recurrent connections for processing sequences; largely supplanted by Transformers for many tasks.
Universal approximation theorem: A feedforward network with a single hidden layer and a non-polynomial activation can approximate any continuous function on a compact domain to arbitrary accuracy, given sufficiently many hidden units.
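A minimal sketch of the theorem's flavor (not its proof): a one-hidden-layer network with random tanh features, whose output weights are fit by least squares, closely approximates sin(x) on a compact interval. The widths and scales here are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 100)           # compact domain
y = np.sin(x)                             # target continuous function

n_hidden = 200
w = rng.normal(scale=2.0, size=n_hidden)  # random input weights (fixed)
b = rng.normal(scale=2.0, size=n_hidden)  # random biases (fixed)
features = np.tanh(np.outer(x, w) + b)    # hidden-layer activations

# Fit only the output layer by least squares.
a, *_ = np.linalg.lstsq(features, y, rcond=None)
y_hat = features @ a

max_err = np.max(np.abs(y - y_hat))       # small on the sampled grid
```

With more hidden units the achievable error on the grid shrinks, which is the qualitative content of the theorem.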
Automatic speech recognition (ASR): Converting spoken audio into text, often using encoder-decoder or transducer architectures.
Bottleneck layer: A narrow hidden layer that forces the network to learn compact representations, as in an autoencoder.
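A sketch of the bottleneck idea in its simplest (linear) form: the optimal linear autoencoder with a k-unit bottleneck is equivalent to projecting onto the top-k right singular vectors (truncated SVD). Widening the bottleneck reduces reconstruction error; the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated synthetic data: 50 samples, 10 features.
X = rng.normal(size=(50, 10)) @ rng.normal(size=(10, 10))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

def reconstruct(k):
    """Encode through a k-dimensional bottleneck and decode back."""
    code = X @ Vt[:k].T          # encoder: project to k dims
    return code @ Vt[:k]         # decoder: map back to input space

err_2 = np.linalg.norm(X - reconstruct(2))  # narrow bottleneck
err_8 = np.linalg.norm(X - reconstruct(8))  # wider bottleneck
```

A nonlinear autoencoder replaces the two projections with learned encoder and decoder networks, but the compression principle is the same.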
Acoustic model: Maps audio signals to linguistic units such as phonemes.
Prosody: The timing, stress, and pitch characteristics of speech.
Self-supervised learning: Learning from unlabeled data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.
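A sketch of how next-token pseudo-labels are constructed from raw text, the self-supervised setup used by language models: each target is simply the token that follows, so no manual annotation is needed. The toy sentence is illustrative.

```python
# Raw, unlabeled token sequence.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Pseudo-labeled pairs: predict token i+1 from the prefix tokens[0..i].
pairs = [(tokens[: i + 1], tokens[i + 1]) for i in range(len(tokens) - 1)]
# e.g. (["the"], "cat"), (["the", "cat"], "sat"), ...
```

Masked modeling works the same way, except the pseudo-label is a token hidden from the input rather than the next one.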
Concept drift: The relationship between inputs and outputs changes over time, requiring monitoring and model updates.
Representation learning: Automatically learning internal features (latent variables) that capture structure useful for downstream tasks.
Latent space: The internal space where learned representations live; operations in this space often correlate with semantics or generative factors.
Hyperparameters: Configuration choices that are not learned directly (e.g., learning rate, network depth) and that govern training or architecture.
Causal inference: A framework for reasoning about cause-effect relationships beyond correlation, often using structural assumptions and experiments.
Diffusion model: A generative model that learns to reverse a gradual noising process.
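A sketch of the forward (noising) half of a diffusion model with a linear beta schedule: a clean sample is corrupted in closed form as x_t = sqrt(abar_t)·x0 + sqrt(1 − abar_t)·eps. The schedule values are illustrative; the reverse (denoising) direction is what the model actually learns.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # illustrative noise schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

rng = np.random.default_rng(0)
x0 = np.ones(4)                           # a toy "clean" sample
eps = rng.normal(size=4)                  # Gaussian noise

t = T - 1                                 # final timestep: nearly pure noise
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps
```

By the last step almost no signal remains (alphas_bar[-1] is tiny), so generation can start from pure Gaussian noise and run the learned reverse process.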
Normalizing flows: Exact-likelihood generative models built from invertible transformations.
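A sketch of the change-of-variables formula that makes flow likelihoods exact, using the simplest invertible map, x = a·z + b with z ~ N(0, 1): the flow's log-density log p(z) − log|dx/dz| must match the analytic N(b, a²) density.

```python
import numpy as np

a, b = 2.0, 1.0                 # parameters of the affine flow (arbitrary)

def log_normal(x, mu, sigma):
    """Log-density of N(mu, sigma^2) at x."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

x = 3.0
z = (x - b) / a                 # invert the flow to recover the base sample

# Change of variables: log p(x) = log p_base(z) - log |dx/dz|.
log_px_flow = log_normal(z, 0.0, 1.0) - np.log(abs(a))
log_px_true = log_normal(x, b, a)     # analytic density of N(1, 4)
```

Real flows stack many such invertible layers and keep the Jacobian log-determinant cheap to evaluate; the bookkeeping above is unchanged.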
Interventional distribution: Models the effects of interventions, written with the do-operator as do(X = x).
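A sketch of why do(X = x) differs from conditioning, using a made-up structural model with a confounder Z (edges Z→X, Z→Y, X→Y; all coefficients invented for illustration). Intervening on X cuts the Z→X edge, so E[Y | do(X=1)] recovers the direct effect, while E[Y | X=1] is inflated by the confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)                       # confounder

# Observational world: X depends on Z.
x_obs = z + rng.normal(size=n)
y_obs = 2.0 * x_obs + z + rng.normal(size=n)

# Interventional world: do(X=1) sets X by fiat, ignoring Z.
x_do = np.ones(n)
y_do = 2.0 * x_do + z + rng.normal(size=n)

e_y_do = y_do.mean()                         # E[Y | do(X=1)], ~2.0 here
near_one = np.abs(x_obs - 1.0) < 0.05        # approximate conditioning
e_y_cond = y_obs[near_one].mean()            # E[Y | X=1], inflated (~2.5)
```

The gap between the two estimates is exactly the confounding bias that causal inference machinery exists to remove.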
Counterfactual: What would have happened under different conditions, holding everything else fixed.
Average treatment effect (ATE): The expected causal effect of a treatment averaged over a population.
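A sketch of the simplest ATE estimator: under random assignment, the difference in group means is unbiased for the average treatment effect. The data is synthetic, with a true effect of 2.0 built in.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
treated = rng.integers(0, 2, size=n).astype(bool)   # random assignment
baseline = rng.normal(size=n)                       # individual variation
# Outcome: baseline plus a true treatment effect of 2.0 plus noise.
outcome = baseline + 2.0 * treated + rng.normal(scale=0.5, size=n)

# Difference-in-means estimator of the ATE.
ate_hat = outcome[treated].mean() - outcome[~treated].mean()
```

Without randomization this estimator would absorb any confounding between assignment and outcome, which is why observational studies need adjustment methods instead.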
Prediction drift: A shift in the distribution of a model's outputs over time, often a symptom of data or concept drift.
Linear algebra: The mathematical foundation of ML concerning vector spaces, matrices, and linear transformations.
Jacobian: The matrix of first-order partial derivatives of a vector-valued function.
Gradient: The vector of partial derivatives pointing in the direction of steepest ascent of a function.
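A sketch tying the two derivative objects together: checking an analytic gradient against central finite differences, here for f(x) = xᵀx whose gradient is 2x. Applied row by row to a vector-valued function, the same recipe checks a Jacobian.

```python
import numpy as np

def f(x):
    """Scalar test function f(x) = x^T x."""
    return float(x @ x)

def grad_analytic(x):
    """Known gradient of f: 2x."""
    return 2.0 * x

def grad_numeric(x, h=1e-6):
    """Central finite-difference approximation of the gradient."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.0, -2.0, 3.0])
diff = np.max(np.abs(grad_analytic(x) - grad_numeric(x)))
```

This gradient check is a standard sanity test when implementing backpropagation by hand.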
Expectation: The average value of a random variable under a distribution.
Law of large numbers: The sample mean converges to the expected value as the number of samples grows.
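A quick numerical sketch of the law of large numbers: the sample mean of Uniform(0, 1) draws approaches the true expectation 0.5 as the sample grows. Sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=100_000)  # E[X] = 0.5

mean_small = samples[:100].mean()   # noisy estimate from 100 draws
mean_large = samples.mean()         # tight estimate from 100,000 draws
```

The standard error shrinks like 1/sqrt(n), so the large-sample mean sits within about a thousandth of 0.5 here.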
Plant: In control theory, the physical system being controlled.
Stability: The property that a system returns to equilibrium after a disturbance.
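A sketch of the standard linear stability test: for x' = Ax, the system is asymptotically stable when every eigenvalue of A has negative real part, so the state decays back to the equilibrium x = 0. The matrix below is an arbitrary damped example.

```python
import numpy as np

# Companion form of x'' + 3x' + 2x = 0, a damped second-order system.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigvals = np.linalg.eigvals(A)             # eigenvalues: -1 and -2
stable = bool(np.all(eigvals.real < 0))    # negative real parts => stable
```

For nonlinear systems the same test applies locally to the Jacobian of the dynamics at the equilibrium.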
State equations: The equations governing how a system's state changes over time.
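A sketch of simulating a state equation numerically: forward Euler steps on the scalar system dx/dt = −x, whose exact solution x(t) = x₀·e^(−t) decays to equilibrium. The step size is an arbitrary small choice.

```python
import numpy as np

x0, dt, T = 1.0, 0.001, 1.0       # initial state, step size, horizon
steps = int(T / dt)

x = x0
for _ in range(steps):
    x = x + dt * (-x)             # Euler update with dynamics f(x) = -x

x_true = x0 * np.exp(-T)          # exact solution at t = T
err = abs(x - x_true)             # discretization error, shrinks with dt
```

Halving dt roughly halves the error, the expected first-order behavior of the Euler method.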
Kinematics: The study of motion without considering the forces that cause it.
Survival analysis: Modeling time-to-event outcomes, such as disease progression or patient survival.
Rademacher complexity: Measures a model class's ability to fit random noise; used to bound generalization error.
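A Monte Carlo sketch of empirical Rademacher complexity for a tiny, made-up function class (1-D threshold classifiers): the expected supremum, over the class, of correlation with random ±1 labels. All sizes and the class itself are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)                  # fixed sample of 20 points
thresholds = np.linspace(0.0, 1.0, 11)         # the function class
# Row t holds the predictions of the classifier f_t(x) = sign(x - t).
preds = np.where(x[None, :] >= thresholds[:, None], 1.0, -1.0)

n_draws = 2000
sups = np.empty(n_draws)
for k in range(n_draws):
    sigma = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher signs
    # Best achievable correlation with the random signs, over the class.
    sups[k] = np.max(preds @ sigma) / len(x)

rademacher = sups.mean()                       # empirical estimate
```

Richer classes can chase the random signs more closely, driving this value up, which is exactly why it upper-bounds the generalization gap.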