Results for "statistical learning"
Bias-variance tradeoff: A conceptual framework describing error as the sum of systematic error (bias) and sensitivity to the training data (variance).
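For squared-error loss this decomposition has a standard closed form; writing $f$ for the true function, $\hat f$ for the learned predictor, and $\sigma^2$ for the irreducible noise:

```latex
\mathbb{E}\big[(y - \hat f(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat f(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[\big(\hat f(x) - \mathbb{E}[\hat f(x)]\big)^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```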
Fine-tuning: Updating a pretrained model's weights on task-specific data to improve performance or adapt style/behavior.
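A minimal PyTorch sketch of one common variant (freezing the pretrained backbone and training only a new task head); the encoder and data here are synthetic stand-ins, not a real pretrained model:

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" encoder; in practice you would load real pretrained weights.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in encoder.parameters():
    p.requires_grad = False              # freeze the backbone

head = nn.Linear(64, 3)                  # new task-specific head (3 classes)
model = nn.Sequential(encoder, head)

x = torch.randn(256, 32)                 # synthetic task-specific inputs
y = torch.randint(0, 3, (256,))          # synthetic task labels

opt = torch.optim.Adam(head.parameters(), lr=1e-3)   # update only the head
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Full fine-tuning instead leaves all parameters trainable, typically with a smaller learning rate.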
Rademacher complexity: Measures a hypothesis class's ability to fit random noise; used to bound generalization error.
Embedding: A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
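A toy illustration with made-up three-dimensional vectors; real embeddings are produced by a trained model and typically have hundreds or thousands of dimensions:

```python
import numpy as np

emb = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["cat"], emb["dog"]))   # high: semantically close
print(cosine(emb["cat"], emb["car"]))   # low: semantically distant
```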
Parameters (weights): The learned numeric values of a model, adjusted during training to minimize a loss function.
Objective function: A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
Overfitting: When a model fits noise/idiosyncrasies of the training data and performs poorly on unseen data.
Underfitting: When a model cannot capture the underlying structure, performing poorly on both training and test data.
Generalization: How well a model performs on new data drawn from the same (or a similar) distribution as the training data.
Train/validation/test split: Separating data into training (fit), validation (tune), and test (final estimate) sets to avoid leakage and optimism bias.
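A small sketch with scikit-learn on synthetic data, producing a 60/20/20 split:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randn(1000, 8)
y = np.random.randint(0, 2, 1000)

# Carve off the held-out test set first, then split the rest into train/validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)  # 0.25 of the remaining 80% = 20%

# Fit on train, tune hyperparameters on validation, touch test only once for the final estimate.
```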
Confusion matrix: A table summarizing classification outcomes, foundational for metrics like precision, recall, and specificity.
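With made-up labels and predictions, scikit-learn computes the matrix and the derived metrics directly:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

print(confusion_matrix(y_true, y_pred))          # rows = true class, columns = predicted class
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
```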
AUC (area under the ROC curve): Scalar summary of the ROC curve; measures ranking ability, not calibration.
ROC curve: Plots true positive rate vs. false positive rate across thresholds; summarizes separability.
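Both the curve and its scalar summary (the AUC of the previous entry) can be computed from labels and predicted scores; a sketch with made-up toy scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.5])   # predicted P(y = 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points along the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))       # area under that curve
```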
Brier score: A proper scoring rule measuring squared error of predicted probabilities for binary outcomes.
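A short illustration with made-up probabilities:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.6, 0.4, 0.1])   # predicted P(y = 1)

print(np.mean((p_pred - y_true) ** 2))         # Brier score: lower is better, 0 is perfect
```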
Cross-entropy loss: Penalizes confident wrong predictions heavily; standard for classification and language modeling.
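For binary labels the penalty is the negative log of the probability assigned to the true class; a small numpy illustration with made-up probabilities:

```python
import numpy as np

y_true = np.array([1, 0, 1])
p_pred = np.array([0.9, 0.2, 0.1])   # predicted P(y = 1); the last prediction is confidently wrong

eps = 1e-12                          # avoid log(0)
ce = -np.mean(y_true * np.log(p_pred + eps) + (1 - y_true) * np.log(1 - p_pred + eps))
print(ce)                            # the -log(0.1) term from the wrong prediction dominates
```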
Mean squared error (MSE): Average of squared residuals; a common regression objective.
Momentum: Uses an exponential moving average of gradients to speed convergence and reduce oscillation.
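A minimal numpy sketch of the heavy-ball form of momentum on a toy quadratic (the matrix, learning rate, and momentum coefficient are illustrative choices):

```python
import numpy as np

A = np.diag([1.0, 25.0])        # ill-conditioned quadratic f(w) = 0.5 * w^T A w
w = np.array([1.0, 1.0])
v = np.zeros_like(w)
lr, beta = 0.03, 0.9

for _ in range(200):
    grad = A @ w
    v = beta * v + grad          # momentum buffer: exponentially decaying sum of past gradients
    w = w - lr * v

print(w)                         # close to the minimum at the origin
```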
Epoch: One complete pass through the training dataset.
Early stopping: Halting training when validation performance stops improving, to reduce overfitting.
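A framework-agnostic sketch of the stopping logic; the validation loss here is a simulated curve that improves and then degrades, standing in for real evaluation code:

```python
import random

def fake_validation_loss(epoch):
    # Simulated: improves until roughly epoch 10, then gets worse.
    return (epoch - 10) ** 2 / 100 + random.random() * 0.05

best_loss, best_epoch, patience, bad_epochs = float("inf"), -1, 3, 0
for epoch in range(100):
    val_loss = fake_validation_loss(epoch)
    if val_loss < best_loss:
        best_loss, best_epoch, bad_epochs = val_loss, epoch, 0   # also checkpoint weights here
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"stopping at epoch {epoch}; best was epoch {best_epoch}")
            break
```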
Neural network: A parameterized function composed of interconnected units organized in layers with nonlinear activations.
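A minimal example in PyTorch: two linear layers with a nonlinearity between them:

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(16, 32),   # layer of 32 units
    nn.ReLU(),           # nonlinear activation
    nn.Linear(32, 2),    # output layer
)

x = torch.randn(8, 16)   # batch of 8 inputs with 16 features each
print(mlp(x).shape)      # torch.Size([8, 2])
```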
Activation functions: Nonlinear functions enabling networks to approximate complex mappings; ReLU variants dominate modern deep learning.
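A few common choices, written out in numpy:

```python
import numpy as np

x = np.linspace(-3, 3, 7)

relu = np.maximum(0.0, x)                  # zero for negatives, identity for positives
leaky_relu = np.where(x > 0, x, 0.01 * x)  # small slope for negatives instead of zero
sigmoid = 1.0 / (1.0 + np.exp(-x))         # squashes to (0, 1)
tanh = np.tanh(x)                          # squashes to (-1, 1)

print(relu)
```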
Weight initialization: Methods for setting starting weights to preserve signal/gradient scales across layers.
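Two widely used schemes, sketched in numpy for a single weight matrix:

```python
import numpy as np

fan_in, fan_out = 256, 128

# He (Kaiming) initialization, commonly paired with ReLU.
W_he = np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / fan_in)

# Xavier (Glorot) initialization, commonly paired with tanh or sigmoid.
W_xavier = np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / (fan_in + fan_out))

print(W_he.std(), W_xavier.std())   # standard deviations chosen to keep activation scales stable
```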
Dropout: Randomly zeroing activations during training to reduce co-adaptation and overfitting.
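A sketch of the standard "inverted dropout" formulation in numpy:

```python
import numpy as np

def dropout(x, p=0.5, training=True):
    """Zero each activation with probability p and rescale survivors by 1 / (1 - p)."""
    if not training or p == 0.0:
        return x                              # no-op at inference time
    mask = (np.random.rand(*x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

a = np.ones((2, 8))
print(dropout(a, p=0.5))   # roughly half the entries zeroed, the rest scaled to 2.0
```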
Convolutional neural network (CNN): Networks using convolution operations with weight sharing and locality, effective for images and signals.
Recurrent neural network (RNN): Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Tokenization: Converting text into discrete units (tokens) for modeling; subword tokenizers balance vocabulary size and coverage.
LSTM (long short-term memory): An RNN variant using gates to mitigate vanishing gradients and capture longer context.
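A minimal usage example with PyTorch's built-in module (shapes are illustrative):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

x = torch.randn(4, 10, 16)   # 4 sequences, 10 timesteps, 16 features each
out, (h, c) = lstm(x)        # gated hidden state h and cell state c

print(out.shape)             # torch.Size([4, 10, 32]): one output per timestep
```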
Vector database: A datastore optimized for similarity search over embeddings, enabling semantic retrieval at scale.
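The core operation is nearest-neighbor search over embeddings; a brute-force numpy sketch (real vector databases add approximate indexes such as HNSW to stay fast at scale):

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 64))                    # stored embeddings
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)   # normalize for cosine similarity

query = rng.normal(size=64)
query /= np.linalg.norm(query)

scores = corpus @ query            # cosine similarity of the query to every stored item
top_k = np.argsort(-scores)[:5]    # indices of the 5 most similar items
print(top_k, scores[top_k])
```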
Transformer: Architecture based on self-attention and feedforward layers; the foundation of modern LLMs and many multimodal models.
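The core self-attention operation, sketched in numpy with random token vectors:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted mixture of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))   # 5 tokens, embedding dimension 8
out = attention(x, x, x)      # self-attention: queries, keys, values all come from x
print(out.shape)              # (5, 8)
```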