Results for "label prediction"
Differences between training and deployed patient populations.
A shift in the distribution of model outputs over time (prediction drift).
Pixel-wise classification of image regions.
Training objective where the model predicts the next token given previous tokens (causal language modeling).
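The entry above can be sketched with a toy bigram model: estimate P(next token | previous token) from counts and predict the most likely continuation. The corpus and whitespace tokenization are invented for the example; real systems use learned neural models over subword tokens.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: bigram counts stand in for a learned model.
corpus = "the cat sat on the mat the cat ran".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(prev):
    counts = bigrams[prev]
    total = sum(counts.values())
    # Return the argmax next token and its estimated probability.
    token, c = counts.most_common(1)[0]
    return token, c / total

print(predict_next("the"))  # most likely next token is 'cat'
```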
A mismatch between training and deployment data distributions that can degrade model performance.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
Penalizes confident wrong predictions heavily; standard for classification and language modeling.
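The "confident wrong predictions" point can be made concrete: cross-entropy is the negative log of the probability the model assigned to the true class, so that loss grows sharply as that probability approaches zero. The probabilities below are invented for illustration.

```python
import math

def cross_entropy(p_true_class):
    # Negative log-likelihood of the probability assigned to the true class.
    return -math.log(p_true_class)

# A confident correct prediction incurs little loss...
print(cross_entropy(0.99))  # ~0.01
# ...while a confident wrong one (true class given only 0.01) is penalized heavily.
print(cross_entropy(0.01))  # ~4.6
```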
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
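One common agreement statistic is Cohen's kappa: observed agreement between two labelers, corrected for the agreement expected by chance. A minimal sketch, with invented labels:

```python
a = ["cat", "cat", "dog", "dog", "cat", "dog"]
b = ["cat", "dog", "dog", "dog", "cat", "cat"]

def kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    cats = set(a) | set(b)
    # Chance agreement from each labeler's marginal label frequencies.
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

print(round(kappa(a, b), 3))  # 0.333
```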
Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
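Uncertainty sampling, the strategy named above, can be sketched in a few lines: among unlabeled examples, request a label for the one whose top predicted class probability is lowest (least-confidence selection). The probability batch below is invented.

```python
def least_confident(prob_batch):
    # prob_batch: predicted class probabilities per unlabeled example.
    # Pick the example whose most likely class has the lowest probability.
    return min(range(len(prob_batch)), key=lambda i: max(prob_batch[i]))

probs = [[0.95, 0.05], [0.55, 0.45], [0.80, 0.20]]
print(least_confident(probs))  # index 1, the least confident prediction
```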
Assigning category labels to images.
Assigning labels per pixel (semantic) or per instance (instance segmentation) to map object boundaries.
Train/test environment mismatch.
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
Probabilistic graphical model for structured prediction.
Monte Carlo method for state estimation.
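A minimal bootstrap particle filter illustrates the idea: propagate a cloud of state hypotheses (particles) through a motion model, weight them by observation likelihood, and resample. The 1-D random-walk model, noise scales, and observations below are all invented for the sketch.

```python
import math
import random

random.seed(0)

def likelihood(particle, observation, sigma=1.0):
    # Unnormalized Gaussian observation likelihood.
    return math.exp(-0.5 * ((particle - observation) / sigma) ** 2)

def pf_step(particles, observation, proc_sigma=0.5):
    # 1. Propagate particles through the motion model (random walk).
    moved = [p + random.gauss(0, proc_sigma) for p in particles]
    # 2. Weight each particle by how well it explains the observation.
    weights = [likelihood(p, observation) for p in moved]
    # 3. Resample in proportion to weight (multinomial resampling).
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(-5, 5) for _ in range(300)]
for obs in (0.0, 0.1, -0.1, 0.0):
    particles = pf_step(particles, obs)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # posterior mean estimate, near the observations
```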
Low-latency prediction per request.
Learning by minimizing prediction error.
Predicting case success probabilities.
Deep learning system for protein structure prediction.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
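How a loss guides gradient-based optimization can be shown with the smallest possible case: gradient descent on a squared-error loss for a one-parameter model y = w·x. The data and learning rate are made up for illustration.

```python
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true relationship: y = 2x

w, lr = 0.0, 0.05
for _ in range(200):
    # dL/dw for L = mean((w*x - y)^2) is mean(2 * (w*x - y) * x).
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step against the gradient of the loss
print(round(w, 3))  # converges toward 2.0
```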
Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
Feature attribution method grounded in cooperative game theory; explains individual predictions by assigning each feature its Shapley value, widely used on tabular data.
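The game-theoretic idea can be computed exactly for a tiny two-feature model (this is a from-scratch Shapley calculation, not the `shap` library): each feature's attribution is its average marginal contribution over all orderings, relative to a baseline. The model and baseline below are invented.

```python
def model(x1, x2):
    return 2 * x1 + 3 * x2 + x1 * x2

baseline = (0.0, 0.0)
x = (1.0, 1.0)

def value(subset):
    # Features in `subset` take their actual value, others the baseline.
    args = [x[i] if i in subset else baseline[i] for i in range(2)]
    return model(*args)

def shapley(i):
    other = 1 - i
    # Average marginal contribution of feature i over both orderings.
    return 0.5 * ((value({i}) - value(set())) +
                  (value({i, other}) - value({other})))

phi = [shapley(0), shapley(1)]
print(phi)  # [2.5, 3.5]
# Efficiency property: attributions sum to prediction minus baseline.
print(sum(phi), value({0, 1}) - value(set()))  # 6.0 6.0
```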
Local surrogate explanation method approximating model behavior near a specific input.
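The local-surrogate idea can be sketched in 1-D (this is a LIME-style illustration, not the `lime` library): perturb an input, weight samples by proximity, and fit a weighted linear model whose slope approximates the black box locally. The black-box function, kernel, and radius are invented.

```python
import math
import random

random.seed(1)

def black_box(x):
    return math.tanh(3 * x)  # stand-in for an opaque model

def local_slope(x0, n=500, radius=0.3):
    pts = [x0 + random.uniform(-radius, radius) for _ in range(n)]
    w = [math.exp(-((p - x0) / radius) ** 2) for p in pts]  # proximity kernel
    # Weighted least-squares slope of the black box around x0.
    mx = sum(wi * p for wi, p in zip(w, pts)) / sum(w)
    my = sum(wi * black_box(p) for wi, p in zip(w, pts)) / sum(w)
    num = sum(wi * (p - mx) * (black_box(p) - my) for wi, p in zip(w, pts))
    den = sum(wi * (p - mx) ** 2 for wi, p in zip(w, pts))
    return num / den

print(local_slope(0.0))  # approximates the local derivative of tanh(3x) at 0
```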
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
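A fast-gradient-sign-style sketch shows the mechanism on a toy linear "classifier": step the input against the sign of the score gradient to flip a confident prediction. Weights, input, and step size are invented; real attacks target neural networks with much smaller perturbations.

```python
import math

w = [2.0, -3.0, 1.0]   # linear score: positive => class 1
x = [0.5, -0.4, 0.3]

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

# The gradient of the score w.r.t. the input is just w; to lower the score
# (attack class 1), step each coordinate against sign(w).
eps = 0.6
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(score(x) > 0, score(x_adv) > 0)  # True False: the prediction flips
```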
Error due to sensitivity to fluctuations in the training dataset.
Systematic error introduced by simplifying assumptions in a learning algorithm.
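The two entries above can be demonstrated with a small simulation (setup invented): against a quadratic target, a constant model is stable across resampled training sets but systematically off (bias), while a 1-nearest-neighbor model tracks the target but fluctuates from sample to sample (variance).

```python
import random

random.seed(2)

def true_f(x):
    return x * x

def sample_train(n=10):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, true_f(x) + random.gauss(0, 0.3)) for x in xs]

x0 = 0.5
const_preds, nn_preds = [], []
for _ in range(300):
    train = sample_train()
    const_preds.append(sum(y for _, y in train) / len(train))     # constant model
    nn_preds.append(min(train, key=lambda p: abs(p[0] - x0))[1])  # 1-NN model

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# The constant model's predictions vary little across training sets (low
# variance, high bias); the 1-NN model's vary much more (high variance).
print(var(const_preds) < var(nn_preds))  # True
```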
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
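One round of the exchange-and-aggregate step can be sketched on a tiny graph: each node averages its neighbors' features and combines the result with its own state. The graph, features, and the simple mean/average update are invented; real GNN layers use learned transformations.

```python
adj = {0: [1, 2], 1: [0], 2: [0]}  # undirected adjacency lists
h = {0: 1.0, 1: 2.0, 2: 4.0}       # scalar node features

def message_passing_step(h, adj):
    new_h = {}
    for node, nbrs in adj.items():
        agg = sum(h[n] for n in nbrs) / len(nbrs)  # aggregate neighbor messages
        new_h[node] = 0.5 * (h[node] + agg)        # combine with own state
    return new_h

print(message_passing_step(h, adj))  # {0: 2.0, 1: 1.5, 2: 2.5}
```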
Extension of convolution to graph domains using adjacency structure.
Graphs containing multiple node or edge types with different semantics.