Results for "multiple samples"
Batch size: number of samples per gradient update; affects compute efficiency, generalization, and training stability.
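To make the definition concrete, a minimal sketch of splitting a dataset into shuffled mini-batches, where each batch would drive one gradient update (`minibatches` is a hypothetical helper, not a library API):

```python
import random

def minibatches(data, batch_size, rng=random):
    """Yield shuffled mini-batches; each batch drives one gradient update."""
    idx = list(range(len(data)))
    rng.shuffle(idx)  # reshuffle each epoch so batches differ between passes
    for start in range(0, len(idx), batch_size):
        yield [data[i] for i in idx[start:start + batch_size]]

# A smaller batch_size means more (noisier) updates per epoch;
# a larger one means fewer, smoother updates.
batches = list(minibatches(list(range(10)), batch_size=4))
```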
Active learning: selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
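As an illustration, a minimal uncertainty-sampling sketch: given per-example class probabilities from some model, pick the examples whose predictions are least confident for labeling (`uncertainty_sample` is a hypothetical name chosen here):

```python
def uncertainty_sample(probs_per_example, n):
    """Return indices of the n examples whose predicted class distribution
    is least confident (lowest maximum class probability)."""
    confidence = [max(p) for p in probs_per_example]
    order = sorted(range(len(confidence)), key=lambda i: confidence[i])
    return order[:n]

# Example: the second example (50/50 prediction) is the most uncertain.
picked = uncertainty_sample([[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]], n=1)
```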
Curriculum learning: ordering training samples from easier to harder to improve convergence or generalization.
Top-k sampling: samples from the k highest-probability tokens, excluding unlikely outputs.
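A minimal pure-Python sketch of the idea over a toy list of logits (`top_k_sample` is a hypothetical helper, not a library API):

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample a token index from the k highest-probability tokens;
    all other tokens get zero probability."""
    # Keep only the indices of the k largest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the surviving logits (subtract the max for stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

# With k=1 this reduces to greedy decoding: the argmax token is returned.
token = top_k_sample([0.1, 5.0, 0.2], k=1)
```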
Top-p (nucleus) sampling: samples from the smallest set of tokens whose cumulative probability reaches p, adapting the set size to the context.
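A companion sketch for nucleus sampling, again over toy logits (`top_p_sample` is a hypothetical name): unlike top-k, the number of candidate tokens grows or shrinks with how peaked the distribution is.

```python
import math
import random

def top_p_sample(logits, p, rng=random):
    """Sample from the smallest prefix of tokens (by descending probability)
    whose cumulative probability mass reaches p."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break  # nucleus found; its size adapts to the distribution
    weights = [probs[i] for i in kept]
    return rng.choices(kept, weights=weights, k=1)[0]

# A sharply peaked distribution yields a nucleus of size 1.
token = top_p_sample([10.0, 0.0, 0.0], p=0.5)
```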
PAC learning: a hypothesis class is PAC-learnable if, with high probability, a learning algorithm can output an approximately correct hypothesis from finitely many samples.
Generative models: models that learn to generate samples resembling the training data.
Multi-task learning: training one model on multiple tasks simultaneously to improve generalization through shared structure.
Cross-validation: a robust evaluation technique that trains and evaluates across multiple data splits to estimate performance variability.
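For example, a minimal k-fold index splitter (`kfold_indices` is a hypothetical helper; libraries such as scikit-learn provide production versions): each fold serves exactly once as the held-out evaluation set.

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds and return
    (train_indices, test_indices) pairs, one per fold."""
    # Distribute any remainder across the first n % k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, test))
        start += size
    return splits

# Train/evaluate once per fold; the spread of the k scores estimates
# performance variability.
splits = kfold_indices(10, k=5)
```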
Multimodal models: models that process or generate multiple modalities, enabling vision-language tasks, speech, video understanding, etc.
Non-convex optimization: optimization over objectives with multiple local minima and saddle points; typical of neural network training.
Multi-agent systems: multiple agents interacting cooperatively or competitively.
Heterogeneous graphs: graphs containing multiple node or edge types with different semantics.
Multimodal fusion: combining signals from multiple modalities into a joint representation.
Few-shot prompting: including multiple worked examples in the prompt to steer the model toward the desired behavior.
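As a concrete illustration, a minimal prompt builder that prepends labeled input/output pairs before the query (`few_shot_prompt` and the Input/Output template are illustrative choices, not a fixed standard):

```python
def few_shot_prompt(examples, query):
    """Build a prompt with worked (input, output) examples followed by
    the query, leaving the final output for the model to complete."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# The demonstrations establish the task format the model should imitate.
prompt = few_shot_prompt([("2+2", "4"), ("5+1", "6")], "3+3")
```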
Self-consistency: sampling multiple outputs and selecting the consensus answer.
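A minimal majority-vote sketch of the idea (`self_consistency` is a hypothetical name; `sample_fn` stands in for any stochastic generation call):

```python
from collections import Counter

def self_consistency(sample_fn, n):
    """Draw n stochastic answers from sample_fn and return the most
    common one (the consensus)."""
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Consensus over noisy samples: "a" appears twice, so it wins the vote.
draws = iter(["a", "b", "a"])
answer = self_consistency(lambda: next(draws), n=3)
```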