Results for "multiple samples"
Self-consistency: Sampling multiple outputs and selecting the consensus answer.
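A minimal sketch of consensus selection over multiple samples; `self_consistency` and `sample_fn` are hypothetical names, and the majority vote here is one simple choice of consensus rule.

```python
from collections import Counter

def self_consistency(sample_fn, n_samples=5):
    """Draw several sampled answers and return the most common one (majority vote)."""
    answers = [sample_fn() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic toy sampler for illustration.
samples = iter(["A", "B", "A", "A", "C"])
answer = self_consistency(lambda: next(samples), n_samples=5)  # "A" wins 3 of 5 votes
```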
Generative adversarial network (GAN): Two-network setup in which a generator learns to fool a discriminator.
Multi-task learning: Training one model on multiple tasks simultaneously to improve generalization through shared structure.
Multimodal models: Models that process or generate multiple modalities, enabling vision-language tasks, speech, video understanding, etc.
Batch size: Number of samples per gradient update; impacts compute efficiency, generalization, and stability.
Curriculum learning: Ordering training samples from easier to harder to improve convergence or generalization.
Generative models: Models that learn to generate samples resembling the training data.
Score-based model: Learns the score (∇ log p(x)) of the data distribution for generative sampling.
Variational autoencoder (VAE): Autoencoder using probabilistic latent variables and KL regularization.
Monte Carlo estimation: Approximating expectations via random sampling.
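A short sketch of the idea: estimate E[f(X)] by averaging f over random draws. The function name and the Uniform(0, 1) example are illustrative choices, not from the source.

```python
import random

def monte_carlo_expectation(f, sampler, n=50_000):
    """Estimate E[f(X)] by averaging f over n random draws of X."""
    return sum(f(sampler()) for _ in range(n)) / n

random.seed(0)
# Estimate E[X^2] for X ~ Uniform(0, 1); the true value is 1/3.
est = monte_carlo_expectation(lambda x: x * x, random.random)
```

The estimate's standard error shrinks as 1/sqrt(n), so accuracy improves slowly but the method needs only the ability to sample and evaluate f.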
Active learning: Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
Mode collapse: The generator produces a limited variety of outputs.
Non-convex optimization: Optimization landscapes with multiple local minima and saddle points; typical of neural networks.
Multi-head attention: Allows the model to attend to information from different representation subspaces simultaneously.
Multi-agent systems: Multiple agents interacting cooperatively or competitively.
Heterogeneous graphs: Graphs containing multiple node or edge types with different semantics.
Multimodal fusion: Combining signals from multiple modalities.
Few-shot prompting: Multiple examples included in the prompt.
Latent space: The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Empirical risk minimization: Minimizing the average loss on the training data; can overfit when data is limited or biased.
Hallucination: Model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.
Top-k sampling: Samples from the k highest-probability tokens to limit unlikely outputs.
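A minimal sketch of top-k sampling over raw logits, using only the standard library; `top_k_sample` is an illustrative helper, not an API from any particular framework.

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Keep the k highest-logit token indices, then sample among them
    with probability proportional to their (softmax) weights."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = [math.exp(logits[i]) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.1, -3.0]
token = top_k_sample(logits, k=2)
# With k=2, only indices 0 and 1 can ever be chosen; 2 and 3 are pruned.
```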
Synthetic data: Artificially created data used to train/test models; helpful for privacy and coverage, risky if unrealistic.
Top-p (nucleus) sampling: Samples from the smallest set of tokens whose probabilities sum to at least p, adapting the set size to the context.
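A stdlib-only sketch of nucleus sampling; `top_p_sample` is an illustrative name. Unlike top-k, the candidate set here grows or shrinks with how peaked the distribution is.

```python
import math
import random

def top_p_sample(logits, p, rng=random):
    """Sample from the smallest prefix of tokens (sorted by probability)
    whose cumulative probability reaches p (nucleus sampling)."""
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    weights = [probs[i] for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]

# A very peaked distribution: index 0 alone already exceeds p=0.9,
# so the nucleus contains only index 0.
token = top_p_sample([5.0, 1.0, 0.0, -2.0], p=0.9)
```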
Data poisoning: Maliciously inserting or altering training data to implant backdoors or degrade performance.
PAC learning: A concept class is PAC-learnable if, with high probability, an algorithm can learn an approximately correct hypothesis from a finite number of samples.
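To make "finite samples" concrete: for a finite hypothesis class in the realizable setting, a standard textbook bound (not stated in the source) on the number of labeled examples is

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right),
```

which suffices so that, with probability at least $1-\delta$, every hypothesis in $\mathcal{H}$ consistent with the sample has true error at most $\epsilon$.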
Diffusion model: Generative model that learns to reverse a gradual noising process.
Noise schedule: Controls the amount of noise added at each diffusion step.
Particle filter: Sequential Monte Carlo method for state estimation.
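A toy 1-D bootstrap-filter step, assuming a generic `transition` motion model and an observation `likelihood`; all names and the fixed-state example are illustrative.

```python
import math
import random

def particle_filter_step(particles, weights, transition, likelihood, obs):
    """One bootstrap particle-filter step: propagate each particle through
    the motion model, reweight by the observation likelihood, resample."""
    particles = [transition(x) for x in particles]
    weights = [w * likelihood(obs, x) for w, x in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling back to uniform weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

# Toy example: noisy observations of a state near 3.0.
random.seed(0)
n = 2000
particles = [random.uniform(-10.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
transition = lambda x: x + random.gauss(0.0, 0.1)       # small process noise
likelihood = lambda obs, x: math.exp(-(obs - x) ** 2)   # Gaussian-shaped
for obs in [3.0, 3.1, 2.9, 3.0, 3.05]:
    particles, weights = particle_filter_step(particles, weights, transition, likelihood, obs)
estimate = sum(particles) / n  # posterior mean, concentrated near 3.0
```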
Importance sampling: Sampling from an easier proposal distribution and reweighting to estimate expectations under the target.
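A minimal sketch: draw from a proposal q, weight each sample by p(x)/q(x), and average. The function and the uniform target/proposal pair are illustrative assumptions.

```python
import random

def importance_estimate(f, target_pdf, proposal_pdf, proposal_sampler, n=50_000):
    """Estimate E_p[f(X)] using draws from an easier proposal q,
    reweighting each draw by the ratio p(x)/q(x)."""
    total = 0.0
    for _ in range(n):
        x = proposal_sampler()
        total += f(x) * target_pdf(x) / proposal_pdf(x)
    return total / n

random.seed(0)
# Target p: Uniform(0, 0.5); proposal q: Uniform(0, 1). True E_p[X] = 0.25.
est = importance_estimate(
    f=lambda x: x,
    target_pdf=lambda x: 2.0 if x < 0.5 else 0.0,
    proposal_pdf=lambda x: 1.0,
    proposal_sampler=random.random,
)
```

The estimator is unbiased whenever q covers the support of p, but its variance blows up when the weights p(x)/q(x) vary wildly, which is why the proposal should resemble the target.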