Results for "samples"
Generative adversarial network (GAN): two-network setup in which a generator learns to fool a discriminator.
Batch size: the number of samples per gradient update; affects compute efficiency, generalization, and training stability.
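As a sketch of where batch size enters training, here is minimal minibatch SGD for a 1D least-squares fit; the data, learning rate, and epoch count are illustrative assumptions, not a prescribed recipe:

```python
import random

def minibatch_sgd(data, batch_size, lr=0.1, epochs=200, seed=0):
    """Fit y = w * x by SGD; batch_size controls samples per gradient update."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Gradient of the mean squared error over this batch w.r.t. w.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

# Noise-free data from y = 3x: here any batch size recovers w close to 3,
# but on real noisy data the batch size trades gradient noise for throughput.
data = [(x / 10, 3 * x / 10) for x in range(1, 11)]
w = minibatch_sgd(data, batch_size=4)
```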
Curriculum learning: ordering training samples from easier to harder to improve convergence or generalization.
Generative model: a model that learns to generate samples resembling its training data.
Score-based generative model: learns the score function (∇ log p(x)) and uses it for generative sampling.
Variational autoencoder (VAE): an autoencoder with probabilistic latent variables and KL-divergence regularization.
Monte Carlo estimation: approximating expectations via random sampling.
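A minimal sketch of the idea, using a uniform draw and an integrand chosen purely for illustration (E[X²] = 1/3 for X ~ Uniform(0, 1)):

```python
import random

def monte_carlo_mean(f, sampler, n=100_000, seed=0):
    """Approximate E[f(X)] by averaging f over n random draws of X."""
    rng = random.Random(seed)
    return sum(f(sampler(rng)) for _ in range(n)) / n

# Example: E[X^2] for X ~ Uniform(0, 1) is exactly 1/3.
estimate = monte_carlo_mean(lambda x: x * x, lambda rng: rng.random())
```

The error of such an estimate shrinks like 1/sqrt(n), independent of dimension, which is why the method scales to high-dimensional expectations.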
Latent space: the internal space where learned representations live; operations here often correlate with semantics or generative factors.
Empirical risk minimization (ERM): minimizing average loss on the training data; can overfit when data is limited or biased.
Active learning: selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
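A minimal sketch of uncertainty sampling, assuming hypothetical model predictions and prediction entropy as the uncertainty measure:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def pick_most_uncertain(predictions):
    """Return the index of the unlabeled sample with maximum prediction entropy."""
    return max(range(len(predictions)), key=lambda i: entropy(predictions[i]))

# Hypothetical model predictions over three unlabeled samples.
preds = [[0.95, 0.05], [0.50, 0.50], [0.80, 0.20]]
query_index = pick_most_uncertain(preds)  # the 50/50 sample is queried first
```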
Hallucination: model-generated content that is fluent but incorrect or unsupported by evidence; mitigated by grounding and verification.
Top-k sampling: samples from the k highest-probability tokens to limit unlikely outputs.
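One common way to implement this; the 5-token toy distribution is an assumption for illustration:

```python
import random

def top_k_sample(token_probs, k, rng=None):
    """Sample a token id from the k highest-probability tokens, renormalized."""
    rng = rng or random.Random(0)
    top = sorted(range(len(token_probs)), key=lambda i: token_probs[i], reverse=True)[:k]
    weights = [token_probs[i] for i in top]  # renormalization is implicit in choices()
    return rng.choices(top, weights=weights, k=1)[0]

# Hypothetical next-token distribution over a 5-token vocabulary.
probs = [0.5, 0.3, 0.1, 0.06, 0.04]
token = top_k_sample(probs, k=2)  # only token ids 0 and 1 can be drawn
```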
Synthetic data: artificially created data used to train or test models; helpful for privacy and coverage, risky if unrealistic.
Nucleus (top-p) sampling: samples from the smallest set of tokens whose probabilities sum to at least p, adapting the set size to context.
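A sketch of the same idea with the cutoff on cumulative probability rather than rank; the toy distribution is again an illustrative assumption:

```python
import random

def top_p_sample(token_probs, p, rng=None):
    """Sample from the smallest prefix of tokens (in descending probability)
    whose cumulative probability reaches p, renormalized."""
    rng = rng or random.Random(0)
    order = sorted(range(len(token_probs)), key=lambda i: token_probs[i], reverse=True)
    nucleus, total = [], 0.0
    for i in order:
        nucleus.append(i)
        total += token_probs[i]
        if total >= p:
            break
    weights = [token_probs[i] for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]

probs = [0.5, 0.3, 0.1, 0.06, 0.04]
token = top_p_sample(probs, p=0.8)  # nucleus is {0, 1}: 0.5 + 0.3 >= 0.8
```

Unlike top-k, the candidate set grows when the distribution is flat and shrinks when one token dominates.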
Data poisoning: maliciously inserting or altering training data to implant backdoors or degrade performance.
PAC learnability: a hypothesis class is PAC-learnable if, with high probability, an approximately correct hypothesis can be learned from finitely many samples.
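One standard bound makes "finitely many samples" concrete for a finite hypothesis class $\mathcal{H}$ in the realizable setting: with probability at least $1-\delta$, any consistent learner returns a hypothesis with error at most $\epsilon$ once the sample size satisfies

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right)
```

Here $\epsilon$ is the accuracy parameter ("approximately correct") and $\delta$ the confidence parameter ("with high probability").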
Diffusion model: a generative model that learns to reverse a gradual noising process.
Noise schedule: controls the amount of noise added at each diffusion step.
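A sketch of one common choice, a linear beta schedule and its cumulative signal-retention products (DDPM-style; the endpoint values are illustrative assumptions):

```python
def linear_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced per-step noise variances beta_t (a common DDPM choice)."""
    return [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

def alpha_bar(betas):
    """Cumulative signal retention: abar_t = prod_{s <= t} (1 - beta_s).
    The forward process gives x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * noise."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

betas = linear_schedule(T=1000)
abars = alpha_bar(betas)
# Early steps keep almost all of the signal; by the final step almost none remains.
```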
Mode collapse: the generator produces only a limited variety of outputs.
Particle filter: a Monte Carlo method for sequential state estimation.
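A minimal bootstrap particle filter sketch for a 1D random-walk state observed with Gaussian noise; the model and noise levels are illustrative assumptions:

```python
import math
import random

def particle_filter(observations, n_particles=2000, process_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter: the state follows a random walk and each
    observation is the state plus Gaussian noise; returns posterior-mean estimates."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 5.0) for _ in range(n_particles)]  # diffuse prior
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the random-walk motion model.
        particles = [p + rng.gauss(0.0, process_std) for p in particles]
        # Weight: Gaussian likelihood of the observation for each particle.
        weights = [math.exp(-((y - p) ** 2) / (2.0 * obs_std ** 2)) for p in particles]
        # Resample particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# Track a hidden state hovering near 10 from noisy readings.
estimates = particle_filter([9.8, 10.2, 10.1, 9.9, 10.0])
```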
Importance sampling: sampling from an easier proposal distribution and reweighting to estimate expectations under the target.
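A minimal sketch: estimating a mean under a target density p by drawing from a uniform proposal q and weighting each sample by p(x)/q(x); the densities here are illustrative assumptions:

```python
import random

def importance_estimate(f, p_pdf, q_pdf, q_sampler, n=100_000, seed=0):
    """Estimate E_p[f(X)] using samples from q, reweighted by w(x) = p(x)/q(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = q_sampler(rng)
        total += f(x) * p_pdf(x) / q_pdf(x)
    return total / n

# Target p(x) = 2x on [0, 1] (true mean 2/3), proposal q = Uniform(0, 1).
est = importance_estimate(f=lambda x: x,
                          p_pdf=lambda x: 2 * x,
                          q_pdf=lambda x: 1.0,
                          q_sampler=lambda rng: rng.random())
```

The estimator is unbiased as long as q puts positive density wherever p does; its variance depends on how well q matches p.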
Self-consistency: sampling multiple outputs and selecting the consensus answer.
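A minimal sketch, assuming a hypothetical stochastic solver whose sampled answers are aggregated by majority vote:

```python
import random
from collections import Counter

def self_consistency(sample_answer, n=50, seed=0):
    """Draw n stochastic answers and return the most common (consensus) one."""
    rng = random.Random(seed)
    answers = [sample_answer(rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical noisy solver: correct 80% of the time, two wrong modes otherwise.
def noisy_solver(rng):
    return rng.choices(["42", "41", "24"], weights=[0.8, 0.15, 0.05], k=1)[0]

consensus = self_consistency(noisy_solver)
```

Majority voting boosts accuracy whenever the correct answer is the single most likely output, even if no individual sample is reliable.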
Sampling-based motion planning (e.g., RRT, PRM): plans paths by randomly sampling the configuration space.
Rademacher complexity: measures a hypothesis class's ability to fit random noise; used to bound generalization error.
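The empirical version can be estimated by Monte Carlo over random sign vectors; the tiny two-hypothesis class below is an illustrative assumption with a known closed-form answer to check against:

```python
import random

def empirical_rademacher(hypotheses, xs, trials=2000, seed=0):
    """Monte Carlo estimate of R_hat = E_sigma[ sup_h (1/n) sum_i sigma_i h(x_i) ],
    where each sigma_i is an independent random sign."""
    rng = random.Random(seed)
    n = len(xs)
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * h(x) for s, x in zip(sigma, xs)) / n for h in hypotheses)
    return total / trials

# Two constant hypotheses {+1, -1}: here R_hat equals E|mean(sigma)| (~0.246 for n=10),
# reflecting how well constants can "fit" random signs.
hyps = [lambda x: 1.0, lambda x: -1.0]
r_hat = empirical_rademacher(hyps, xs=list(range(10)))
```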