Results for "generated samples"
GAN (generative adversarial network): two-network setup in which a generator learns to fool a discriminator.
Synthetic data: artificially created data used to train or test models; helpful for privacy and coverage, risky if unrealistic.
Mode collapse: the generator produces only a limited variety of outputs.
Hallucination: model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.
Synthetic sensor data: artificial sensor readings generated in simulation.
Batch size: number of samples per gradient update; affects compute efficiency, generalization, and training stability.
Curriculum learning: ordering training samples from easier to harder to improve convergence or generalization.
Generative models: models that learn to generate samples resembling the training data.
Score-based model: learns the score function ∇ log p(x) and uses it for generative sampling.
Variational autoencoder (VAE): an autoencoder with probabilistic latent variables and a KL-divergence regularizer.
Monte Carlo estimation: approximating expectations by averaging over random samples.
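The Monte Carlo entry above reduces to a few lines of code. A minimal sketch (toy setup, not from the source): estimating E[X²] for a standard normal, whose true value is 1.

```python
import random

def mc_expectation(f, sampler, n=100_000, seed=0):
    """Approximate E[f(X)] by averaging f over n random draws."""
    rng = random.Random(seed)
    return sum(f(sampler(rng)) for _ in range(n)) / n

# E[X^2] for X ~ N(0, 1) is exactly 1; the estimate lands nearby.
estimate = mc_expectation(lambda x: x * x, lambda rng: rng.gauss(0, 1))
```

The error of the estimator shrinks as O(1/√n) regardless of dimension, which is why this family of methods scales to high-dimensional expectations.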
Noise schedule: controls the amount of noise added at each diffusion step.
Self-consistency: sampling multiple outputs and selecting the consensus answer.
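The self-consistency entry above amounts to a majority vote over sampled answers. A hedged sketch, where `noisy_model` is a hypothetical stand-in for a real sampler (its name and success rate are invented for illustration):

```python
import random
from collections import Counter

def self_consistency(sample_answer, n=101, seed=0):
    """Sample n candidate answers and return the most common one."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n))
    return votes.most_common(1)[0][0]

# Hypothetical stand-in for a model: right answer ~70% of the time.
def noisy_model(rng):
    return "42" if rng.random() < 0.7 else rng.choice(["41", "43"])

consensus = self_consistency(noisy_model)
```

Even though any single sample is wrong ~30% of the time, the vote over 101 samples is wrong only with negligible probability.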
Reward model: model trained to predict human preferences (or utility) for candidate outputs; used in RLHF-style pipelines.
Temperature: scales logits before sampling; higher values increase randomness and diversity, lower values increase determinism.
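The temperature entry above divides logits by T before the softmax; a minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by temperature, then softmax: T > 1 flattens the
    distribution (more random), T < 1 sharpens it (more deterministic)."""
    scaled = [logit / temperature for logit in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, temperature=0.5)  # top token dominates
flat = softmax_with_temperature(logits, temperature=2.0)   # mass spreads out
```

At T → 0 this approaches greedy (argmax) decoding; at large T it approaches uniform sampling.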
Model collapse: a model trained on its own outputs progressively degrades in quality.
Off-policy learning: learning from data generated by a policy other than the one being improved.
Latent space: the internal space where learned representations live; operations there often correlate with semantic or generative factors.
Empirical risk minimization (ERM): minimizing average loss on the training data; can overfit when data is limited or biased.
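The ERM entry above can be made concrete with a toy regression: minimize the average squared error over a small training set by gradient descent (the data and learning rate are invented for illustration):

```python
# Toy training set, roughly y = 2x with noise.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def avg_loss(w):
    """Empirical risk: average squared error of the model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
lr = 0.05
for _ in range(200):
    # Gradient of the average squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
```

For this one-parameter quadratic the minimizer has a closed form, w* = Σxy / Σx², so the descent loop can be checked against it directly.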
Active learning: selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
Top-k sampling: samples from the k highest-probability tokens to exclude unlikely outputs.
Nucleus (top-p) sampling: samples from the smallest set of tokens whose probabilities sum to at least p, adapting the set size to context.
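The two truncation schemes above (top-k and top-p) differ only in how they choose the kept set; a minimal sketch over a toy five-token distribution:

```python
def top_k_filter(probs, k):
    """Keep the k highest-probability tokens and renormalize."""
    kept = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

def top_p_filter(probs, p):
    """Keep the smallest prefix of tokens (by descending probability)
    whose cumulative mass reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break
    return {i: probs[i] / mass for i in kept}

probs = [0.5, 0.25, 0.15, 0.07, 0.03]
k2 = top_k_filter(probs, k=2)     # always exactly 2 tokens
p85 = top_p_filter(probs, p=0.85) # as many tokens as needed to reach 0.85
```

Top-k keeps a fixed-size set; top-p keeps a variable-size set, so a sharply peaked distribution may keep one token while a flat one keeps many.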
Data poisoning: maliciously inserting or altering training data to implant backdoors or degrade performance.
PAC learning: a hypothesis class is PAC-learnable if an approximately correct hypothesis can, with high probability, be learned from finitely many samples.
Diffusion model: generative model that learns to reverse a gradual noising process.
Particle filter: a Monte Carlo method for sequential state estimation.
Importance sampling: sampling from an easier proposal distribution and reweighting samples to correct for the mismatch.
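The importance-sampling entry above can be sketched by estimating E[X²] under N(0, 1) while drawing from a wider proposal N(0, 2) and reweighting each draw by the density ratio p(x)/q(x) (a toy setup, not from the source):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(f, n=200_000, seed=0):
    """Estimate E_p[f(X)] for p = N(0, 1) by sampling from the wider
    proposal q = N(0, 2) and reweighting by p(x) / q(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0, 2)
        weight = normal_pdf(x, 0, 1) / normal_pdf(x, 0, 2)
        total += weight * f(x)
    return total / n

# E_p[X^2] under N(0, 1) is exactly 1.
estimate = importance_estimate(lambda x: x * x)
```

Choosing a proposal wider than the target keeps the weights bounded; a proposal narrower than the target can make the weight variance blow up.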
RRT (rapidly-exploring random tree): a sampling-based motion planner.
Rademacher complexity: measures a model class's ability to fit random noise; used to bound generalization error.
Autoregressive model: generates a sequence one token at a time, conditioning on previously generated tokens.
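The autoregressive entry above can be illustrated at miniature scale with a character-level bigram model: each character is sampled conditioned only on the previous one (the corpus is invented for illustration):

```python
import random
from collections import Counter, defaultdict

# Count bigram transitions in a toy corpus.
corpus = "abababababcabababab"
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length, seed=0):
    """Autoregressive loop: sample each next character given the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        tokens, weights = zip(*counts[out[-1]].items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return "".join(out)

sample = generate("a", 10)
```

A full language model replaces the bigram table with a network conditioned on the entire prefix, but the generation loop has the same shape.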