Results for "single example"
One-shot prompting: a single worked example is included in the prompt to guide the model's output.
Adversarial examples: inputs crafted to cause model errors or unsafe behavior, often imperceptible perturbations in vision or subtle rewordings in text.
Differential privacy: a formal privacy framework guaranteeing that a computation's outputs reveal little about any single individual's data contribution.
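As a concrete illustration, here is a minimal sketch of the Laplace mechanism applied to a mean query; the helper names (`laplace_noise`, `private_mean`) and the clipping bounds are illustrative, not from the source.

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) by inverse-CDF from a uniform draw.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon, rng):
    # Clip each value into [lo, hi], then add Laplace noise calibrated
    # to the mean's sensitivity, (hi - lo) / n, divided by epsilon.
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (hi - lo) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)
```

Smaller epsilon means more noise and stronger privacy; the clipping step is what bounds any one individual's influence on the result.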
Red teaming: stress-testing models for failures, vulnerabilities, policy violations, and harmful behaviors before release.
Text-to-speech (TTS): generating speech audio from text, with control over prosody, speaker identity, and style.
Specification gaming (reward hacking): the model exploits loopholes in a poorly specified objective, scoring well without doing what its designers intended.
Instrumental convergence: the tendency of capable agents to acquire resources and power as useful subgoals, regardless of their final goal.
Misalignment: the model optimizes an objective that diverges from human values or intent.
Coordination failure: multiple agents fail to reach the jointly optimal outcome, even when each acts sensibly in isolation.
Multi-task learning: training one model on multiple tasks simultaneously so that shared structure improves generalization.
Cross-validation: a robust evaluation technique that trains and evaluates across multiple data splits to estimate performance variability.
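A minimal k-fold sketch of the idea, in plain Python; the function names (`kfold_indices`, `cross_validate`) and the `train_fn`/`score_fn` callback interface are illustrative assumptions.

```python
def kfold_indices(n, k):
    # Split indices 0..n-1 into k roughly equal, contiguous folds.
    folds = []
    base, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, labels, k, train_fn, score_fn):
    # Train on k-1 folds, score on the held-out fold; return one
    # score per fold so variability across splits is visible.
    scores = []
    for held_out in kfold_indices(len(data), k):
        held = set(held_out)
        train_idx = [i for i in range(len(data)) if i not in held]
        model = train_fn([data[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        scores.append(score_fn(model,
                               [data[i] for i in held_out],
                               [labels[i] for i in held_out]))
    return scores
```

The spread of the returned fold scores (not just their mean) is what estimates performance variability.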
F1 score: the harmonic mean of precision and recall; useful when balancing false positives against false negatives matters.
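The definition above maps directly to code; a small sketch for binary labels (the function name `f1_score` mirrors common library convention but is implemented here from scratch):

```python
def f1_score(y_true, y_pred):
    # Counts for binary labels encoded as 0/1.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```

Because the harmonic mean punishes imbalance, a classifier with high precision but near-zero recall (or vice versa) still gets a low F1.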
AUC (area under the ROC curve): a scalar summary of the ROC curve; it measures ranking ability, not calibration.
Context window: the maximum number of tokens the model can attend to in one forward pass; it constrains long-document reasoning.
Prompt: the text (and possibly other modalities) given to an LLM to condition its output.
Object detection: identifying and localizing objects in images, typically returning class labels, confidence scores, and bounding boxes.
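Detections are usually compared against ground truth by intersection-over-union of their bounding boxes; a minimal sketch, assuming boxes given as `(x1, y1, x2, y2)` corners (a common but not universal convention):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.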
Maximum a posteriori (MAP) estimation: Bayesian parameter estimation using the mode of the posterior distribution.
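A worked instance, as a sketch: estimating a coin's bias with a Beta prior, where the posterior mode has a closed form. The function name `map_bernoulli` is illustrative; the formula assumes alpha, beta > 1 or enough observed data so the mode is interior.

```python
def map_bernoulli(heads, tails, alpha, beta):
    # Beta(alpha, beta) prior + Bernoulli likelihood gives a
    # Beta(alpha + heads, beta + tails) posterior, whose mode is:
    #   (heads + alpha - 1) / (heads + tails + alpha + beta - 2)
    return (heads + alpha - 1) / (heads + tails + alpha + beta - 2)
```

With a uniform prior (alpha = beta = 1) the MAP estimate reduces to the maximum-likelihood estimate heads / (heads + tails); a stronger prior pulls the mode toward the prior mean.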
Universal approximation theorem: a sufficiently wide feed-forward network with a non-polynomial activation can approximate any continuous function on a compact domain to arbitrary accuracy.
Attention head: a single attention mechanism within multi-head attention, operating on its own learned projections of queries, keys, and values.
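The core computation of one head is scaled dot-product attention, softmax(QK^T / sqrt(d)) V; a minimal plain-Python sketch (lists of vectors stand in for tensors, and the learned projections are assumed to have been applied already):

```python
import math

def attention_head(queries, keys, values):
    # One scaled dot-product attention head, computed row by row.
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output is the attention-weighted mixture of value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Multi-head attention runs several such heads in parallel on different projections and concatenates their outputs.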
Mode collapse: a GAN's generator produces only a limited variety of outputs, covering few modes of the data distribution.
Cross-attention: attention computed between two different sequences or modalities, with queries drawn from one and keys/values from the other.
Online inference: serving low-latency predictions per request, one input at a time.
Batch inference: running predictions over large datasets on a schedule, where throughput matters more than per-request latency.