Results for "ungrounded output"
Small prompt changes cause large output changes.
Learning a function from input-output pairs (labeled data), with the goal of accurately predicting outputs for unseen inputs.
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Forcing predictable formats for downstream systems; reduces parsing errors and supports validation/guardrails.
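A minimal sketch of such a downstream validation step, assuming a hypothetical JSON output format with fields `answer` and `confidence` (the field names are illustrative, not from the source):

```python
import json

# Validate that a model's raw text output parses as JSON and contains
# the fields a downstream system expects. Field names are hypothetical.
REQUIRED_FIELDS = {"answer": str, "confidence": float}

def validate_output(raw: str) -> dict:
    """Parse and validate a structured model output; raise on any mismatch."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

print(validate_output('{"answer": "42", "confidence": 0.9}'))
```

Rejecting malformed outputs at this boundary is what lets guardrail logic run before any downstream system consumes the model's text.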
Probabilistic graphical model for structured prediction.
Explicit output constraints (format, tone).
Asking model to review and improve output.
Using output to adjust future inputs.
Differences between training and deployed patient populations.
A parameterized mapping from inputs to outputs; includes architecture + learned parameters.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
A parameterized function composed of interconnected units organized in layers with nonlinear activations.
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
Feature attribution method grounded in cooperative game theory for explaining predictions in tabular settings.
Model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
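The logit-to-probability conversion mentioned here is typically a softmax; a minimal sketch, with temperature as the decoding-time manipulation (logit values are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature rescales logits first."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)                              # probabilities sum to 1
peaky = softmax([2.0, 1.0, 0.1], temperature=0.5)  # lower T -> sharper distribution
```

Lowering the temperature concentrates mass on the largest logit, which is why it is a standard decoding and calibration knob.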
A single attention mechanism within multi-head attention.
One example included to guide output.
Temporary reasoning space (often hidden).
A formal privacy framework ensuring outputs do not reveal much about any single individual’s data contribution.
Enables external computation or lookup.
Training a smaller “student” model to mimic a larger “teacher,” often improving efficiency while retaining performance.
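One common way to train the student is to match the teacher's temperature-softened output distribution; a minimal single-example sketch of that loss (logit values and the temperature are illustrative assumptions):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Loss shrinks toward 0 as the student's logits approach the teacher's.
print(distill_loss([3.0, 1.0, 0.2], [2.5, 1.2, 0.3]))
```

In practice this soft-target term is usually combined with an ordinary hard-label loss on the student.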
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
The numeric values of a model adjusted during training to minimize a loss function.
The degree to which predicted probabilities match observed frequencies (e.g., predictions made with confidence 0.8 are correct ~80% of the time).
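Calibration can be checked by binning predictions by confidence and comparing each bin's mean confidence to its observed accuracy; a minimal sketch on toy data (the predictions and labels are illustrative):

```python
# Toy data: predicted P(positive) and the true 0/1 outcomes.
preds  = [0.9, 0.8, 0.8, 0.3, 0.2, 0.7]
labels = [1,   1,   0,   0,   0,   1]

def bin_calibration(preds, labels, n_bins=5):
    """Return (mean confidence, observed accuracy) per non-empty confidence bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    report = []
    for b in bins:
        if b:
            mean_conf = sum(p for p, _ in b) / len(b)
            accuracy  = sum(y for _, y in b) / len(b)
            report.append((round(mean_conf, 2), round(accuracy, 2)))
    return report

print(bin_calibration(preds, labels))
```

For a well-calibrated model the two numbers in each pair are close; large gaps indicate over- or under-confidence.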
A proper scoring rule: the mean squared error between predicted probabilities and binary (0/1) outcomes.
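The score is simple enough to compute directly; a minimal sketch (the example inputs are illustrative):

```python
# Brier score for binary outcomes: mean squared difference between the
# predicted probability and the 0/1 outcome. Lower is better; 0 is perfect.
def brier_score(probs, outcomes):
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

print(brier_score([1.0, 0.0], [1, 0]))   # perfect predictions -> 0.0
print(brier_score([0.5, 0.5], [1, 0]))   # uninformative 50/50 -> 0.25
```

Because it is a proper scoring rule, a model minimizes its expected Brier score only by reporting its true beliefs, which is why it doubles as a calibration-sensitive metric.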