Results for "target leakage"
When information from evaluation data improperly influences training, inflating reported performance.
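A minimal sketch of one common leakage pitfall: computing preprocessing statistics (here, feature means) on the full dataset before splitting, so test-set information leaks into training. The data and split sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# Leaky: statistics computed over ALL rows, including future test rows.
mu_all = X.mean(axis=0)

# Correct: fit preprocessing on the training portion only,
# then apply those train-only statistics to the test portion.
X_train, X_test = X[:80], X[80:]
mu_train = X_train.mean(axis=0)
X_test_centered = X_test - mu_train
```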
Recovering training data from gradients.
Extracting system prompts or hidden instructions.
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
Sampling from an easier proposal distribution, then reweighting the samples to correct the estimate.
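This describes importance sampling. A minimal sketch, with an assumed target (standard normal) and proposal (wider normal): samples drawn from the proposal are weighted by the density ratio p(x)/q(x) to estimate an expectation under the target.

```python
import numpy as np

rng = np.random.default_rng(42)

def p_pdf(x):
    # Target density: standard normal, N(0, 1).
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def q_pdf(x):
    # Proposal density: wider normal, N(0, 4) -- easier to sample broadly.
    return np.exp(-x**2 / 8) / (2 * np.sqrt(2 * np.pi))

n = 100_000
x = rng.normal(0.0, 2.0, size=n)   # sample from the easier proposal q
w = p_pdf(x) / q_pdf(x)            # importance weights p(x)/q(x)
estimate = np.mean(w * x**2)       # estimates E_p[x^2], which is 1 for N(0, 1)
```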
Changing speaker characteristics while preserving content.
Separating data into training (fit), validation (tune), and test (final estimate) to avoid leakage and optimism bias.
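A minimal sketch of such a three-way split via a shuffled index; the 70/15/15 proportions are an illustrative assumption. The test indices should be touched only once, for the final estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
idx = rng.permutation(n)  # shuffle once so the three sets are disjoint

train_idx = idx[:700]     # fit model parameters
val_idx   = idx[700:850]  # tune hyperparameters
test_idx  = idx[850:]     # final, one-shot performance estimate
```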
Inferring sensitive features of training data.
The relationship between inputs and outputs changes over time, requiring monitoring and model updates.
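One way to monitor for such drift is to watch a rolling accuracy window and flag when it falls below a threshold. A hypothetical sketch; the class name, window size, and threshold are all assumptions, not a standard API.

```python
from collections import deque

class DriftMonitor:
    """Flag possible drift when rolling accuracy drops below a threshold."""

    def __init__(self, window=100, threshold=0.8):
        self.window = deque(maxlen=window)  # keeps only the last `window` results
        self.threshold = threshold

    def update(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if drift is suspected."""
        self.window.append(correct)
        acc = sum(self.window) / len(self.window)
        return len(self.window) == self.window.maxlen and acc < self.threshold
```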
A parameterized mapping from inputs to outputs; includes architecture + learned parameters.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
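A minimal sketch of such an objective for linear regression: mean squared error over the data plus an L2 regularization term. The function name and regularization strength are illustrative assumptions.

```python
import numpy as np

def objective(w, X, y, lam=0.1):
    """Mean squared error plus an L2 penalty: a scalar to minimize."""
    residuals = X @ w - y
    data_loss = np.mean(residuals**2)  # expected loss over the data
    penalty = lam * np.sum(w**2)       # regularization term
    return data_loss + penalty
```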
Reconstructing a model or its capabilities via API queries or leaked artifacts.
Reduction in uncertainty achieved by observing a variable; used in decision trees and active learning.
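This is information gain. A minimal sketch for a decision-tree split: the gain is the parent's entropy minus the size-weighted entropy of the child subsets; a perfect split recovers the full parent entropy.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, children):
    """Entropy reduction from splitting `parent` into `children` subsets."""
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

parent = np.array([0, 0, 1, 1])
children = [np.array([0, 0]), np.array([1, 1])]  # a perfectly pure split
```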
Gradually increasing the learning rate at the start of training to avoid divergence.
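A minimal sketch of linear learning-rate warmup; the base rate and warmup length are illustrative assumptions. The rate ramps from near zero to the base value, then holds (a decay schedule would typically follow).

```python
def lr_with_warmup(step, base_lr=1e-3, warmup_steps=1000):
    """Linearly ramp the learning rate to base_lr over warmup_steps, then hold."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```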
Learning from data generated by a different policy.
The generator produces only a limited variety of outputs.
A change over time in the distribution of a model's outputs.
A mismatch between the environments used for training and for testing or deployment.
Learning only from current policy’s data.
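Off-policy and on-policy learning are often contrasted via the tabular Q-learning and SARSA updates. A minimal sketch, with Q-tables as nested dicts (an assumed representation): Q-learning bootstraps from the greedy next action regardless of what the behavior policy did, while SARSA bootstraps from the action the policy actually took.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy: bootstrap from the best next action, whatever was actually taken."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy: bootstrap from the next action the current policy actually took."""
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])
```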