Results for "low-rank adaptation"
A PEFT method that injects trainable low-rank matrices into frozen model layers, enabling efficient fine-tuning with a small fraction of the original parameters.
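A minimal NumPy sketch of the low-rank idea above: a frozen weight W is augmented by a trainable product B @ A of rank r. All shapes and initializations here are illustrative, not a specific library's API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2   # rank r is much smaller than the layer dims

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable, zero-initialized

def forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradients.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
y = forward(x)
# With B = 0 at initialization, the adapted layer matches the base layer.
assert np.allclose(y, W @ x)
```

Only the r * (d_in + d_out) adapter entries are trained, which is why the approach is parameter-efficient.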
The number of linearly independent rows or columns of a matrix.
Techniques that fine-tune small additional components rather than all weights to reduce compute and storage.
Low-latency prediction per request.
Controls the size of parameter updates; too high diverges, too low trains slowly or gets stuck.
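The divergence/slow-convergence trade-off above can be seen on a toy quadratic, f(x) = x^2 (gradient 2x); the step sizes are illustrative.

```python
# Gradient descent on f(x) = x^2 with two different learning rates.
def descend(lr, steps=20, x=1.0):
    for _ in range(steps):
        x -= lr * 2 * x   # gradient of x^2 is 2x
    return x

small = descend(0.1)   # each step multiplies x by 0.8 -> converges toward 0
large = descend(1.5)   # each step multiplies x by -2  -> magnitude explodes
```

A rate above 1/L (here L = 2, the curvature) makes each update overshoot the minimum by more than it corrects.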
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
A mismatch between training and deployment data distributions that can degrade model performance.
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
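A sketch of the inverted-dropout variant of the idea above: zero each activation with probability p during training and rescale survivors by 1/(1-p) so the expected activation is unchanged. Function and argument names are illustrative.

```python
import numpy as np

def dropout(x, p, rng, training=True):
    # Randomly zero activations during training; identity at inference.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p    # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)        # inverted scaling keeps E[output] = x

rng = np.random.default_rng(0)
acts = np.ones(10)
dropped = dropout(acts, p=0.5, rng=rng)
# Each surviving activation is rescaled to 2.0; the rest are zeroed.
```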
Maintaining alignment under new conditions.
Train/test environment mismatch.
Performance drop when moving from simulation to reality.
Combining simulation and real-world data.
Learning by minimizing prediction error.
Software regulated as a medical device.
Rate at which AI capabilities improve.
Differences between training and deployed patient populations.
Of predicted positives, the fraction that are truly positive; sensitive to false positives.
Of actual positives, the fraction correctly identified; sensitive to false negatives.
Of actual negatives, the fraction correctly identified; sensitive to false positives.
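The three definitions above reduce to ratios over confusion-matrix counts; the counts here are a toy example.

```python
# Toy confusion-matrix counts: true/false positives and negatives.
tp, fp, fn, tn = 40, 10, 5, 45

precision   = tp / (tp + fp)  # of predicted positives, fraction truly positive
recall      = tp / (tp + fn)  # of actual positives, fraction identified
specificity = tn / (tn + fp)  # of actual negatives, fraction identified
# precision = 0.8, recall ~ 0.889, specificity ~ 0.818
```

Adding false positives lowers precision and specificity but leaves recall untouched, which is why the three are reported together.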
Samples from the k highest-probability tokens to limit unlikely outputs.
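A sketch of the sampling rule above over a toy logit vector: restrict to the k largest logits, renormalize, and draw. The logits are illustrative.

```python
import numpy as np

def top_k_sample(logits, k, rng):
    top = np.argsort(logits)[-k:]              # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                       # softmax over the top k only
    return rng.choice(top, p=probs)            # unlikely tokens can never appear

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.1, -3.0, -5.0])
token = top_k_sample(logits, k=2, rng=rng)
assert token in (0, 1)   # only the two highest-probability tokens are eligible
```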
A wide, flat basin in the loss landscape, often correlated with better generalization.
Drawing samples from an easier proposal distribution and reweighting them to correct for the mismatch with the target distribution.
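A sketch of the reweighting above: estimating E_p[x] for a target p = N(2, 1) while drawing from an easier proposal q = N(0, 2) and weighting each draw by p(x)/q(x). Both densities are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def p_pdf(x):  # target density N(2, 1)
    return np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)

def q_pdf(x):  # proposal density N(0, 2), easy to sample from
    return np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, size=n)   # sample from the proposal
w = p_pdf(x) / q_pdf(x)            # importance weights correct the mismatch
estimate = np.mean(w * x)          # approximates E_p[x] = 2
```

The estimator is only well-behaved when q covers p's support; here q's heavier tails keep the weights bounded.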
Classifying models by impact level.
Control shared between human and agent.
Ultra-low-latency algorithmic trading.
Measures a model’s ability to fit random noise; used to bound generalization error.