Results for "domain adaptation"
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
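A minimal sketch of the pretrained-model route, assuming PyTorch; the backbone below is a stand-in for a real pretrained feature extractor, and all shapes, sizes, and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stand-in for a pretrained feature extractor
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
for p in backbone.parameters():      # reuse the source-task knowledge as-is
    p.requires_grad = False

head = nn.Linear(64, 5)              # fresh head for a 5-class target task
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x, y = torch.randn(16, 32), torch.randint(0, 5, (16,))
for _ in range(10):                  # train only the head on target data
    loss = nn.functional.cross_entropy(head(backbone(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```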
A mismatch between training and deployment data distributions that can degrade model performance.
PEFT method injecting trainable low-rank matrices into layers, enabling efficient fine-tuning.
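The low-rank idea fits in a short wrapper. A toy LoRA-style layer, assuming PyTorch; the class name, rank, and scaling convention are illustrative, not any library's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update, scaled by alpha/rank."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at step 0
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(128, 128))
out = layer(torch.randn(4, 128))     # only A and B receive gradients
```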
The performance drop observed when a system trained in simulation is deployed in the real world.
Randomizing simulation parameters to improve real-world transfer.
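A sketch of the loop, assuming an episodic simulator; the parameter names and ranges are invented for illustration:

```python
import random

def sample_sim_params():
    """Draw fresh physics/rendering parameters for each training episode."""
    return {
        "friction": random.uniform(0.5, 1.5),
        "mass_scale": random.uniform(0.8, 1.2),
        "sensor_noise_std": random.uniform(0.0, 0.05),
        "light_intensity": random.uniform(0.3, 1.0),
    }

for episode in range(3):
    params = sample_sim_params()     # a policy trained across many such draws tends
    print(episode, params)           # to treat the real world as one more variation
```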
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
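A minimal shared-trunk sketch, assuming PyTorch; the two heads, their tasks, and the equal loss weighting are illustrative:

```python
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # representation shared by all tasks
heads = nn.ModuleDict({
    "classify": nn.Linear(32, 3),                     # task A: 3-way classification
    "regress":  nn.Linear(32, 1),                     # task B: scalar regression
})
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-3)

x = torch.randn(8, 16)
y_cls, y_reg = torch.randint(0, 3, (8,)), torch.randn(8, 1)

z = trunk(x)                                          # one forward pass feeds both heads
loss = (nn.functional.cross_entropy(heads["classify"](z), y_cls)
        + nn.functional.mse_loss(heads["regress"](z), y_reg))
loss.backward()
opt.step()
```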
Techniques that fine-tune small additional components rather than all weights to reduce compute and storage.
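One common instance besides low-rank updates is a bottleneck adapter trained beside a frozen block. A toy sketch, assuming PyTorch; the Adapter class and sizes are invented:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Down-project, nonlinearity, up-project; a residual keeps base behavior intact."""
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

frozen_block = nn.Linear(256, 256)
for p in frozen_block.parameters():
    p.requires_grad = False          # the big block is never updated

adapter = Adapter(256)
trainable = sum(p.numel() for p in adapter.parameters())
total = trainable + sum(p.numel() for p in frozen_block.parameters())
print(f"training {trainable} of {total} parameters")   # a small fraction of the block
```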
Keeping a model’s behavior aligned with its intended objectives when conditions shift away from those seen during training.
A mismatch between the environment a model is trained in and the one it is tested or deployed in.
Training on a mixture of simulated and real-world data, pairing the cheap scale of simulation with the fidelity of real samples.
Differences between the patient population a model was trained on and the population it serves after deployment.
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
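A sketch of full fine-tuning with layer-wise learning rates (earlier layers move less), assuming PyTorch; the model and rates are stand-ins:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))  # stand-in pretrained model
opt = torch.optim.Adam([
    {"params": model[0].parameters(), "lr": 1e-5},   # early layers: tiny updates
    {"params": model[2].parameters(), "lr": 1e-4},   # later layers: larger updates
])

x, y = torch.randn(16, 32), torch.randint(0, 5, (16,))
for _ in range(5):                   # all weights move, gently, toward the task data
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```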
The lowest loss attainable on a given task; no model can improve past this floor.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
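One simple instance is a Reptile-style update: adapt a copy of the model on a sampled task, then nudge the shared initialization toward the adapted weights. A toy sketch, assuming PyTorch, with an invented task family:

```python
import copy
import torch
import torch.nn as nn

def make_task():
    """Toy regression family: y = a*x + b with random a, b per task."""
    a, b = torch.randn(1), torch.randn(1)
    x = torch.randn(32, 1)
    return x, a * x + b

meta_model = nn.Linear(1, 1)         # the initialization being learned
meta_lr, inner_lr = 0.1, 0.01

for _ in range(100):                 # outer loop over sampled tasks
    model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    x, y = make_task()
    for _ in range(5):               # quick inner-loop adaptation
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():            # move the initialization toward the adapted weights
        for p_meta, p in zip(meta_model.parameters(), model.parameters()):
            p_meta += meta_lr * (p - p_meta)
```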
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
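Inverted dropout, the variant most frameworks implement, fits in a few lines of NumPy; a sketch, not any framework's actual internals:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=np.random.default_rng(0)):
    """Zero each activation with probability p; rescale survivors by 1/(1-p)
    so expected activations match between training and inference."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

h = np.ones((2, 8))
print(dropout(h, p=0.5))             # about half the units zeroed, the rest scaled to 2.0
print(dropout(h, training=False))    # identity at inference time
```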
Learning driven by reducing the gap between a model’s predictions and the observed outcomes.
Software regulated as a medical device.
Rate at which AI capabilities improve.
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
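A small pandas sketch covering the three patterns named above (ratio, lag, aggregation); the columns and values are invented:

```python
import pandas as pd

df = pd.DataFrame({
    "user":   ["a", "a", "a", "b", "b"],
    "spend":  [10.0, 12.0, 9.0, 40.0, 44.0],
    "visits": [2, 3, 1, 8, 9],
})

df["spend_per_visit"] = df["spend"] / df["visits"]                     # ratio
df["prev_spend"] = df.groupby("user")["spend"].shift(1)                # lag
df["user_mean_spend"] = df.groupby("user")["spend"].transform("mean")  # aggregation
print(df)
```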
Optimization problems where any local minimum is also a global minimum.
Optimization problems with many local minima and saddle points, typical of neural network training; the contrast with the convex case is sketched below.
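A quick numerical contrast between the two cases; the objectives, step size, and starting points are arbitrary choices for illustration:

```python
import numpy as np

convex = lambda w: (w - 3.0) ** 2                   # one minimum, at w = 3
nonconvex = lambda w: np.sin(3 * w) + 0.1 * w ** 2  # several local minima

def grad_descent(f, w, lr=0.05, steps=200, eps=1e-5):
    for _ in range(steps):
        g = (f(w + eps) - f(w - eps)) / (2 * eps)   # numerical gradient
        w -= lr * g
    return w

print(grad_descent(convex, 10.0))     # ~3.0 from any starting point
print(grad_descent(nonconvex, -2.0))  # final point depends on where you start
print(grad_descent(nonconvex, 2.0))
```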
Graphs containing multiple node or edge types with different semantics.
Extension of convolution to graph domains using adjacency structure.
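A sketch of one standard propagation rule (the symmetrically normalized form of Kipf and Welling) in NumPy; the 3-node graph and dimensions are illustrative:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One layer: H = relu(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # 3-node path graph
X = np.random.default_rng(0).normal(size=(3, 4))          # node features
W = np.random.default_rng(1).normal(size=(4, 2))          # learnable weights
print(gcn_layer(A, X, W))   # each node mixes its own and its neighbors' features
```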
Assigning category labels to images.
A transformer architecture that treats an image as a sequence of patches, analogous to tokens in text.
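The patch-to-token step is the distinctive part; a standard transformer encoder does the rest. A sketch assuming PyTorch, with illustrative patch size and width:

```python
import torch
import torch.nn as nn

img = torch.randn(1, 3, 224, 224)                    # (batch, channels, H, W)
patch, dim = 16, 64

to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # cut and embed patches
tokens = to_patches(img).flatten(2).transpose(1, 2)  # (1, 196, 64): a "sentence" of patches

cls = nn.Parameter(torch.zeros(1, 1, dim))           # classification token
pos = nn.Parameter(torch.zeros(1, 197, dim))         # learned position embeddings
tokens = torch.cat([cls, tokens], dim=1) + pos

encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
print(encoder(tokens).shape)                         # torch.Size([1, 197, 64])
```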
A model that generates audio waveforms from spectrogram representations.
Running model predictions over large datasets on a schedule, rather than serving them one request at a time.
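A minimal chunked-scoring sketch; the model function is a stand-in, and in practice a scheduler would launch the loop on a cadence:

```python
import numpy as np

def batch_predict(model_fn, rows, batch_size=1024):
    """Score a large dataset in fixed-size chunks to bound memory use."""
    out = []
    for start in range(0, len(rows), batch_size):
        out.append(model_fn(rows[start:start + batch_size]))  # one vectorized call per chunk
    return np.concatenate(out)

model_fn = lambda x: x.sum(axis=1)    # stand-in for a real model
data = np.random.default_rng(0).normal(size=(10_000, 8))
print(batch_predict(model_fn, data).shape)   # (10000,)
```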
Prompting a model with a task instruction alone, with no worked examples included in the prompt.
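The mechanics are just prompt construction; an illustrative zero-shot prompt next to a few-shot one for contrast (the task and reviews are invented):

```python
# Zero-shot: the instruction alone, no worked examples in the prompt.
zero_shot = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot, for contrast: the same task with in-context examples.
few_shot = (
    "Review: Loved the screen. Sentiment: positive\n"
    "Review: Arrived broken. Sentiment: negative\n"
    "Review: The battery died after two days. Sentiment:"
)
print(zero_shot)
```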
A centralized group that pools an organization’s AI expertise and supports teams across the business.
The motion of rigid, solid bodies under applied forces and torques.