Results for "data → model"
Randomizing simulation parameters to improve real-world transfer.
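A minimal sketch of this idea: resample simulator physics parameters at the start of every training episode so a policy never overfits one fixed simulation. The parameter names and ranges here are illustrative assumptions, not tied to any specific simulator.

```python
import random

def randomized_sim_params(rng):
    # Sample physics parameters from broad ranges each episode
    # (names and ranges are illustrative, not from a real simulator).
    return {
        "mass": rng.uniform(0.8, 1.2),
        "friction": rng.uniform(0.5, 1.5),
        "sensor_noise_std": rng.uniform(0.0, 0.05),
    }

def train_episodes(n_episodes, seed=0):
    rng = random.Random(seed)
    seen = []
    for _ in range(n_episodes):
        params = randomized_sim_params(rng)
        # ... run one training episode in a simulator configured
        # with `params`; the policy must cope with all variations ...
        seen.append(params)
    return seen

episodes = train_episodes(3)
```

The hope is that the real world looks like just one more sample from these randomized distributions.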
Estimating robot position within a map.
AI systems assisting clinicians with diagnosis or treatment decisions.
AI that ranks patients by urgency.
AI-assisted review of legal documents.
AI predicting crime patterns (highly controversial).
Predicting case success probabilities.
Identifying suspicious transactions.
AI applied to scientific problems.
AI proposing scientific hypotheses.
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
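A bare-bones k-fold version of this technique, using only the standard library; the toy "model" (a training-set mean scored by mean absolute error) is a placeholder for any real train/score pair, and real splits are usually shuffled first.

```python
import statistics

def k_fold_indices(n, k):
    """Split range(n) into k contiguous folds (sketch; shuffle in practice)."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, k, train_fn, score_fn):
    """Train on k-1 folds, score on the held-out fold, and report
    the mean and spread of the k scores."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = train_fn([data[j] for j in train_idx])
        scores.append(score_fn(model, [data[j] for j in test_idx]))
    return statistics.mean(scores), statistics.pstdev(scores)

# Toy usage: the "model" is just the training mean, scored by MAE.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
mean_score, score_sd = cross_validate(
    data, k=3,
    train_fn=lambda train: statistics.mean(train),
    score_fn=lambda m, test: statistics.mean(abs(x - m) for x in test),
)
```

The standard deviation across folds is exactly the "performance variability" the definition refers to.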
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Error due to sensitivity to fluctuations in the training dataset.
The range of functions a model can represent.
Models accessible only via service APIs.
Multiple worked examples included in the prompt so the model can infer the task format and desired behavior.
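A small sketch of assembling such a prompt; the Review/Sentiment template is just one common convention, not a required format.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate an instruction, labeled examples, and the new query."""
    parts = ["Classify the sentiment of each review as positive or negative."]
    for text, label in examples:
        parts.append(f"Review: {text}\nSentiment: {label}")
    # The final entry is left unanswered for the model to complete.
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    [("Great battery life.", "positive"), ("Broke after a week.", "negative")],
    "Screen is gorgeous.",
)
```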
Classifying models by impact level.
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
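As a concrete instance of this definition, here is a sketch of a scalar objective for a 1-D linear model: mean squared error over the data plus an L2 regularization term. The data and the 0.1 penalty weight are made up for illustration.

```python
def objective(w, b, data, l2=0.1):
    """Mean squared error over (x, y) pairs plus an L2 penalty on the weight."""
    mse = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
    return mse + l2 * w ** 2

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # generated by y = 2x + 1
loss_fit = objective(2.0, 1.0, data)   # data term is zero; only the penalty remains
loss_bad = objective(0.0, 0.0, data)   # large data term, no penalty
```

Training searches for the parameters (here `w`, `b`) that minimize this single scalar.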
Fine-tuning on (prompt, response) pairs to align a model with instruction-following behaviors.
Automated testing and deployment processes for models and data workflows, extending DevOps to ML artifacts.
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
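A toy sketch of the fast-gradient-sign idea on a linear classifier with logistic loss, where the input gradient has a closed form: its elementwise sign is `-y * sign(w)`, so the attack is a tiny step in that direction. Weights and inputs below are made-up numbers.

```python
def fgsm_linear(w, x, y, eps):
    """Fast-gradient-sign perturbation for a linear score w.x under
    logistic loss with label y in {-1, +1}: x_adv = x + eps * sign(dL/dx),
    and for this model sign(dL/dx) = -y * sign(w) elementwise."""
    sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    return [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [1.0, -2.0, 0.5]
x = [0.3, -0.2, 0.4]          # score 0.9 -> classified as +1
x_adv = fgsm_linear(w, x, y=1, eps=0.5)   # small per-coordinate change
```

Each coordinate moves by at most `eps`, yet the classification flips, which is the defining property of such inputs.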
Using production outcomes to improve models.
Model relies on spurious signals that correlate with labels but are irrelevant to the true task.
Model performs well during training and evaluation but degrades once deployed.
Coordinating models, tools, and logic.
Fast approximation of costly simulations.
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
Reconstructing a model or its capabilities via API queries or leaked artifacts.
Running new model alongside production without user impact.
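A minimal sketch of the serving pattern this describes: every request is answered by the production model, while the candidate ("shadow") model runs on the same input purely for logging and offline comparison. The function and parameter names are hypothetical.

```python
def handle_request(request, prod_model, shadow_model, log):
    """Serve the production answer; record the shadow answer with no user impact."""
    prod_out = prod_model(request)
    try:
        shadow_out = shadow_model(request)
        log.append({"request": request, "prod": prod_out, "shadow": shadow_out})
    except Exception:
        # A shadow failure must never affect the user-facing response.
        pass
    return prod_out

log = []
result = handle_request(
    "some input",
    prod_model=lambda r: "prod-answer",
    shadow_model=lambda r: "shadow-answer",
    log=log,
)
```

Comparing the logged pairs offline is what tells you whether the new model is safe to promote.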