Results for "data → model"
Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.
Tracking where data came from and how it was transformed; key for debugging and compliance.
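A minimal sketch of what a per-step lineage record can capture; the field names and hashing scheme here are illustrative, not from any particular lineage tool:

```python
# Sketch: one immutable lineage record per transformation step.
import hashlib, json, time

def lineage_record(inputs: list, transform: str, params: dict) -> dict:
    """Describe one step: which data went in, what was done, with what settings."""
    payload = {
        "inputs": inputs,        # content hashes or URIs of the source data
        "transform": transform,  # name/version of the processing step
        "params": params,        # parameters that affect the output
        "timestamp": time.time(),
    }
    # Hash the record so downstream steps can reference it immutably.
    payload["record_id"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()[:16]
    return payload

step = lineage_record(
    inputs=["sha256:ab12...raw_logs"],  # hypothetical upstream artifact
    transform="dedupe_v2",
    params={"key": "user_id"},
)
print(step["record_id"])
```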
Artificially created data used to train/test models; helpful for privacy and coverage, risky if unrealistic.
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
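A minimal FedAvg-style sketch, assuming a linear model and synthetic per-client data; real deployments add secure aggregation, client sampling, and weighting by dataset size:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Per-client private datasets (never centralized).
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w = np.zeros(2)  # global model held by the server
for _round in range(20):
    local_ws = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # a few local SGD steps on private data
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)  # server aggregates updates, not raw data

print(w)  # approaches [2, -1]
```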
Expanding training data via transformations (flips, noise, paraphrases) to improve robustness.
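A minimal sketch of two common augmentations (horizontal flip, Gaussian noise) applied to a toy image batch:

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32, 3))   # (N, H, W, C) images with values in [0, 1]

flipped = batch[:, :, ::-1, :]       # mirror each image along the width axis
noisy = np.clip(batch + rng.normal(scale=0.05, size=batch.shape), 0.0, 1.0)

augmented = np.concatenate([batch, flipped, noisy])
print(augmented.shape)               # (24, 32, 32, 3): 3x the original batch
```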
Improving model performance primarily by training on more data; returns typically diminish and follow empirical scaling curves.
Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.
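A minimal at-rest encryption sketch using the `cryptography` package's Fernet (symmetric, authenticated encryption); key management and in-transit protection (TLS) are out of scope here:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store in a secrets manager, never in code
f = Fernet(key)

token = f.encrypt(b"user_id,diagnosis\n123,...")  # ciphertext safe to store
print(f.decrypt(token))       # only holders of the key can recover the data
```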
When information from evaluation data improperly influences training, inflating reported performance.
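A minimal sketch of one common leakage bug: computing normalization statistics over the full dataset, so the test split influences training-time preprocessing:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 4))
train, test = X[:800], X[800:]

# Leaky: statistics computed over train AND test together.
mu_bad, sd_bad = X.mean(axis=0), X.std(axis=0)

# Correct: statistics from the training split only, then applied to both.
mu, sd = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mu) / sd
test_scaled = (test - mu) / sd   # test data never influences the statistics
```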
Shift in feature distribution over time.
Human or automated process of assigning targets; quality, consistency, and guidelines matter heavily.
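A minimal sketch of auditing label consistency with Cohen's kappa between two annotators (toy labels, hand-rolled formula):

```python
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """Agreement beyond chance between two annotators on the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[k] * pb[k] for k in pa) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

ann1 = ["cat", "dog", "dog", "cat", "cat", "bird"]
ann2 = ["cat", "dog", "cat", "cat", "cat", "bird"]
print(round(cohens_kappa(ann1, ann2), 3))  # ~0.71: substantial agreement
```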
Maliciously inserting or altering training data to implant backdoors or degrade performance.
Inferring sensitive attributes of individuals represented in the training data, e.g., from a model's outputs or parameters.
Structured assessment of the privacy risks a data-processing activity poses to individuals; required for high-risk processing under GDPR-like laws.
When a model fits noise/idiosyncrasies of training data and performs poorly on unseen data.
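A minimal demo, assuming a linear true signal: a high-degree polynomial drives training error toward zero while held-out error typically grows:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
y = 2 * x + rng.normal(scale=0.3, size=30)   # true signal is linear plus noise
x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]

for degree in (1, 9):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: np.mean((np.polyval(coeffs, xs) - ys) ** 2)
    print(degree, round(mse(x_tr, y_tr), 3), round(mse(x_te, y_te), 3))
# The degree-9 fit chases the noise: lower train error, worse test error.
```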
Learning from data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.
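A minimal sketch of pseudo-label construction for next-token prediction; inputs and targets both come from the raw text, with no human annotation:

```python
text = "the cat sat on the mat"
tokens = text.split()

# Each prefix becomes an input; the following token becomes its target.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
for context, target in pairs:
    print(context, "->", target)
# (['the'], 'cat'), (['the', 'cat'], 'sat'), ...
```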
Diffusion model trained to remove noise step by step.
Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Information that can identify an individual (directly or indirectly); requires careful handling and compliance.
Generative model that learns to reverse a gradual noise process.
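A minimal sketch of the forward (noising) side in closed form, using the standard linear beta schedule; the learned reverse model that predicts the noise is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention

x0 = rng.normal(size=(4, 8))         # a toy "clean" data batch
t = 500
eps = rng.normal(size=x0.shape)
# Closed-form noising: x_t = sqrt(alpha_bar_t)*x0 + sqrt(1 - alpha_bar_t)*eps
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
print(alpha_bar[t])                  # fraction of signal surviving at step t
```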
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
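A minimal online-learning sketch: one SGD update per arriving example, tracking a distribution that changes mid-stream:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)
true_w = np.array([1.0, 2.0])

for step in range(10_000):
    if step == 5_000:
        true_w = np.array([-1.0, 0.5])   # the stream's distribution shifts
    x = rng.normal(size=2)
    y = x @ true_w + rng.normal(scale=0.1)
    w -= 0.01 * (x @ w - y) * x          # one SGD step per arriving example

print(w)                                 # tracks the latest true_w
```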
A mismatch between training and deployment data distributions that can degrade model performance.
Empirical laws linking model size, data, compute to performance.
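A minimal numeric sketch of the usual power-law form, loss = a * N**(-alpha) + c, with illustrative constants (not taken from any particular paper):

```python
a, alpha, c = 8.0, 0.3, 0.5   # illustrative constants; c is irreducible loss

for n in (1e3, 1e6, 1e9):
    loss = a * n ** -alpha + c
    print(f"N={n:.0e}  predicted loss={loss:.3f}")
# Each 1000x increase in data shrinks the reducible loss by the same
# constant factor (1000**0.3), so gains come at steeply rising data cost.
```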
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
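A minimal pandas sketch of ratio, lag, and rolling-aggregation features on a toy table (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "revenue": [100, 120, 90, 150, 160, 130],
    "cost":    [ 80,  85, 70, 100, 110,  95],
})

df["margin_ratio"] = df["revenue"] / df["cost"]        # ratio feature
df["revenue_lag1"] = df["revenue"].shift(1)            # lag feature
df["revenue_roll3"] = df["revenue"].rolling(3).mean()  # rolling aggregation
print(df)
```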
Combining simulation-generated and real-world data for training, trading cheap, controllable coverage against realism.
Training with a small labeled dataset plus a larger unlabeled dataset, leveraging assumptions like smoothness/cluster structure.
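A minimal self-training sketch: fit on the labeled points, pseudo-label the most confident unlabeled ones, repeat. It uses a nearest-centroid classifier and a hand-picked confidence margin for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian blobs; only 5 points per class start out labeled.
X0 = rng.normal(loc=-2.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, size=(200, 2))
X_lab = np.vstack([X0[:5], X1[:5]])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([X0[5:], X1[5:]])

for _ in range(3):
    c0 = X_lab[y_lab == 0].mean(axis=0)
    c1 = X_lab[y_lab == 1].mean(axis=0)
    d0 = np.linalg.norm(X_unlab - c0, axis=1)
    d1 = np.linalg.norm(X_unlab - c1, axis=1)
    confident = np.abs(d0 - d1) > 2.0   # margin-based confidence threshold
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, (d1[confident] < d0[confident]).astype(int)])
    X_unlab = X_unlab[~confident]

print(len(y_lab), "labeled after self-training")
```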
Recovering training examples from shared gradient updates; a known privacy risk in federated learning.
Diffusion performed in latent space for efficiency.
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Sequential data indexed by time.
Artificial sensor data generated in simulation.