Results for "temporal data"
CNNs applied to time series.
Sequential data indexed by time.
Temporal and pitch characteristics of speech.
Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Predicting future values from past observations.
Repeating temporal patterns.
Interpreting human gestures.
Masks attention scores so each position cannot attend to future tokens, preserving the left-to-right (autoregressive) order during training and inference.
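A minimal numpy sketch of the idea: an upper-triangular boolean mask sets future-position scores to negative infinity, so softmax assigns them zero weight (illustrative only; `causal_mask` and `masked_attention` are hypothetical helper names).

```python
import numpy as np

def causal_mask(seq_len):
    # True above the diagonal marks future positions to be blocked.
    return np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)

def masked_attention(scores):
    # Set future-position scores to -inf so softmax gives them zero weight.
    s = scores.copy()
    s[causal_mask(s.shape[0])] = -np.inf
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With uniform scores, position 0 attends only to itself, while the last position attends evenly to all earlier positions.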
Combines value estimation (critic) with policy learning (actor).
Pixel motion estimation between frames.
Number of steps considered in planning.
Modifying reward to accelerate learning.
Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.
Tracking where data came from and how it was transformed; key for debugging and compliance.
Artificially created data used to train/test models; helpful for privacy and coverage, risky if unrealistic.
Expanding training data via transformations (flips, noise, paraphrases) to improve robustness.
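A small sketch of label-preserving transformations on an image array (the `augment` generator is a hypothetical example, not a library API):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Yield simple label-preserving variants of a 2-D image array."""
    yield np.fliplr(image)                          # horizontal flip
    yield np.flipud(image)                          # vertical flip
    yield image + rng.normal(0, 0.01, image.shape)  # small Gaussian noise
```

Each variant is added to the training set alongside the original, effectively multiplying the data without new collection.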
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
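The aggregation step can be sketched as a FedAvg-style weighted average, assuming each client contributes its parameter vector and dataset size (a simplification; real systems add secure aggregation, clipping, etc.):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size.
    Only parameter vectors leave each client, never the raw data."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (n_clients, n_params)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()
```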
Improving model performance by increasing the volume of training data; a common axis of scaling alongside model size and compute.
Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.
Human or automated process of assigning targets; quality, consistency, and guidelines matter heavily.
When information from evaluation data improperly influences training, inflating reported performance.
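One common source is preprocessing fitted on the full dataset before splitting. A leak-free sketch fits normalization statistics on the training split only (hypothetical helper name):

```python
import numpy as np

def standardize_no_leakage(train, test):
    # Fit statistics on the training split only; applying them to the
    # test split keeps evaluation data from influencing preprocessing.
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sigma, (test - mu) / sigma
```

Fitting `mu` and `sigma` on train and test combined would quietly feed test-set information into training, inflating reported scores.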
Shift in feature distribution over time.
Maliciously inserting or altering training data to implant backdoors or degrade performance.
Structured assessment of privacy risks to individuals, required for high-risk data processing under GDPR-like laws.
Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
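Group discovery is the classic case. A minimal k-means sketch, assuming points form compact clusters (illustrative, not a production clusterer):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: discover k groups in unlabeled points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

No labels are supplied; the grouping emerges from distances alone.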
When a model fits noise/idiosyncrasies of training data and performs poorly on unseen data.
Information that can identify an individual (directly or indirectly); requires careful handling and compliance.
Attack that infers sensitive attributes of the training data from a trained model's outputs or parameters.
Learning from data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.
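Next-token prediction illustrates how pseudo-labels fall out of the data itself (the `next_token_pairs` helper is a hypothetical name):

```python
def next_token_pairs(tokens):
    """Build (context, target) training pairs for next-token prediction;
    targets come from the data itself, so no manual labels are needed."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
```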
Training with a small labeled dataset plus a larger unlabeled dataset, leveraging assumptions like smoothness/cluster structure.
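A one-round self-training sketch makes the cluster assumption concrete: fit class centroids on the labeled set, then pseudo-label unlabeled points by nearest centroid (hypothetical helper; real pipelines iterate and filter by confidence):

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab):
    """Pseudo-label unlabeled points by nearest class centroid,
    assuming cluster structure aligns with the classes."""
    classes = np.unique(y_lab)
    centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_unlab[:, None] - centroids[None], axis=-1)
    return classes[np.argmin(dists, axis=1)]
```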