Results for "full pass through data"

69 results

Context Window Intermediate

Maximum number of tokens the model can attend to in one forward pass; constrains long-document reasoning.

Transformers & LLMs
Multitask Learning Intermediate

Training one model on multiple tasks simultaneously to improve generalization through shared structure.

Machine Learning
Vanishing Gradient Intermediate

Gradients shrink as they propagate backward through many layers, slowing learning in early layers; mitigated by ReLU activations, residual connections, and normalization.

Foundations & Theory
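
A minimal numerical sketch of why this happens (pure NumPy, not from the glossary): the sigmoid derivative is at most 0.25, so a chain of sigmoid layers shrinks the backpropagated gradient roughly geometrically.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # at most 0.25, attained at x = 0

grad = 1.0                        # gradient arriving at the final layer
for _ in range(20):               # chain rule through 20 scalar sigmoid units
    grad *= sigmoid_grad(0.5)     # 0.5 is an arbitrary pre-activation value

print(f"{grad:.2e}")              # ~2.6e-13: almost nothing reaches the first layer
```

ReLU activations and residual connections preserve more of this signal, which is why the definition lists them as mitigations.
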
Embodied AI Advanced

AI systems that perceive and act in the physical world through sensors and actuators.

Robotics & Embodied AI
Unsupervised Learning Intermediate

Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.

Machine Learning
Empirical Risk Minimization Intermediate

Minimizing average loss on training data; can overfit when data is limited or biased.

Optimization
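
In symbols (a standard textbook formulation, not taken from this entry): given a loss function, a hypothesis class, and training pairs (x_i, y_i), ERM minimizes the average training loss.

```latex
\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \hat{R}(f),
\qquad
\hat{R}(f) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr)
```

When the sample is small or biased, this average can diverge from the true risk, which is the overfitting caveat in the definition.
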
Overfitting Intermediate

When a model fits noise/idiosyncrasies of training data and performs poorly on unseen data.

Foundations & Theory
Federated Learning Intermediate

Training across many devices/silos without centralizing raw data; aggregates updates, not data.

Foundations & Theory
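
A minimal sketch of the server-side aggregation step, assuming a FedAvg-style weighted average in plain NumPy (the client parameter vectors and sample counts below are made up; production systems add secure aggregation, compression, and often differential privacy).

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters; raw data never leaves the clients."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                    # shape: (num_clients, num_params)
    coeffs = np.array(client_sizes, dtype=float) / total  # weight by local dataset size
    return coeffs @ stacked                               # aggregated global parameters

# Hypothetical round: three clients send parameter vectors plus their local sample counts.
updates = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
counts = [100, 50, 150]
print(federated_average(updates, counts))
```

Weighting by local sample count is the FedAvg choice; unweighted averaging is also used.
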
Encryption in Transit/At Rest Intermediate

Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.

Security & Privacy
Data Leakage Intermediate

When information from evaluation data improperly influences training, inflating reported performance.

Foundations & Theory
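
One concrete and common form, sketched with scikit-learn's StandardScaler and train_test_split: fitting preprocessing on the full dataset before splitting lets test-set statistics leak into training and inflates the reported score.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import numpy as np

X = np.random.randn(1000, 5)
y = np.random.randint(0, 2, size=1000)

# Leaky: the scaler sees test-set statistics before the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.2, random_state=0)

# Leak-free: split first, fit the scaler on training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
```
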
Data Augmentation Intermediate

Expanding training data via transformations (flips, noise, paraphrases) to improve robustness.

Foundations & Theory
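
A minimal sketch in NumPy with two illustrative transformations (horizontal flip and additive Gaussian noise); text data would use paraphrasing or synonym swaps instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, noise_std=0.05):
    """Return the original batch plus flipped and noised copies (labels are reused)."""
    flipped = images[:, :, ::-1]                               # horizontal flip, shape (N, H, W)
    noisy = images + rng.normal(0.0, noise_std, images.shape)  # additive Gaussian noise
    return np.concatenate([images, flipped, noisy], axis=0)

batch = rng.random((8, 32, 32))          # 8 grayscale 32x32 images in [0, 1]
print(augment(batch).shape)              # (24, 32, 32): three variants per image
```
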
Synthetic Data Intermediate

Artificially created data used to train/test models; helpful for privacy and coverage, risky if unrealistic.

Foundations & Theory
Data Governance Intermediate

Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.

Foundations & Theory
Data Lineage Intermediate

Tracking where data came from and how it was transformed; key for debugging and compliance.

Foundations & Theory
Data Poisoning Intermediate

Maliciously inserting or altering training data to implant backdoors or degrade performance.

Foundations & Theory
Data Scaling Intermediate

Improving model performance by training on more data, typically alongside scaling model size and compute; see the formula sketch below.

AI Economics & Strategy
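
Neural scaling-law studies often summarize this with a saturating power law in the dataset size (functional form only; the constants below are placeholders fitted per model family, not measured values).

```latex
L(D) \;\approx\; \left(\frac{D_c}{D}\right)^{\alpha_D} + L_{\infty}
```

Here L is test loss, D is dataset size, and D_c, alpha_D, L_inf are fitted constants; the form implies diminishing returns from data alone.
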
Artificial Intelligence Intermediate

The field of building systems that perform tasks associated with human intelligence—perception, reasoning, language, planning, and decision-making—via algorithms…

Foundations & Theory
Machine Learning Intermediate

A subfield of AI where models learn patterns from data to make predictions or decisions, improving with experience rather than explicit rule-coding.

Machine Learning
Supervised Learning Intermediate

Learning a function from input-output pairs (labeled data), optimizing performance on predicting outputs for unseen inputs.

Machine Learning
Self-Supervised Learning Intermediate

Learning from data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.

Machine Learning
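
A minimal sketch of how next-token pseudo-labels are carved out of raw text (the token IDs below are made up): both the inputs and the targets come from the sequence itself, so no human annotation is needed.

```python
def next_token_pairs(token_ids):
    """Build (context, target) training pairs from a raw token sequence."""
    return [(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))]

sequence = [12, 7, 93, 4, 55]            # hypothetical token IDs from a tokenizer
for context, target in next_token_pairs(sequence):
    print(context, "->", target)
# [12] -> 7, [12, 7] -> 93, ...
```

Masked modeling works the same way, except the target is a token hidden inside the context rather than the next one.
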
Online Learning Intermediate

Learning where data arrives sequentially and the model updates continuously, often under changing distributions.

Machine Learning
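
A minimal sketch, assuming a linear model updated with one SGD step per arriving example (pure NumPy; the data stream below is synthetic, and real systems add learning-rate schedules and drift handling).

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                          # linear model weights, updated on the fly
lr = 0.01

def stream():
    """Hypothetical data stream: features and a noisy linear target, one example at a time."""
    true_w = np.array([1.0, -2.0, 0.5])
    while True:
        x = rng.normal(size=3)
        yield x, x @ true_w + rng.normal(scale=0.1)

for step, (x, y) in zip(range(5000), stream()):
    error = x @ w - y                    # prediction error on the newest example
    w -= lr * error * x                  # single SGD step; no stored dataset, no epochs

print(np.round(w, 2))                    # close to the true weights [1.0, -2.0, 0.5]
```
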
Meta-Learning Intermediate

Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.

Machine Learning
Domain Shift Intermediate

A mismatch between training and deployment data distributions that can degrade model performance.

MLOps & Infrastructure
Objective Function Intermediate

A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.

Optimization
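
A common general form, with the notation assumed here rather than taken from the entry: expected loss over the data distribution plus an optional regularization penalty weighted by lambda.

```latex
J(\theta) \;=\; \mathbb{E}_{(x,\,y) \sim \mathcal{D}}\bigl[\ell\bigl(f_\theta(x),\, y\bigr)\bigr]
\;+\; \lambda\, \Omega(\theta)
```

In practice the expectation is approximated by an average over the training set, i.e. the empirical risk above.
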
Underfitting Intermediate

When a model cannot capture underlying structure, performing poorly on both training and test data.

Foundations & Theory
Bias–Variance Tradeoff Intermediate

A conceptual framework decomposing expected prediction error into systematic error (bias), sensitivity to the particular training sample (variance), and irreducible noise; see the decomposition below.

Foundations & Theory
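
For squared error the decomposition has a standard closed form, stated for a fixed input x with the expectation taken over random training sets and label noise.

```latex
\mathbb{E}\bigl[(y - \hat{f}(x))^2\bigr]
\;=\;
\underbrace{\bigl(\mathbb{E}[\hat{f}(x)] - f(x)\bigr)^2}_{\text{bias}^2}
\;+\;
\underbrace{\mathbb{E}\bigl[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\bigr]}_{\text{variance}}
\;+\;
\underbrace{\sigma^2}_{\text{noise}}
```

Higher-capacity models typically lower bias at the cost of higher variance, and vice versa.
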
Generalization Intermediate

How well a model performs on new data drawn from the same (or similar) distribution as training.

Foundations & Theory
Train/Validation/Test Split Intermediate

Separating data into training (fit), validation (tune), and test (final estimate) to avoid leakage and optimism bias.

Evaluation & Benchmarking
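
A minimal sketch using scikit-learn's train_test_split applied twice (the 15% ratios are arbitrary): the test set is held out first and touched only once, for the final estimate.

```python
from sklearn.model_selection import train_test_split
import numpy as np

X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# First carve off a held-out test set, then split the remainder into train/validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.15, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.15, random_state=42)

# Fit on X_train, tune hyperparameters on X_val, report a single final score on X_test.
print(len(X_train), len(X_val), len(X_test))   # roughly 722 / 128 / 150
```
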
Tool Use Intermediate

Letting an LLM call external functions/APIs to fetch data, compute, or take actions, improving reliability.

Agents & Autonomy
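
A minimal sketch of the dispatch side, with a made-up tool-call format and a hypothetical get_weather function (real frameworks and function-calling APIs define their own schemas).

```python
import json

def get_weather(city: str) -> str:
    """Hypothetical external tool the model is allowed to call."""
    return f"Sunny, 21°C in {city}"      # a real tool would hit a weather API

TOOLS = {"get_weather": get_weather}

# Assume the LLM replied with a structured tool call instead of plain text.
model_output = '{"tool": "get_weather", "arguments": {"city": "Lisbon"}}'

call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)                            # fed back to the model as the tool result
```

The key point is that the model emits a structured request and the surrounding code executes it and returns the result, rather than the model guessing the answer.
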
Fine-Tuning Intermediate

Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.

Large Language Models
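
A minimal sketch of the update loop in PyTorch, with a tiny stand-in network and synthetic labels in place of a real pretrained model and task dataset; parameter-efficient methods such as LoRA instead update small adapter matrices while freezing most weights.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a pretrained model; in practice you would load real pretrained weights here.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Tiny synthetic stand-in for a labeled, task-specific dataset.
X = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # small LR helps avoid catastrophic forgetting
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                  # a few passes over the task data
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)   # adapt the pretrained weights to the new labels
        loss.backward()
        optimizer.step()
```
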