Model execution path in production.
Cost to run models in production.
Mathematical foundation for ML involving vector spaces, matrices, and linear transformations.
Factorizes a matrix into orthogonal factors and a diagonal matrix of singular values; used in embeddings and compression.
Number of linearly independent rows or columns.
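The two definitions above connect directly: the rank of a matrix equals its number of nonzero singular values, and truncating the SVD to the largest singular values is the standard route to compression. A minimal NumPy sketch (the matrix values are arbitrary illustration data):

```python
import numpy as np

# Illustrative matrix whose two columns are linearly independent, so rank is 2.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# SVD: A = U @ diag(s) @ Vt, with orthonormal columns in U and rows in Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The rank is the number of nonzero singular values.
rank = int(np.sum(s > 1e-10))

# Keeping only the largest singular value gives the best rank-1
# approximation -- the idea behind SVD-based compression and embeddings.
A1 = s[0] * np.outer(U[:, 0], Vt[0])

# Using all components reconstructs A exactly (up to float error).
assert np.allclose(U @ np.diag(s) @ Vt, A)
```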
Devices measuring physical quantities (vision, lidar, force, IMU, etc.).
High-fidelity virtual model of a physical system.
Randomizing simulation parameters to improve real-world transfer.
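Domain randomization reduces to sampling a fresh set of simulator parameters per training episode. A minimal sketch; the parameter names and ranges here are hypothetical, not from any particular simulator:

```python
import random

def sample_sim_params(rng):
    # Hypothetical simulation parameters; names and ranges are illustrative.
    return {
        "friction": rng.uniform(0.4, 1.2),         # surface friction coefficient
        "mass_kg": rng.uniform(0.8, 1.5),          # object mass
        "light_intensity": rng.uniform(0.2, 2.0),  # rendering brightness
        "camera_noise_std": rng.uniform(0.0, 0.05) # per-pixel sensor noise
    }

rng = random.Random(0)
# Each episode runs in a differently randomized simulator, so the policy
# cannot overfit to one exact physics/rendering configuration and is more
# likely to transfer to the real world.
episodes = [sample_sim_params(rng) for _ in range(3)]
```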
RL using learned or known environment models.
Estimating robot position within a map.
AI that ranks patients by urgency.
AI-assisted review of legal documents.
Differences between training and deployed patient populations.
AI predicting crime patterns; highly controversial because models trained on historical policing data can reinforce biased enforcement.
Predicting case success probabilities.
Requirement to reveal AI usage in legal decisions.
Identifying suspicious transactions.
AI applied to scientific problems.
AI proposing scientific hypotheses.
A conceptual framework decomposing expected error into squared systematic error (bias²), sensitivity to the training sample (variance), and irreducible noise.
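The decomposition can be estimated empirically by refitting a model on many fresh training sets and measuring how its prediction at one point varies. A self-contained sketch under illustrative assumptions (sine target, Gaussian noise, comparing a global-mean predictor against 1-nearest-neighbour):

```python
import math
import random

rng = random.Random(0)
f = lambda x: math.sin(2 * math.pi * x)     # true target (illustrative choice)
x_test, noise_sd, n, trials = 0.3, 0.3, 30, 1000

def predict(model):
    # Draw a fresh noisy training set, then predict at x_test.
    xs = [rng.random() for _ in range(n)]
    ys = [f(x) + rng.gauss(0, noise_sd) for x in xs]
    if model == "mean":   # predicts the global mean: high bias, low variance
        return sum(ys) / n
    # 1-nearest-neighbour: low bias, high variance
    i = min(range(n), key=lambda i: abs(xs[i] - x_test))
    return ys[i]

results = {}
for model in ("mean", "1nn"):
    preds = [predict(model) for _ in range(trials)]
    mu = sum(preds) / trials
    bias_sq = (mu - f(x_test)) ** 2                        # systematic error
    variance = sum((p - mu) ** 2 for p in preds) / trials  # data sensitivity
    results[model] = (bias_sq, variance)
```

The mean predictor ignores x entirely (large bias², small variance); the nearest-neighbour predictor tracks the data closely (small bias², large variance), illustrating the trade-off.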
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
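The core of k-fold cross-validation is just an index-splitting scheme in which every example serves as validation data exactly once. A dependency-free sketch:

```python
def k_fold_indices(n, k):
    """Yield (train_idx, val_idx) index lists for k roughly equal folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

# 10 examples, 3 folds: every example lands in exactly one validation fold.
# Training/evaluating once per fold yields k scores whose spread estimates
# performance variability.
folds = list(k_fold_indices(10, 3))
```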
Halting training when validation performance stops improving to reduce overfitting.
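Early stopping is usually implemented with a patience counter over the validation-loss history. A minimal sketch (the loss values are made up for illustration):

```python
def early_stop(val_losses, patience=2):
    """Return the epoch at which training stops: the first epoch after which
    validation loss has failed to improve for `patience` consecutive epochs,
    or the final epoch if that never happens."""
    best = float("inf")
    bad = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bad = loss, 0     # improvement: reset the counter
        else:
            bad += 1                # no improvement this epoch
            if bad >= patience:
                return epoch
    return len(val_losses) - 1

# Loss improves, then degrades: training halts after two bad epochs,
# never reaching epoch 5.
stop = early_stop([1.0, 0.8, 0.7, 0.75, 0.9, 0.6], patience=2)
```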
Nonlinear functions enabling networks to approximate complex mappings; ReLU variants dominate modern DL.
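The ReLU family is simple enough to state directly; a sketch of three common variants (the GELU line uses the widespread tanh approximation):

```python
import math

def relu(x):
    # Zero for negative inputs, identity otherwise.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small negative slope avoids completely "dead" units.
    return x if x > 0 else alpha * x

def gelu(x):
    # Tanh approximation of GELU, common in transformer implementations.
    return 0.5 * x * (1 + math.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3)))
```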
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
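The standard formulation is "inverted" dropout, which rescales surviving activations during training so that no adjustment is needed at inference time. A minimal sketch:

```python
import random

def dropout(activations, p, training=True, rng=random):
    """Inverted dropout: zero each unit with probability p during training
    and scale survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return list(activations)   # identity at inference time
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [0.5, -1.0, 2.0, 0.8]
train_out = dropout(acts, p=0.5, rng=random.Random(0))  # some units zeroed
eval_out = dropout(acts, p=0.5, training=False)         # unchanged
```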
Mechanism that computes context-aware mixtures of representations; parallelizes well across a sequence and captures long-range dependencies, at compute cost quadratic in sequence length.
Attention where queries/keys/values come from the same sequence, enabling token-to-token interactions.
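Both definitions reduce to a few matrix products. A NumPy sketch of single-head scaled dot-product self-attention, with random projection weights standing in for learned parameters:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: queries, keys and values are all
    projections of the same sequence X (shape: tokens x d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V   # each row: context-aware mixture of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                  # shape (4, 8)
```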
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
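Few-shot prompting is purely a matter of prompt construction: labeled examples are laid out in a fixed pattern and the model completes the pattern for a new input, with no weight updates. A sketch using a hypothetical sentiment task:

```python
def few_shot_prompt(examples, query):
    """Build an in-context-learning prompt: labeled examples followed by the
    new input; the model infers the task from the pattern alone."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {query}", "Sentiment:"]   # model completes this line
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("Loved every minute.", "positive"), ("A total bore.", "negative")],
    "Surprisingly good.",
)
```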
Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
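The retrieval half of RAG can be sketched with a toy similarity search; real systems use a learned embedding model and a vector database, whereas this illustration substitutes bag-of-words counts and cosine similarity:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts (stand-in for a learned model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "The Eiffel Tower is in Paris.",
    "Dropout regularizes neural networks.",
]
context = retrieve("where is the eiffel tower", docs)
# The generator is then prompted with `context` plus the question, grounding
# its answer in retrieved text rather than parametric memory alone.
```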
Fine-tuning on (prompt, response) pairs to align a model with instruction-following behaviors.