Results for "multiple samples"
A motion planner that finds feasible paths by randomly sampling the configuration space (e.g., RRT, PRM) rather than discretizing it exhaustively.
Measures a hypothesis class's capacity to fit random noise (random ±1 labels); used to bound generalization error.
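The quantity described is commonly written as the empirical Rademacher complexity of a function class F on a sample S (notation here follows the standard textbook form, not this glossary):

```latex
\hat{\mathfrak{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i\, f(x_i)\right],
\qquad \sigma_i \in \{-1, +1\} \text{ uniform and independent.}
```

A class that can correlate well with every random sign pattern σ has high complexity, and generalization bounds grow with this value.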
How well a model performs on new data drawn from the same (or similar) distribution as training.
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
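The split-and-evaluate loop can be sketched as minimal k-fold cross-validation; `train_fn` and `score_fn` are placeholder callables for illustration, not a specific library's API:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(xs, ys, train_fn, score_fn, k=5):
    """Train on k-1 folds, score on the held-out fold, repeated k times.

    Returns the mean and variance of the fold scores, which is exactly
    the 'performance variability' estimate the definition refers to.
    """
    scores = []
    for fold in k_fold_indices(len(xs), k):
        held = set(fold)
        train = [i for i in range(len(xs)) if i not in held]
        model = train_fn([xs[i] for i in train], [ys[i] for i in train])
        scores.append(score_fn(model, [xs[i] for i in fold],
                               [ys[i] for i in fold]))
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean, var
```

Reporting the variance (or standard deviation) across folds, not just the mean, is what makes the estimate robust.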
One complete traversal of the training dataset during training.
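One epoch can be sketched as a shuffle-and-batch loop; `step_fn` stands in for a gradient step and is purely illustrative:

```python
import random

def run_epoch(data, batch_size, step_fn, seed=0):
    """One epoch: shuffle the dataset, then visit every example exactly
    once in mini-batches, calling step_fn (one update) per batch."""
    order = list(range(len(data)))
    random.Random(seed).shuffle(order)
    losses = []
    for start in range(0, len(order), batch_size):
        batch = [data[i] for i in order[start:start + batch_size]]
        losses.append(step_fn(batch))
    return sum(losses) / len(losses)  # mean loss over the epoch
```

Training for N epochs means repeating this loop N times, usually with a fresh shuffle each time.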
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
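The standard "inverted" variant of this technique can be sketched in a few lines (a plain-Python illustration, not any framework's layer):

```python
import random

def dropout(activations, p, training=True, rng=random):
    """Inverted dropout: during training, zero each unit with probability p
    and scale survivors by 1/(1-p) so the expected activation is unchanged.
    At inference time the layer is the identity."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

Because units cannot rely on specific partners surviving, co-adapted feature detectors are discouraged, which is the regularizing effect the definition describes.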
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
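One common agreement statistic for two labelers is Cohen's kappa, which corrects raw agreement for chance; a self-contained sketch:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two labelers: observed agreement corrected for
    the agreement expected by chance given each labeler's label frequencies.
    1.0 = perfect agreement, 0.0 = no better than chance."""
    assert len(a) == len(b)
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_chance = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    if p_chance == 1.0:
        return 1.0  # both labelers constant and identical
    return (p_obs - p_chance) / (1 - p_chance)
```

A kappa near zero on a labeling task is the signal the definition mentions: the task is ambiguous or the annotation guidelines need work.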
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
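The "aggregates updates, not data" step can be sketched as FedAvg-style server aggregation; weights are plain lists here for illustration:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: each client trains locally and sends only
    its weight vector; the server averages the vectors weighted by local
    dataset size. Raw data never leaves a client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            agg[j] += (n / total) * w[j]
    return agg
```

In a full round, the server would broadcast `agg` back to clients as the new global model and repeat.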
Coordinating tools, models, and steps (retrieval, calls, validation) to deliver reliable end-to-end behavior.
Assigning labels per pixel (semantic) or per instance (instance segmentation) to map object boundaries.
Attention variants (e.g., sparse, sliding-window, or linearized attention) that let models handle longer documents without the quadratic cost of full self-attention.
Routes inputs to subsets of parameters for scalable capacity.
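The routing idea can be sketched as a sparse mixture-of-experts forward pass; `gate_fn` and the `experts` callables are illustrative stand-ins for learned networks:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def moe_forward(x, gate_fn, experts, k=2):
    """Sparse mixture of experts: a gate scores every expert, only the
    top-k experts are actually run, and their outputs are combined with
    renormalized gate weights. Total capacity grows with the number of
    experts while per-input compute stays at k expert calls."""
    scores = gate_fn(x)
    top = sorted(range(len(experts)), key=lambda i: scores[i],
                 reverse=True)[:k]
    weights = softmax([scores[i] for i in top])
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

Real systems add load-balancing losses so the gate does not collapse onto a few favorite experts; that part is omitted here.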
A Transformer applied to images by splitting them into fixed-size patches and embedding each patch as a token.
Many simple, decentralized agents whose local interactions produce emergent collective intelligence.
A coordination pattern in which agents communicate indirectly by reading and writing shared state (e.g., a blackboard) rather than by direct messages.
A nonzero vector whose direction is unchanged by a linear transformation; the transformation only scales it by the corresponding eigenvalue (Av = λv).
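The dominant such vector can be found by power iteration, which simply applies the matrix repeatedly and renormalizes; a dependency-free sketch:

```python
def mat_vec(A, v):
    """Matrix-vector product for A as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=100):
    """Power iteration: repeatedly applying A and renormalizing converges
    to the dominant eigenvector -- the direction A leaves unchanged,
    scaled by the largest-magnitude eigenvalue."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = mat_vec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient v.(Av) estimates the eigenvalue (v is unit length)
    lam = sum(x * y for x, y in zip(v, mat_vec(A, v)))
    return lam, v
```

Convergence is fast when the largest eigenvalue is well separated from the second largest.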
The matrix of a scalar function's second partial derivatives, capturing local curvature; used in second-order optimization.
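For a scalar function f, each entry is the mixed second partial derivative d²f/dxᵢdxⱼ; a finite-difference sketch (illustrative, not how production autodiff computes it):

```python
def hessian(f, x, h=1e-4):
    """Finite-difference Hessian of scalar f at point x: the matrix of
    second partial derivatives, i.e. the local curvature of f."""
    n = len(x)
    def fd(i, j):
        # Central second difference for d^2 f / dx_i dx_j.
        xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
        xpp[i] += h; xpp[j] += h
        xpm[i] += h; xpm[j] -= h
        xmp[i] -= h; xmp[j] += h
        xmm[i] -= h; xmm[j] -= h
        return (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return [[fd(i, j) for j in range(n)] for i in range(n)]
```

Newton-style optimizers use this matrix (or approximations to it) to rescale gradient steps by local curvature.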
A single worked example included in the prompt to guide the model's output format and behavior (one-shot prompting).
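A one-shot prompt can be assembled as plain string formatting; the template layout below is one common convention, not a required format:

```python
def one_shot_prompt(instruction, example_input, example_output, query):
    """Build a one-shot prompt: a single worked example demonstrates the
    expected input/output format before the real query is posed."""
    return (
        f"{instruction}\n\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        f"Input: {query}\n"
        f"Output:"
    )
```

With zero examples this becomes zero-shot prompting; with several, few-shot.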
Coordinating models, tools, and control logic into a coherent workflow.
Software pipeline converting raw sensor data into structured representations.
Computing joint angles for desired end-effector pose.
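For a planar two-link arm this has a closed-form solution via the law of cosines; a self-contained sketch with a forward-kinematics check:

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Analytic inverse kinematics for a planar 2-link arm: given a target
    (x, y) for the end effector and link lengths l1, l2, return joint
    angles (theta1, theta2). Raises ValueError if the target is out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    if elbow_up:
        theta2 = -theta2  # pick one of the two mirror solutions
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Arms with more joints generally lack closed forms and are solved numerically (e.g., Jacobian-based iteration), which this sketch does not cover.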
A deep learning system that predicts a protein's 3D structure from its amino-acid sequence.
A failure mode in which multiple agents settle on an outcome worse for all than what coordinated behavior could achieve.
The tendency of goal-directed systems to acquire control and resources as instrumentally useful subgoals, whatever the final objective.
A system that perceives state, selects actions, and pursues goals—often combining LLM reasoning with tools and memory.