Results for "full pass through data"
Systematic error introduced by simplifying assumptions in a learning algorithm.
A narrow minimum often associated with poorer generalization.
Built-in assumptions guiding learning efficiency and generalization.
Learns the score of the data distribution, ∇ₓ log p(x), and uses it for generative sampling.
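A minimal sketch of why the score is enough to sample. Assumes a 1-D standard normal target whose score is known in closed form (−x); a real score-based model would estimate this function from data. Langevin dynamics then nudges a point uphill on log-density plus noise:

```python
import numpy as np

# Hypothetical sketch: for N(0, 1), log p(x) = -x^2/2 + const, so the
# score is d/dx log p(x) = -x. Langevin dynamics turns a score into a
# sampler: x <- x + (eps/2) * score(x) + sqrt(eps) * noise.
def score(x):
    return -x  # exact score of N(0, 1); a trained model would approximate this

rng = np.random.default_rng(0)
eps = 0.1
x = 5.0  # start far from the mode
samples = []
for _ in range(4000):
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal()
    samples.append(x)
samples = np.array(samples[1000:])  # discard burn-in; rest ~ N(0, 1)
```

After burn-in the chain's samples have mean near 0 and standard deviation near 1, matching the target.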
Two-network setup where generator fools a discriminator.
Exact likelihood generative models using invertible transforms.
Startup latency for services.
Finding mathematical equations from data.
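A toy illustration of the idea, assuming an exhaustive search over a tiny hand-written library of candidate expressions; real symbolic-regression systems search a far larger expression space (e.g. with genetic programming):

```python
import numpy as np

# Hypothetical sketch: recover the data-generating equation by scoring
# each candidate expression's mean squared error against the data.
x = np.linspace(1, 5, 20)
y = x**2 + 3  # the "unknown" equation that produced the data

candidates = {
    "x + 3":     lambda x: x + 3,
    "x**2 + 3":  lambda x: x**2 + 3,
    "3 * x":     lambda x: 3 * x,
    "exp(x)":    lambda x: np.exp(x),
}
best = min(candidates,
           key=lambda name: np.mean((candidates[name](x) - y) ** 2))
print(best)  # the expression with the lowest error, here "x**2 + 3"
```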
Tracking where data came from and how it was transformed; key for debugging and compliance.
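One way this can look in code, as a hypothetical sketch: carry a lineage log alongside the data so every transformation records what was done, which is what a debugging session or compliance query needs to reconstruct. The `TrackedData` class and its method names are illustrative, not from any particular library:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of data lineage: each transform returns a new
# object whose log extends the old one, so earlier snapshots stay intact.
@dataclass
class TrackedData:
    rows: list
    lineage: list = field(default_factory=list)

    def transform(self, description, fn):
        return TrackedData(fn(self.rows), self.lineage + [description])

d = TrackedData([3, 1, 2], lineage=["loaded from source"])
d = d.transform("dropped values < 2", lambda rows: [v for v in rows if v >= 2])
d = d.transform("sorted ascending", sorted)
print(d.lineage)  # full history: load, filter, sort
```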
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
Letting an LLM call external functions/APIs to fetch data, compute, or take actions, improving reliability.
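The host-side half of that loop can be sketched as follows. This is a hypothetical dispatcher, not any vendor's API: `model_output` stands in for an actual LLM response, and `get_weather` is a stubbed data source:

```python
import json

# Hypothetical sketch of tool calling: the model emits a JSON "call",
# the host runs the matching function, and the result would be fed back
# into the model's context for the next turn.
def get_weather(city):
    return {"city": city, "temp_c": 21}  # stub standing in for a real API

TOOLS = {"get_weather": get_weather}

model_output = '{"tool": "get_weather", "arguments": {"city": "Oslo"}}'
call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # sent back to the model as the tool's observation
```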
Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.
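The reward-modeling step can be sketched with a toy Bradley–Terry fit, minimizing −log σ(r(chosen) − r(rejected)) so preferred responses score higher. The features here are synthetic stand-ins for response embeddings; a real reward model would be a neural network over text:

```python
import numpy as np

# Hypothetical sketch: fit a linear reward r(x) = w . f(x) from
# preference pairs via the Bradley-Terry loss -log sigmoid(r_c - r_r).
rng = np.random.default_rng(0)
f_chosen = rng.standard_normal((100, 3)) + 1.0   # toy "good response" features
f_rejected = rng.standard_normal((100, 3)) - 1.0 # toy "bad response" features

w = np.zeros(3)
for _ in range(500):
    diff = (f_chosen - f_rejected) @ w
    p = 1 / (1 + np.exp(-diff))                  # P(chosen preferred)
    grad = -((1 - p)[:, None] * (f_chosen - f_rejected)).mean(0)
    w -= 0.1 * grad                              # gradient descent on the loss

accuracy = ((f_chosen - f_rejected) @ w > 0).mean()
```

The policy-optimization step (e.g. PPO against this reward) is a separate stage not shown here.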
Model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.
Capabilities that appear only beyond certain model sizes.
Persistent directional movement over time.
Identifying abrupt changes in data generation.
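A minimal sketch for a single change point, assuming a mean shift in a synthetic series: choose the split that minimizes within-segment squared error (least-squares segmentation). Real detectors handle multiple change points and other kinds of shifts:

```python
import numpy as np

# Hypothetical sketch: the series jumps from mean 0 to mean 4 at index 60;
# scan every split and keep the one with the lowest within-segment error.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(4, 1, 40)])

def split_cost(t):
    left, right = series[:t], series[t:]
    return ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()

change_point = min(range(5, len(series) - 5), key=split_cost)
print(change_point)  # close to the true change at index 60
```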
Agents communicate via shared state.
Models effects of interventions (do(X=x)).
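A toy structural causal model makes the do() distinction concrete. Assuming the hypothetical graph Z → X, Z → Y, X → Y: observing X mixes in the confounder Z, while intervening with do(X=1) cuts the Z → X edge and sets X directly:

```python
import numpy as np

# Hypothetical sketch of an intervention on a linear SCM:
#   X = 2Z + noise,  Y = 3X + 5Z + noise
rng = np.random.default_rng(0)
n = 100_000
Z = rng.standard_normal(n)

def sample_X(z):
    return 2 * z + rng.standard_normal(n)

def sample_Y(x, z):
    return 3 * x + 5 * z + rng.standard_normal(n)

# Observational world: X inherits Z, so E[Y | X=x] != 3x.
X_obs = sample_X(Z)
Y_obs = sample_Y(X_obs, Z)

# Interventional world: do(X=1) replaces X's mechanism with the constant 1.
X_do = np.ones(n)
Y_do = sample_Y(X_do, Z)
print(Y_do.mean())  # ~3.0, the causal effect of do(X=1), since E[Z] = 0
```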
Maintaining alignment under new conditions.
Task instruction without examples.
Applying learned patterns incorrectly.
Centralized AI expertise group.
External sensing of surroundings (vision, audio, lidar).
Differences between simulated and real physics.
AI systems assisting clinicians with diagnosis or treatment decisions.
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
Configuration choices not learned directly (or not typically learned) that govern training or architecture.
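A sketch of the distinction in practice: the ridge penalty `alpha` below is a hyperparameter, so it is not learned by minimizing training loss but chosen by an outer search against held-out validation error. The data and grid are illustrative:

```python
import numpy as np

# Hypothetical sketch: the weights are learned in closed form for each
# alpha; alpha itself is selected by grid search on a validation split.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(0, 0.5, 80)
X_tr, X_val, y_tr, y_val = X[:60], X[60:], y[:60], y[60:]

def fit_ridge(alpha):
    # closed-form ridge solution on the training split only
    return np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(5), X_tr.T @ y_tr)

grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best_alpha = min(grid,
                 key=lambda a: ((X_val @ fit_ridge(a) - y_val) ** 2).mean())
```

With this low-noise data, heavy shrinkage hurts, so the search settles on one of the small penalties.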
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
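That definition as a formula, for a hypothetical squared-error objective with an L2 regularization term, J(w) = (1/n) Σᵢ (w·xᵢ − yᵢ)² + λ‖w‖²:

```python
import numpy as np

# Hypothetical sketch: mean data loss plus a regularization penalty,
# collapsed into one scalar that training would minimize.
def objective(w, X, y, lam=0.1):
    return np.mean((X @ w - y) ** 2) + lam * np.sum(w ** 2)

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 2.0])
w = np.array([1.0, 2.0])
print(objective(w, X, y))  # 0.0 data loss + 0.1 * (1 + 4) = 0.5
```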
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
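Those four ingredients can be assembled into one template. The template text and `build_prompt` helper are illustrative, not from any particular system:

```python
# Hypothetical sketch of a structured prompt: role, task, constraint,
# and a one-shot example, assembled before the final query.
def build_prompt(task, example_in, example_out, query):
    return (
        "You are a careful technical assistant.\n"   # role
        f"Task: {task}\n"
        "Constraints: answer in one word.\n"         # constraint
        f"Example input: {example_in}\n"             # one-shot example
        f"Example output: {example_out}\n"
        f"Input: {query}\nOutput:"
    )

prompt = build_prompt("Classify sentiment", "I loved it", "positive",
                      "It was awful")
print(prompt)
```

Dropping the example lines from this template would turn it into a zero-shot prompt.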
System design where humans validate or guide model outputs, especially for high-stakes decisions.