A shift in the input (feature) distribution over time, which can silently degrade a deployed model's accuracy.
System that independently pursues goals over time.
Stability established by exhibiting a Lyapunov function that decreases monotonically along system trajectories.
Finding control policies that minimize cumulative cost over time.
Modeling chemical systems computationally.
A mismatch between training and deployment data distributions that can degrade model performance.
A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
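A minimal sketch of the "semantic similarity as geometric closeness" idea, using made-up 3-dimensional vectors (real embeddings have hundreds of dimensions, and the values below are illustrative, not from any trained model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means
    identical direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: related items get nearby directions by construction.
king = [0.9, 0.1, 0.4]
queen = [0.85, 0.15, 0.45]
banana = [0.1, 0.9, 0.2]

print(cosine_similarity(king, queen) > cosine_similarity(king, banana))  # True
```

Cosine similarity is the standard closeness measure because it ignores vector magnitude and compares direction only.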
When information from evaluation data improperly influences training, inflating reported performance.
Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
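A toy sketch of the retrieve-then-generate pattern. Word overlap stands in for vector-DB similarity search, and the documents are invented examples; a real system would embed the query and retrieve by vector similarity before calling a generator:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (a crude stand-in
    for vector similarity search) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Condition generation on retrieved context to ground the answer."""
    context = retrieve(query, documents, k=1)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts light into chemical energy.",
]
print(build_prompt("How tall is the Eiffel Tower?", docs))
```

The generator then answers from the supplied context rather than from parametric memory alone, which is what reduces hallucination.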
Model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.
Ensuring model behavior matches human goals, norms, and constraints, including reducing harmful or deceptive outputs.
Rules and controls around generation (filters, validators, structured outputs) to reduce unsafe or invalid behavior.
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Systematic error introduced by simplifying assumptions in a learning algorithm.
Simultaneous Localization and Mapping: a robot builds a map of an unknown environment while estimating its own pose within it.
Recovering 3D scene structure and camera poses from 2D images.
An agent that dynamically invokes external tools (APIs, search, code execution) while completing a task.
A variable whose value is determined by chance, characterized by a probability distribution.
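A small simulation of a discrete random variable (a fair six-sided die, an assumed example), showing how the sample mean of repeated realizations approaches the expected value E[X] = 3.5:

```python
import random

random.seed(0)  # make the simulation reproducible

# Each call realizes one chance-driven value from {1, ..., 6}.
rolls = [random.randint(1, 6) for _ in range(10_000)]

# The law of large numbers pulls the sample mean toward E[X] = 3.5.
empirical_mean = sum(rolls) / len(rolls)
print(round(empirical_mean, 2))
```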
Minimizing an objective subject to equality and/or inequality constraints.
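A minimal sketch of one constrained-optimization technique, projected gradient descent, on an invented one-dimensional problem: minimize f(x) = (x - 3)^2 subject to x <= 1. Each step moves along the negative gradient, then projects back onto the feasible set:

```python
def minimize_projected(grad, project, x0, lr=0.1, steps=200):
    """Projected gradient descent: take a gradient step, then project
    back onto the feasible set defined by the constraints."""
    x = x0
    for _ in range(steps):
        x = project(x - lr * grad(x))
    return x

# Minimize f(x) = (x - 3)^2 subject to the inequality constraint x <= 1.
grad = lambda x: 2 * (x - 3)
project = lambda x: min(x, 1.0)  # clamp onto the feasible region

x_star = minimize_projected(grad, project, x0=0.0)
print(round(x_star, 4))  # the constrained optimum sits on the boundary x = 1
```

The unconstrained minimum (x = 3) is infeasible, so the solution lands on the constraint boundary, a typical outcome when a constraint is active.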
Exploiting flaws in a reward signal to score highly without achieving the intended goal.
Maintaining alignment under new conditions.
A model that performs well during training and evaluation but fails or misbehaves once deployed.
Using limited human feedback to guide large models.
Reinforcement learning that plans or improves its policy using a learned or known model of the environment's dynamics.
A human-like grasp of everyday physical behavior, such as how objects move, fall, and collide.
Testing AI under actual clinical conditions.
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
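A sketch of the core mechanics of k-fold cross-validation: partition the indices so every sample is held out exactly once, then train on the rest. Index generation only; the model-fitting loop around it is left out:

```python
def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
    each sample appears in exactly one test fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(k_fold_indices(n=10, k=5))
print(len(folds))  # 5 splits, each holding out a distinct 2-sample fold
```

Averaging the k held-out scores estimates performance; their spread estimates its variability.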
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
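A small illustration of the three feature types named above (ratios, lags, aggregations) on an invented daily sales series; the numbers are toy data, not from any real dataset:

```python
# Toy daily series (assumed example data).
sales = [100, 120, 90, 150, 130]
ad_spend = [10, 15, 9, 20, 13]

# Ratio feature: sales generated per unit of ad spend.
efficiency = [s / a for s, a in zip(sales, ad_spend)]

# Lag feature: yesterday's sales, often a strong predictor of today's.
lag_1 = [None] + sales[:-1]

# Rolling aggregation: 3-day mean smooths out day-to-day noise.
rolling_mean_3 = [
    sum(sales[max(0, i - 2) : i + 1]) / len(sales[max(0, i - 2) : i + 1])
    for i in range(len(sales))
]
print(efficiency[0], lag_1[1], rolling_mean_3[2])
```

Each derived column exposes structure (efficiency, recency, trend) that a linear or tree model could not easily recover from the raw columns alone.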
Controls the size of parameter updates; too high causes divergence, too low trains slowly or gets stuck in poor minima.
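The diverge-vs-converge behavior can be seen on the simplest possible objective, f(x) = x^2 (gradient 2x), where each update multiplies x by (1 - 2*lr); the two learning rates below are illustrative choices:

```python
def gradient_descent(lr, steps=50, x0=1.0):
    """Minimize f(x) = x^2 from x0 with a fixed learning rate."""
    x = x0
    for _ in range(steps):
        x = x - lr * 2 * x  # gradient of x^2 is 2x
    return x

good = gradient_descent(lr=0.1)  # |1 - 2*lr| < 1: shrinks toward 0
bad = gradient_descent(lr=1.1)   # |1 - 2*lr| > 1: each step overshoots
print(abs(good) < 1e-3, abs(bad) > 1e3)
```

With lr = 0.1 the iterate contracts by 0.8 per step; with lr = 1.1 it is multiplied by -1.2 and blows up, the one-dimensional picture of training divergence.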
Generates sequences one token at a time, conditioning each new token on those already produced.
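A toy sketch of autoregressive decoding. The bigram table is a made-up stand-in for a trained model's next-token distribution, and decoding here is greedy (always take the single listed continuation):

```python
# Toy bigram "language model": maps the previous token to the next one.
bigram_next = {
    "<s>": "the",
    "the": "cat",
    "cat": "sat",
    "sat": "</s>",
}

def generate(max_tokens=10):
    """Autoregressive decoding: each step conditions on the tokens
    produced so far (here, just the most recent one)."""
    tokens = ["<s>"]
    for _ in range(max_tokens):
        nxt = bigram_next.get(tokens[-1])
        if nxt is None or nxt == "</s>":
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the start symbol

print(generate())  # ['the', 'cat', 'sat']
```

A real model conditions on the whole prefix and samples from a probability distribution, but the token-by-token loop is the same.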