Results for "probability mapping"
Describes likelihoods of random variable outcomes.
Strategy mapping states to actions.
Simultaneous Localization and Mapping: a robot builds a map of an unknown environment while estimating its own pose within that map.
Samples from the k highest-probability tokens to limit unlikely outputs.
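A minimal numpy sketch of the mechanism (the name top_k_sample is illustrative, not a library API):

```python
import numpy as np

def top_k_sample(logits, k, rng=np.random.default_rng()):
    """Illustrative top-k sampling: keep the k highest-probability
    tokens, renormalize, and sample from them."""
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]             # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                      # softmax over the kept tokens only
    return top[rng.choice(len(top), p=probs)]

print(top_k_sample([2.0, 1.0, 0.5, -1.0], k=2))  # only tokens 0 or 1 possible
```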
Probability of the observed data viewed as a function of the model parameters.
Variable whose values depend on chance.
Stochastic generation strategies that trade determinism for diversity; key knobs include temperature and nucleus sampling.
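The temperature knob, sketched in plain numpy (top-k and nucleus sampling are sketched under their own entries):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Illustrative temperature scaling: T < 1 sharpens the distribution
    (more deterministic), T > 1 flattens it (more diverse)."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.0]
print(softmax_with_temperature(logits, 0.5))   # sharper
print(softmax_with_temperature(logits, 2.0))   # flatter
```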
Measures how one probability distribution diverges from another.
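A small numpy illustration for discrete distributions, assuming q is nonzero wherever p is:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i); asymmetric,
    and zero exactly when the distributions coincide."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # terms with p_i = 0 contribute 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # > 0
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
```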
Models that define an unnormalized energy landscape over configurations rather than explicit, normalized probabilities.
Penalizes confident wrong predictions heavily; standard for classification and language modeling.
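A toy numpy illustration of why confident wrong predictions are penalized heavily:

```python
import numpy as np

def cross_entropy(probs, target_index, eps=1e-12):
    """Negative log-probability assigned to the true class; the loss
    grows without bound as the model becomes confidently wrong."""
    return -np.log(probs[target_index] + eps)

print(cross_entropy(np.array([0.70, 0.20, 0.10]), 0))  # confident and right: ~0.36
print(cross_entropy(np.array([0.01, 0.98, 0.01]), 0))  # confident and wrong: ~4.6
```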
Estimating parameters by maximizing likelihood of observed data.
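A worked toy example: for i.i.d. Gaussian data the maximum-likelihood estimates have a closed form (sample mean and biased sample variance):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=10_000)

mu_hat = data.mean()                        # maximizes the Gaussian log-likelihood in mu
sigma2_hat = ((data - mu_hat) ** 2).mean()  # biased variance estimate, the MLE
print(mu_hat, sigma2_hat)                   # close to the true 3.0 and 4.0
```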
Updating beliefs about parameters using observed evidence and prior distributions.
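A minimal worked example with the conjugate Beta-Bernoulli pair (the counts are illustrative):

```python
# A Beta(a, b) prior over a coin's bias, updated with observed
# heads/tails, stays a Beta distribution.
a, b = 1.0, 1.0                      # uniform prior
heads, tails = 7, 3                  # observed evidence
a_post, b_post = a + heads, b + tails
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)                # 0.666..., pulled from the prior 0.5 toward 7/10
```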
Graphical model expressing factorization of a probability distribution.
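For the directed case (a Bayesian network), the factorization reads:

```latex
p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p\!\left(x_i \mid \mathrm{pa}(x_i)\right)
```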
Average value under a distribution.
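In symbols, for a random variable X with density or mass function p:

```latex
\mathbb{E}[X] = \sum_x x\, p(x) \quad \text{(discrete)}, \qquad
\mathbb{E}[X] = \int x\, p(x)\, dx \quad \text{(continuous)}
```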
Learning a function from labeled input-output pairs, optimizing how well it predicts outputs for unseen inputs.
A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
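A toy numpy sketch; the 3-d vectors here merely stand in for learned embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    """Geometric closeness between embedding vectors; near 1 for
    semantically similar items under a well-trained embedding."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

cat, kitten, car = [1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.0, 1.0]
print(cosine_similarity(cat, kitten))  # high
print(cosine_similarity(cat, car))     # low
```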
Designing input features to expose useful structure (e.g., ratios, lags, aggregations), often crucial outside deep learning.
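A toy numpy sketch of two such derived features (the column names are invented for illustration):

```python
import numpy as np

revenue = np.array([100., 120.,  90., 150.])
cost    = np.array([ 80.,  90.,  85., 100.])

margin_ratio = revenue / cost                            # ratio feature
prev_revenue = np.concatenate(([np.nan], revenue[:-1]))  # lag-1 feature
print(margin_ratio)
print(prev_revenue)
```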
A parameterized mapping from inputs to outputs; includes architecture + learned parameters.
The learned numeric values of a model adjusted during training to minimize a loss function.
Failure mode of adversarial training in which the generator produces only a limited variety of outputs.
Exact likelihood generative models using invertible transforms.
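The exact likelihood comes from the change-of-variables formula: for an invertible map f from data x to latent z with base density p_Z,

```latex
\log p_X(x) = \log p_Z\!\left(f(x)\right) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```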
Changing speaker characteristics while preserving content.
Visualization of optimization landscape.
Software pipeline converting raw sensor data into structured representations.
Algorithm computing control actions.
Learning action mapping directly from demonstrations.
Estimating robot position within a map.
Fast approximation of costly simulations.
Samples from the smallest set of tokens whose cumulative probability reaches at least p, adapting the set size to context.
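A minimal numpy sketch (nucleus_sample is an illustrative name, not a library function):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=np.random.default_rng()):
    """Illustrative top-p sampling: keep the smallest prefix of tokens
    (sorted by probability) whose mass reaches p, then renormalize."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]               # descending by probability
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1     # smallest prefix with mass >= p
    kept = order[:cutoff]
    renorm = probs[kept] / probs[kept].sum()
    return int(kept[rng.choice(len(kept), p=renorm)])

print(nucleus_sample([0.5, 0.3, 0.15, 0.05], p=0.7))  # keeps tokens 0 and 1 (mass 0.8)
```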
A hypothesis class is PAC-learnable if an algorithm can, with high probability, output an approximately correct hypothesis from a finite number of samples.
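For a finite hypothesis class H in the realizable setting, a standard bound makes this concrete: a consistent learner achieves error at most ε with probability at least 1 − δ once the sample count m satisfies

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right)
```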