Results for "probability over text"
Generating speech audio from text, with control over prosody, speaker identity, and style.
Converting text into discrete units (tokens) for modeling; subword tokenizers balance vocabulary size and coverage.
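A minimal sketch of greedy longest-match subword tokenization; the vocabulary and function below are invented for illustration, and real subword tokenizers (BPE, WordPiece) learn their vocabularies from data:

```python
def tokenize(text: str, vocab: set) -> list:
    """Greedily match the longest vocabulary piece at each position,
    falling back to single characters for out-of-vocabulary spans."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: emit it as-is
            i += 1
    return tokens

print(tokenize("untokenizable", {"un", "token", "able", "iz"}))
# ['un', 'token', 'iz', 'able'] -- a larger vocabulary yields fewer, longer tokens
```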
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
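For concreteness, one well-known construction in vision is the fast gradient sign method (FGSM), which nudges an input in the direction that increases the loss:

$$x_{\text{adv}} = x + \epsilon \cdot \operatorname{sign}\!\big(\nabla_x \mathcal{L}(f_\theta(x), y)\big),$$

where $\epsilon$ bounds the perturbation size, keeping it imperceptible for small values.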
Converting audio speech into text, often using encoder-decoder or transducer architectures.
Joint vision-language model aligning images and text.
Generating human-like speech from text.
Samples from the k highest-probability tokens to limit unlikely outputs.
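A minimal NumPy sketch of top-k sampling (the function name is illustrative, not from any particular library):

```python
import numpy as np

def top_k_sample(logits: np.ndarray, k: int, rng: np.random.Generator) -> int:
    """Sample a token id from only the k highest-logit entries."""
    top = np.argpartition(logits, -k)[-k:]            # ids of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
print(top_k_sample(logits, k=3, rng=rng))  # only ids 0, 1, 2 can ever be drawn
```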
A hypothesis class is PAC-learnable if a learning algorithm can, with high probability, output an approximately correct hypothesis from finitely many samples.
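In the standard formulation: a hypothesis class $\mathcal{H}$ is PAC-learnable if for every $\epsilon, \delta \in (0, 1)$ there is an algorithm that, given $m \ge \mathrm{poly}(1/\epsilon, 1/\delta)$ i.i.d. samples, returns a hypothesis $h$ with

$$\Pr\big[\mathrm{error}(h) \le \epsilon\big] \ge 1 - \delta.$$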
A measure of randomness or uncertainty in a probability distribution.
Measures the mismatch between true and predicted probability distributions; a standard loss for classification.
Measures how one probability distribution diverges from another.
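For discrete distributions $p$ (true) and $q$ (predicted), the three quantities above and their standard relationship:

$$H(p) = -\sum_x p(x) \log p(x), \qquad H(p, q) = -\sum_x p(x) \log q(x),$$

$$D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)} = H(p, q) - H(p).$$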
Graphical model expressing the factorization of a joint probability distribution into local factors.
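For a directed model (Bayesian network) over variables $x_1, \dots, x_n$, the factorization is

$$p(x_1, \dots, x_n) = \prod_{i=1}^{n} p\big(x_i \mid \mathrm{pa}(x_i)\big),$$

where $\mathrm{pa}(x_i)$ denotes the parents of $x_i$ in the graph; undirected models factorize over cliques instead.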
Probability of treatment assignment given covariates.
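With binary treatment $T$ and covariates $X$, the propensity score is

$$e(x) = \Pr(T = 1 \mid X = x).$$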
Probability of the observed data given the parameters, viewed as a function of the parameters.
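For i.i.d. observations $x_1, \dots, x_n$, the likelihood and the log-likelihood usually maximized in practice:

$$L(\theta) = \prod_{i=1}^{n} p(x_i \mid \theta), \qquad \ell(\theta) = \sum_{i=1}^{n} \log p(x_i \mid \theta).$$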
The relationship between inputs and outputs changes over time, requiring monitoring and model updates.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
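A common form, with symbols chosen here for illustration: expected loss over the data distribution plus an optional regularizer,

$$J(\theta) = \mathbb{E}_{(x, y) \sim \mathcal{D}}\big[\mathcal{L}(f_\theta(x), y)\big] + \lambda R(\theta),$$

estimated in practice by averaging over minibatches.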
A datastore optimized for similarity search over embeddings, enabling semantic retrieval at scale.
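A toy brute-force version in NumPy to make the interface concrete; the class name is invented for this sketch, and production vector databases replace the exhaustive scan with approximate indexes (e.g. HNSW):

```python
import numpy as np

class ToyVectorStore:
    """Exact cosine-similarity search over unit-normalized embeddings."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim))
        self.payloads = []

    def add(self, vector: np.ndarray, payload: str) -> None:
        unit = vector / np.linalg.norm(vector)         # normalize once at insert time
        self.vectors = np.vstack([self.vectors, unit])
        self.payloads.append(payload)

    def search(self, query: np.ndarray, top_k: int = 3):
        q = query / np.linalg.norm(query)
        scores = self.vectors @ q                      # cosine similarity per row
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.payloads[i], float(scores[i])) for i in best]
```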
Observing model inputs/outputs, latency, cost, and quality over time to catch regressions and drift.
The shape of the loss function over parameter space.
Adjusting learning rate over training to improve convergence.
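A sketch of one common choice, linear warmup followed by cosine decay (the hyperparameter defaults are placeholders):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 3e-4,
              warmup_steps: int = 100, min_lr: float = 0.0) -> float:
    """Learning rate at `step`: linear warmup, then cosine decay to min_lr."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```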
Persistent directional movement over time.
Shift in feature distribution over time.
System that independently pursues goals over time.
Eliminating variables by summing or integrating over them.
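In symbols, marginalizing out $y$:

$$p(x) = \sum_y p(x, y) \ \ \text{(discrete)}, \qquad p(x) = \int p(x, y)\, dy \ \ \text{(continuous)}.$$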
Equations governing how system states change over time.
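In continuous time this is typically an ordinary differential equation, with a discrete-time analogue:

$$\frac{d\mathbf{x}}{dt} = f(\mathbf{x}, t), \qquad \mathbf{x}_{t+1} = f(\mathbf{x}_t).$$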
Describes the likelihood of each possible outcome of a random variable.
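For a discrete variable the probabilities sum to one; for a continuous one, a density $f$ integrates to one and assigns probability to intervals:

$$\sum_x p(x) = 1, \qquad \Pr(a \le X \le b) = \int_a^b f(x)\, dx, \qquad \int_{-\infty}^{\infty} f(x)\, dx = 1.$$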