Results for "input change"
A high-priority instruction layer setting overarching behavior constraints for a chat model.
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
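A minimal sketch of the idea: a handful of labeled examples are formatted directly into the prompt, and the model is asked to continue the pattern; no weights change. The sentiment task, template, and example pairs below are illustrative assumptions, not from any particular library.

```python
# Few-shot prompting sketch: examples live inside the prompt itself.
def build_few_shot_prompt(examples, query):
    """Format (input, label) pairs followed by the new query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")  # model completes the label
    return "\n\n".join(lines)

examples = [
    ("Great film, I loved it.", "positive"),
    ("A total waste of time.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Surprisingly touching.")
print(prompt)
```

The resulting string would be sent to the model as-is; the trailing `Sentiment:` cues it to emit a label in the demonstrated format.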
Letting an LLM call external functions/APIs to fetch data, compute, or take actions, grounding its answers in external results and extending its capabilities.
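A sketch of the host side of tool use, under assumptions: the model emits a structured call naming a tool, the host dispatches it against a registry, and the result is returned (in practice, fed back into the conversation). The `TOOLS` registry and the hard-coded call string stand in for real model output.

```python
import json

# Hypothetical tool registry: name -> callable.
TOOLS = {"add": lambda a, b: a + b}

def run_tool_call(message):
    """Parse a JSON tool call emitted by the model and execute it."""
    call = json.loads(message)
    return TOOLS[call["tool"]](**call["args"])

# The model is assumed to have produced this call string.
result = run_tool_call('{"tool": "add", "args": {"a": 2, "b": 3}}')
print(result)  # 5
```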
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
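The standard conversion step can be sketched as a temperature-scaled softmax: dividing logits by a temperature before normalizing is the knob turned during decoding (sampling sharpness) and calibration.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; lower temperature sharpens."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # sums to 1, ordered like the logits
```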
System design where humans validate or guide model outputs, especially for high-stakes decisions.
Generating speech audio from text, with control over prosody, speaker identity, and style.
Skip connections that let gradients bypass layers, enabling training of very deep networks.
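The mechanism reduces to one line: the block's output is added to its input, so the identity path carries the signal (and, in training, the gradient) even when the learned transform contributes little. The weak linear "layer" below is a stand-in for any sub-network.

```python
def layer(x, weight=0.01):
    """A deliberately weak transform; alone it would shrink the signal."""
    return [weight * v for v in x]

def residual_block(x):
    """Output = input + F(input): the skip path preserves the signal."""
    return [xi + fi for xi, fi in zip(x, layer(x))]

out = residual_block([1.0, -2.0, 3.0])
print(out)  # each value is x + 0.01 * x
```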
Early architecture using learned gates for skip connections.
Reusing the same parameters across different parts of a model.
The range of functions a model can represent.
Techniques to handle longer documents without quadratic cost.
Privacy attack that infers sensitive attributes of the training data from a trained model.
Models that define an energy landscape rather than explicit probabilities.
Models that learn to generate samples resembling training data.
Diffusion model trained to remove noise step by step.
Diffusion performed in latent space for efficiency.
Autoencoder using probabilistic latent variables and KL regularization.
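The KL regularizer mentioned here has a closed form for a diagonal-Gaussian posterior q(z|x) = N(mu, sigma^2) against the standard-normal prior: KL = 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2). A sketch, parameterized by log-variance as is conventional:

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q || N(0, I)) for a diagonal Gaussian q."""
    return 0.5 * sum(
        m * m + math.exp(lv) - 1.0 - lv
        for m, lv in zip(mu, log_var)
    )

print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0: q equals the prior
```

The term is zero exactly when the posterior matches the prior and grows as the latent code drifts away, which is what pulls VAE latents toward a smooth, sampleable space.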
Failure mode in GAN training where the generator produces only a limited variety of outputs.
Assigning category labels to images.
Combining signals from multiple modalities.
Generating human-like speech from text.
Attention between different modalities.
CNNs applied to time series.
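The core operation can be sketched as a causal 1-D convolution: left zero-padding makes each output depend only on current and past inputs, so the model never peeks into the future of the series. (For simplicity this computes cross-correlation without kernel flipping, which is the usual deep-learning convention.)

```python
def causal_conv1d(series, kernel):
    """Causal 1-D convolution via left zero-padding."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(series)
    return [
        sum(kernel[j] * padded[t + j] for j in range(k))
        for t in range(len(series))
    ]

# Moving-sum kernel over the last 3 steps.
out = causal_conv1d([1, 2, 3, 4], [1.0, 1.0, 1.0])
print(out)  # [1.0, 3.0, 6.0, 9.0]
```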
Optimal state estimator for linear dynamical systems with Gaussian noise.
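The scalar case makes the recursion concrete: predict, compute the Kalman gain, then blend the prediction with the measurement residual. The process/measurement noise variances below are assumptions chosen for illustration.

```python
def kalman_1d(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter tracking a constant value from noisy readings."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q              # predict: constant state, add process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the measurement residual
        p = (1.0 - k) * p      # update error covariance
        estimates.append(x)
    return estimates

est = kalman_1d([0.9, 1.1, 1.0, 0.95, 1.05])
print(est[-1])  # converges toward the true value near 1.0
```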
Model execution path in production.
Low-latency prediction per request.
Running new model alongside production without user impact.
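A sketch of the request path, under assumptions: the production model alone determines the user-visible response, while the candidate scores the same input and the pair is logged for offline comparison. Both model functions are hypothetical stand-ins.

```python
shadow_log = []

def production_model(x):
    return x * 2        # stand-in for the live model

def candidate_model(x):
    return x * 2 + 1    # stand-in for the new model under test

def handle_request(x):
    result = production_model(x)   # the user sees only this
    shadow_log.append((x, candidate_model(x), result))  # compared offline
    return result

print(handle_request(21))  # 42: user-visible output comes from production
```

In a real system the shadow call would run asynchronously so the candidate's latency or failures cannot affect the user.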
Shift in the distribution of model outputs over time, often an early signal of data or concept drift.
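One common way to quantify such a shift is the population stability index (PSI) over binned score distributions; the bin edges, sample scores, and any alert threshold below are assumptions for illustration.

```python
import math

def psi(expected, actual, edges):
    """Population stability index between two score samples."""
    def shares(xs):
        counts = [0] * (len(edges) - 1)
        for x in xs:
            for i in range(len(edges) - 1):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = max(len(xs), 1)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.2, 0.3, 0.4, 0.5, 0.6]
recent = [0.6, 0.7, 0.8, 0.8, 0.9]
print(psi(baseline, recent, [0.0, 0.25, 0.5, 0.75, 1.0]))  # large: drifted
```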
Cost to run models in production.