Results for "perception input"
Tool use (function calling): letting an LLM call external functions or APIs to fetch data, run computations, or take actions, grounding its responses and improving reliability.
Interpretability: techniques for understanding model decisions, globally (the model as a whole) or locally (a single prediction); important in high-stakes and regulated settings.
Model monitoring: observing model inputs, outputs, latency, cost, and quality over time to catch regressions and drift.
Logits: raw model outputs before they are converted to probabilities; manipulated during decoding (e.g., temperature scaling) and calibration.
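The conversion from raw outputs to probabilities, and the temperature manipulation applied during decoding, can be sketched in NumPy (a minimal illustration; the logit values are made up):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Map raw logits to a probability distribution.

    Dividing by a temperature before the softmax is a common
    decoding-time manipulation: T < 1 sharpens the distribution,
    T > 1 flattens it; calibration methods tune T on held-out data.
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax([2.0, 1.0, 0.1])        # ordinary softmax
sharp = softmax([2.0, 1.0, 0.1], 0.5)   # lower temperature, more peaked
```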
Human-in-the-loop: system design in which humans validate or guide model outputs, especially for high-stakes decisions.
Residual connections: skip connections that let gradients bypass layers, enabling very deep networks.
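The bypass idea fits in a few lines; a toy NumPy block (the tanh layer and the shapes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # toy non-linear transformation standing in for conv/attention/MLP
    return np.tanh(x @ w)

def residual_block(x, w):
    # identity path + learned path: even if `layer` contributes little
    # (or its gradient vanishes), the input still flows through unchanged
    return x + layer(x, w)

x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 8)) * 0.01   # near-zero weights at init
y = residual_block(x, w)             # y starts out close to x
```

Because the block computes input plus transformation, stacking many of them still leaves a direct gradient path from output to input.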
Text-to-speech (TTS): generating speech audio from text, with control over prosody, speaker identity, and style.
Highway networks: an early deep architecture using learned gates to control skip connections.
Parameter sharing: reusing the same parameters across different parts of a model, e.g., convolutional filters across spatial positions or tied embedding weights.
Model capacity (expressivity): the range of functions a model can represent.
Efficient attention: techniques, such as sparse or sliding-window attention, that handle longer documents without quadratic cost.
Attribute inference: attacks that infer sensitive features of a model's training data.
Energy-based models: models that define an energy landscape over configurations rather than explicit probabilities.
Generative models: models that learn to produce samples resembling the training data.
Denoising diffusion models: diffusion models trained to remove noise step by step.
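The step-by-step noising has a closed form per timestep; a NumPy sketch of the forward process (the linear beta schedule and T = 1000 are conventional but illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal fraction

def add_noise(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) directly, without iterating:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    a = alphas_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

x0 = rng.normal(size=(8,))
eps = rng.standard_normal(8)
x_early = add_noise(x0, 10, eps)     # mostly signal
x_late = add_noise(x0, T - 1, eps)   # almost pure noise
```

The denoiser is trained to predict `eps` from `(x_t, t)`; sampling then runs the chain in reverse, removing a little noise at each step.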
Latent diffusion: diffusion performed in a learned latent space for efficiency.
Variational autoencoder (VAE): an autoencoder with probabilistic latent variables and a KL-divergence regularizer.
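The two ingredients in that definition, the probabilistic latent (via the reparameterization trick) and the KL regularizer against a standard normal prior, in NumPy (function names are mine):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, sigma
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    This is the regularizer added to the reconstruction loss."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# the KL term is zero exactly when the posterior matches the prior
kl_zero = kl_to_standard_normal(np.zeros(4), np.zeros(4))
kl_pos = kl_to_standard_normal(np.ones(4), np.zeros(4))
```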
Mode collapse: a GAN failure mode in which the generator produces only a limited variety of outputs.
Image classification: assigning category labels to images.
Multimodal fusion: combining signals from multiple modalities, such as vision, audio, and text.
Cross-attention: attention computed between different modalities or sequences, with queries from one and keys/values from the other.
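Mechanically this is ordinary scaled dot-product attention with the queries taken from one stream and the keys/values from the other; a single-head NumPy sketch (shapes and projection matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(x_q, x_kv, Wq, Wk, Wv):
    """Queries from one modality (e.g. text tokens), keys and values
    from another (e.g. image patches)."""
    Q, K, V = x_q @ Wq, x_kv @ Wk, x_kv @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (n_q, n_kv)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # row-wise softmax
    return w @ V                                       # (n_q, d)

text = rng.normal(size=(5, 16))     # 5 text tokens
patches = rng.normal(size=(9, 16))  # 9 image patches
Wq, Wk, Wv = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
out = cross_attention(text, patches, Wq, Wk, Wv)
```

Each of the 5 text positions ends up with a weighted mixture of the 9 patch values, which is how one modality conditions on the other.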
Speech synthesis: generating human-like speech from text.
Kalman filter: the optimal state estimator for linear dynamical systems with Gaussian noise.
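For the simplest case, a scalar state with a constant-position model, the predict/update cycle fits in a few lines (the noise variances q and r are illustrative values):

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter: x is the state estimate, p its variance,
    q the process-noise variance, r the measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows between steps
        k = p / (p + r)          # Kalman gain: trust in the new measurement
        x = x + k * (z - x)      # update: move estimate toward measurement
        p = (1.0 - k) * p        # update: uncertainty shrinks
        estimates.append(x)
    return estimates

# tracking a constant true value of 5.0 from repeated measurements
est = kalman_1d([5.0] * 200)
```

The gain k balances prior uncertainty against measurement noise, so early estimates move quickly toward the data and later ones change only slightly.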
Temporal convolutional networks: CNNs applied to time-series data.
Inference pipeline: the model execution path in production.
Online inference: low-latency prediction served per request.
Shadow deployment: running a new model alongside the production model on live traffic, without affecting users.
Data drift (covariate shift): a shift in the feature distribution over time.
Prediction drift: a shift in the distribution of model outputs over time.
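Both feature drift and output drift are typically quantified by comparing a live window of values against a reference window; one common statistic is the Population Stability Index (the 0.1/0.2 thresholds are rules of thumb, not standards):

```python
import numpy as np

def psi(reference, live, bins=10):
    """Population Stability Index between a reference sample and a live
    sample of one feature (or of model scores, for prediction drift).
    Rule of thumb: < 0.1 stable, > 0.2 meaningful shift."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf     # catch out-of-range values
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    live_frac = np.histogram(live, edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)  # avoid log(0)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)
stable = rng.normal(0.0, 1.0, 5000)      # same distribution
drifted = rng.normal(1.0, 1.0, 5000)     # mean has shifted by one sigma
```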
Inference cost: the cost of running models in production.