Results for "nonverbal input"
Detects trigger phrases in audio streams.
Control without feedback after execution begins.
Running predictions on large datasets periodically.
Using markers to isolate context segments.
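A minimal sketch of delimiter-based context isolation. The tag names (`<context>`) and helper (`build_prompt`) are illustrative assumptions, not a standard API:

```python
# Hypothetical sketch: wrapping variable or untrusted context in explicit
# markers so instructions and data stay clearly separated in the prompt.
def build_prompt(instruction: str, context: str) -> str:
    """Isolate the context segment with clear delimiters."""
    return (
        f"{instruction}\n\n"
        f"<context>\n{context}\n</context>\n\n"
        "Answer using only the text inside <context>."
    )

prompt = build_prompt(
    "Summarize the document below in one sentence.",
    "Transformers process tokens in parallel via self-attention.",
)
print(prompt)
```

Any unambiguous marker pair works; the point is that the model (and any downstream parsing) can tell where the isolated segment begins and ends.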
A branch of ML using multi-layer neural networks to learn hierarchical representations, often excelling in vision, speech, and language.
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
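A toy sketch of that idea: a linear model updated one example at a time with SGD, tracking a target whose slope drifts mid-stream. All names and constants here are illustrative:

```python
import random

# Online learning sketch: update on each example as it arrives, so the
# model can follow a changing input-output relationship.
w, b, lr = 0.0, 0.0, 0.05

def true_fn(x, t):
    # The underlying relationship drifts at t = 500 (slope 2 -> 3).
    slope = 2.0 if t < 500 else 3.0
    return slope * x + 1.0

random.seed(0)
for t in range(1000):
    x = random.uniform(-1, 1)
    y = true_fn(x, t)
    err = (w * x + b) - y
    # One SGD step on squared error for this single example.
    w -= lr * err * x
    b -= lr * err

print(round(w, 2), round(b, 2))  # w ends near the post-drift slope
```

Because updates never stop, the parameters re-converge after the drift without any retraining pass over old data.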
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
A mismatch between training and deployment data distributions that can degrade model performance.
The relationship between inputs and outputs changes over time, requiring monitoring and model updates.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
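A minimal sketch of inverted dropout on a plain list of activations, assuming a drop rate `p` and a train/eval flag:

```python
import random

def dropout(activations, p=0.5, training=True):
    """Zero each activation with probability p during training,
    scaling survivors by 1/(1-p) so expected values match eval mode."""
    if not training or p == 0.0:
        return list(activations)
    return [0.0 if random.random() < p else a / (1.0 - p)
            for a in activations]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0], p=0.5))
print(dropout([1.0, 2.0, 3.0, 4.0], p=0.5, training=False))
```

At evaluation time the layer is the identity; the 1/(1-p) scaling during training keeps activation magnitudes consistent between the two modes.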
Converting text into discrete units (tokens) for modeling; subword tokenizers balance vocabulary size and coverage.
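A toy greedy longest-match subword tokenizer over a made-up vocabulary. Real subword tokenizers (BPE, WordPiece) learn their vocabularies from data; this only illustrates the splitting step:

```python
def tokenize(word, vocab):
    """Greedily match the longest vocabulary piece at each position,
    falling back to single characters for out-of-vocabulary spans."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # longest match first
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

VOCAB = {"token", "iza", "tion", "learn", "ing"}  # illustrative vocabulary
print(tokenize("tokenization", VOCAB))  # -> ['token', 'iza', 'tion']
```

The single-character fallback guarantees full coverage, which is the trade-off the definition mentions: a small vocabulary can still encode any string, at the cost of longer token sequences.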
Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Predicts masked tokens in a sequence, enabling bidirectional context; often used for embeddings rather than generation.
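A sketch of how training data for that objective is prepared, BERT-style: randomly replace a fraction of tokens with a mask symbol and record the originals as targets. The rate and symbol are conventional choices, not fixed requirements:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Replace ~mask_rate of tokens with mask_token; the model is
    trained to recover the originals from bidirectional context."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets.append(tok)      # loss is computed at this position
        else:
            masked.append(tok)
            targets.append(None)     # no loss at unmasked positions
    return masked, targets

random.seed(0)
print(mask_tokens("the cat sat on the mat".split()))
```

Because context on both sides of a mask is visible, the learned representations are bidirectional, which suits embedding and classification use more than left-to-right generation.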
Maximum number of tokens the model can attend to in one forward pass; constrains long-document reasoning.
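A minimal sketch of one common way to enforce that limit: keep only the most recent tokens. The window size is illustrative, and real systems count subword tokens, not whole words:

```python
def truncate_to_window(tokens, max_tokens):
    """Drop the oldest tokens once the context window is exceeded."""
    return tokens[-max_tokens:]

history = [f"t{i}" for i in range(10)]
print(truncate_to_window(history, 4))  # -> ['t6', 't7', 't8', 't9']
```

Tail truncation is the simplest policy; long-document systems instead chunk, summarize, or retrieve so that material outside the window is not silently lost.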
The text (and possibly other modalities) given to an LLM to condition its output behavior.
A high-priority instruction layer setting overarching behavior constraints for a chat model.
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
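A sketch of that pattern for sentiment classification: the examples live entirely in the prompt string, and no weights change. The labels, format, and example texts are all invented for illustration:

```python
# Few-shot prompting sketch: demonstrations are concatenated into the
# prompt; the model infers the task format from them at inference time.
EXAMPLES = [
    ("great movie, loved it", "positive"),
    ("waste of two hours", "negative"),
]

def few_shot_prompt(examples, query):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")  # model completes this
    return "\n\n".join(lines)

print(few_shot_prompt(EXAMPLES, "surprisingly good"))
```

The trailing incomplete `Sentiment:` line is the cue: the model continues the established pattern, which is why this is often called in-context learning.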
Letting an LLM call external functions/APIs to fetch data, compute, or take actions, improving reliability.
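A minimal sketch of the harness side of that loop, with the model stubbed out. The JSON call format, tool names, and `fake_model` are assumptions for illustration, not any provider's real API:

```python
import json

# Tool-use sketch: the model emits a structured tool call, the harness
# executes it, and the result would be fed back for a final answer.
TOOLS = {"add": lambda a, b: a + b}

def fake_model(prompt):
    # Stand-in for an LLM deciding to call the `add` tool.
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})

def run_with_tools(prompt):
    call = json.loads(fake_model(prompt))
    result = TOOLS[call["tool"]](**call["args"])
    # In a real system this result is appended to the context and the
    # model generates its final reply from it.
    return result

print(run_with_tools("What is 2 + 3?"))  # -> 5
```

The division of labor is the point: the model chooses *what* to call and with which arguments; deterministic code performs the actual computation or side effect.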
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Observing model inputs/outputs, latency, cost, and quality over time to catch regressions and drift.
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
Allows gradients to bypass layers, enabling very deep networks.
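A minimal sketch of a residual (skip) connection, with a toy elementwise transform standing in for a learned layer:

```python
def layer(x):
    """Stand-in for a learned transform inside the block."""
    return [0.1 * v for v in x]

def residual_block(x):
    # Output = input + layer(input): the identity path lets gradients
    # bypass the transform, which is what enables very deep stacks.
    return [xi + fi for xi, fi in zip(x, layer(x))]

print(residual_block([1.0, 2.0]))
```

Because the block computes `x + f(x)`, the layer only has to learn a residual correction to the identity, which is empirically much easier to optimize at depth.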
Generating speech audio from text, with control over prosody, speaker identity, and style.
Early architecture using learned gates for skip connections.
Using the same parameters across different parts of a model.
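One common instance is tied input/output embeddings, sketched here with a toy two-token table. The tokens, vectors, and dot-product scoring are illustrative assumptions:

```python
# Parameter-sharing sketch: one embedding table serves both as the
# input lookup and as the output projection (tied embeddings).
EMBED = {
    "cat": [1.0, 0.0],
    "dog": [0.0, 1.0],
}

def embed(token):
    return EMBED[token]

def output_scores(hidden):
    # Reuse the same table in the output role: score each token by
    # its dot product with the hidden state.
    return {tok: sum(h * e for h, e in zip(hidden, vec))
            for tok, vec in EMBED.items()}

h = embed("cat")
print(output_scores(h))  # "cat" scores highest via the shared weights
```

Sharing halves the parameter count for these two roles and couples their gradients; recurrent layers and convolutions reuse parameters in the same spirit, across time steps and spatial positions respectively.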
The range of functions a model can represent.