Results for "grounded context"
Internal representation of environment layout.
Control shared between human and agent.
Hard constraints preventing unsafe actions.
Protection of private legal communications.
Quantifying financial risk.
Maximum loss not expected to be exceeded, at a given confidence level and over a set horizon, under normal market conditions.
Rules governing auctions.
Collective behavior without central control.
A formal privacy framework ensuring outputs do not reveal much about any single individual’s data contribution.
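The classic mechanism satisfying this framework (differential privacy) adds Laplace noise scaled to the query's sensitivity divided by the privacy budget epsilon. A minimal sketch; the function name and parameters are illustrative, not from the source:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy and noisier output. Names here
    are illustrative; real deployments also track the budget across queries.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)
```

For a counting query (sensitivity 1), releasing `laplace_mechanism(count, 1.0, 0.5)` masks any single individual's presence in the count.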
Physical form contributes to computation.
Mechanisms for retaining context across turns/sessions: scratchpads, vector memories, structured stores.
Mechanism that computes context-aware mixtures of representations; scales well and captures long-range dependencies.
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Samples from the smallest set of tokens whose probabilities sum to p, adapting set size by context.
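This top-p (nucleus) procedure can be illustrated with a minimal NumPy sketch; the function name and interface are illustrative:

```python
import numpy as np

def top_p_sample(probs, p=0.9, rng=None):
    """Sample from the smallest set of tokens whose probabilities sum to >= p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]            # tokens in descending probability
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1  # smallest prefix with mass >= p
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()  # renormalize over the nucleus
    return int(rng.choice(nucleus, p=renorm))
```

Note how the nucleus size adapts: a peaked distribution keeps only a few tokens, while a flat one keeps many.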
Flat, high-dimensional regions of the loss surface where gradients are near zero, slowing training.
Willingness of a system to accept correction or shutdown.
Multiple examples included in the prompt to demonstrate the task.
AI used in sensitive domains requiring compliance.
Ability of a test to correctly detect disease in those who have it (true positive rate).
Using delimiters to isolate context segments within a prompt.
Of predicted positives, the fraction that are truly positive; sensitive to false positives.
Of true positives, the fraction correctly identified; sensitive to false negatives.
Of true negatives, the fraction correctly identified.
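These three metrics (precision, recall, specificity) all fall out of the binary confusion matrix. A minimal sketch with illustrative names:

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and specificity from binary labels (0/1)."""
    tp = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 1)
    fp = sum(1 for t, q in zip(y_true, y_pred) if t == 0 and q == 1)
    fn = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 0)
    tn = sum(1 for t, q in zip(y_true, y_pred) if t == 0 and q == 0)
    precision   = tp / (tp + fp)  # penalized by false positives
    recall      = tp / (tp + fn)  # penalized by false negatives
    specificity = tn / (tn + fp)  # true-negative rate
    return precision, recall, specificity
```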
The degree to which predicted probabilities match observed frequencies (e.g., predictions made with confidence 0.8 are correct ~80% of the time).
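One common way to quantify this is a binned expected calibration error, which averages the gap between confidence and accuracy within probability bins. A minimal sketch; names and the equal-width binning scheme are illustrative:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin-size-weighted average of |accuracy - mean confidence| per bin."""
    total, ece = len(confidences), 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)   # observed frequency
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / total * abs(acc - conf)
    return ece
```

A perfectly calibrated model scores 0; an overconfident one scores higher.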
Number of samples per gradient update; impacts compute efficiency, generalization, and stability.
Attention where queries/keys/values come from the same sequence, enabling token-to-token interactions.
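A minimal NumPy sketch of scaled dot-product self-attention, using identity projections in place of learned query/key/value weight matrices for brevity:

```python
import numpy as np

def self_attention(x):
    """Each output row is a context-aware mixture of all input rows.

    A minimal sketch: real layers apply learned Q/K/V projections to x first.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                     # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ x                                # weighted mixture of values
```

Because every token attends to every other token, the interaction is direct regardless of distance in the sequence.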
An RNN variant using gates to mitigate vanishing gradients and capture longer context.
Injects sequence order into Transformers, since attention alone is permutation-invariant.
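One standard choice is fixed sinusoidal encodings added to the token embeddings. A minimal NumPy sketch, assuming an even model dimension:

```python
import numpy as np

def sinusoidal_positions(n_positions, d_model):
    """Fixed sinusoidal positional encodings (d_model assumed even)."""
    pos = np.arange(n_positions)[:, None]          # position index per row
    i = np.arange(d_model // 2)[None, :]           # frequency index per pair
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dims: sine
    pe[:, 1::2] = np.cos(angles)                   # odd dims: cosine
    return pe
```

These encodings are added to embeddings before the first attention layer, giving otherwise permutation-invariant attention a notion of order.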
The set of tokens a model can represent; impacts efficiency, multilinguality, and handling of rare strings.
The text (and possibly other modalities) given to an LLM to condition its output behavior.