Results for "grounded context"
Feature attribution method grounded in cooperative game theory for explaining predictions in tabular settings.
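The game-theoretic attribution this entry refers to can be computed exactly for a toy model. A minimal sketch: the `predict` function and feature names are hypothetical, and real SHAP implementations approximate this average rather than enumerating every ordering.

```python
from itertools import permutations

def predict(present):
    # Toy additive "model": prediction is a weighted sum of present features.
    weights = {"age": 2.0, "income": 3.0, "tenure": 1.0}
    return sum(weights[f] for f in present)

def shapley_values(features, predict):
    # Average each feature's marginal contribution over all feature orderings.
    contrib = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        present = set()
        for f in order:
            before = predict(present)
            present.add(f)
            contrib[f] += predict(present) - before
    return {f: c / len(perms) for f, c in contrib.items()}

sv = shapley_values(["age", "income", "tenure"], predict)
# → {'age': 2.0, 'income': 3.0, 'tenure': 1.0}
```

For an additive model each feature's Shapley value equals its weight, and the values always sum to predict(all features) minus predict(no features).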
Maximum number of tokens the model can attend to in one forward pass; constrains long-document reasoning.
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
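A minimal instance of such a model — a count-based bigram model rather than a neural one — shows both the probability assignment and the next-token training signal. The corpus and `<s>`/`</s>` boundary tokens are hypothetical choices.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # "Training" by next-token prediction reduces, for a count model,
    # to tallying which token follows which.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def sequence_prob(counts, sentence):
    # Chain rule: P(sequence) is the product of next-token conditionals.
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, nxt in zip(tokens, tokens[1:]):
        total = sum(counts[prev].values())
        prob *= counts[prev][nxt] / total if total else 0.0
    return prob

counts = train_bigram(["the cat sat", "the cat ran"])
```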
Predicts masked tokens in a sequence, enabling bidirectional context; often used for embeddings rather than generation.
Attention and chunking techniques (e.g., sparse, sliding-window, or linear attention) that handle longer documents without the quadratic cost of full self-attention.
Stochastic generation strategies that trade determinism for diversity; key knobs include temperature and nucleus sampling.
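The two knobs named here can be sketched in one sampler. Token names and logits are hypothetical; production decoders sort once and operate on arrays rather than dicts.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=random):
    # Temperature rescales logits before the softmax; lower = sharper.
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    probs = {t: math.exp(l - m) for t, l in scaled.items()}
    z = sum(probs.values())
    probs = {t: p / z for t, p in probs.items()}
    # Nucleus (top-p): keep the smallest high-probability set whose
    # cumulative mass reaches top_p, then renormalize and sample.
    kept, cum = [], 0.0
    for t, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append((t, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for t, p in kept:
        r -= p
        if r <= 0:
            return t
    return kept[-1][0]
```

Setting `top_p` near 0 or `temperature` near 0 both collapse toward greedy decoding; raising either restores diversity.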
Extending agents with long-term memory stores.
Control that remains stable under model uncertainty.
Learning new tasks sequentially without catastrophic forgetting of previously learned ones.
Restricting distribution of powerful models.
Techniques that stabilize and speed training by normalizing activations; LayerNorm is common in Transformers.
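LayerNorm itself is small enough to write out directly. This sketch normalizes a single activation vector in pure Python for clarity, assuming per-vector learned scale (`gamma`) and shift (`beta`).

```python
import math

def layer_norm(x, gamma=None, beta=None, eps=1e-5):
    # Normalize the vector to zero mean and unit variance,
    # then apply the learned scale (gamma) and shift (beta).
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    gamma = gamma if gamma is not None else [1.0] * n
    beta = beta if beta is not None else [0.0] * n
    return [g * (v - mean) / math.sqrt(var + eps) + b
            for v, g, b in zip(x, gamma, beta)]
```

Unlike BatchNorm, the statistics are computed per example over the feature dimension, so the operation behaves identically at train and inference time.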
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
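Mechanically, this reduces to prompt assembly. The sketch below builds a few-shot prompt string; the `Input:`/`Output:` template is one common, hypothetical choice.

```python
def few_shot_prompt(examples, query):
    # Labeled demonstrations followed by the query; the model infers
    # the task from the examples, with no weight updates involved.
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = few_shot_prompt([("2+2", "4"), ("3+5", "8")], "1+6")
```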
Estimating parameters by maximizing likelihood of observed data.
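For a Gaussian the maximization has a closed form — the sample mean and the biased (divide-by-n) sample variance — which a short sketch makes concrete.

```python
def gaussian_mle(data):
    # Setting the gradient of the log-likelihood to zero yields the
    # sample mean and the biased (divide-by-n) sample variance.
    n = len(data)
    mu = sum(data) / n
    sigma2 = sum((x - mu) ** 2 for x in data) / n
    return mu, sigma2
```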
Updating beliefs about parameters using observed evidence and prior distributions.
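The simplest concrete case is the conjugate Beta-Bernoulli update, sketched below; the uniform Beta(1, 1) prior in the example is an assumption.

```python
def beta_bernoulli_update(alpha, beta, observations):
    # Beta(alpha, beta) is conjugate to the Bernoulli likelihood:
    # each observed success adds 1 to alpha, each failure adds 1 to beta.
    for obs in observations:
        if obs:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

def posterior_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) distribution.
    return alpha / (alpha + beta)

# Uniform prior, then observe two successes and one failure.
a, b = beta_bernoulli_update(1, 1, [1, 1, 0])
```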
Ensuring decisions can be explained and traced.
Legal or policy requirement to explain AI decisions.
Learns the score function ∇ log p(x) of the data distribution, used to drive generative sampling.
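For a standard normal the score is analytic (−x), which is enough to sketch score-driven sampling via unadjusted Langevin dynamics; the step size and chain length here are arbitrary choices.

```python
import math
import random

def score_std_normal(x):
    # For p(x) = N(0, 1): log p(x) = -x**2 / 2 + const, so the score is -x.
    return -x

def langevin_step(x, step, rng):
    # One step of unadjusted Langevin dynamics: follow the score, add noise.
    return x + step * score_std_normal(x) + math.sqrt(2 * step) * rng.gauss(0, 1)

rng = random.Random(0)
x, samples = 5.0, []
for _ in range(5000):
    x = langevin_step(x, 0.1, rng)
    samples.append(x)
```

Started far from the mode at x = 5, the chain drifts toward 0 and then fluctuates around it; score-based generative models learn the score with a network instead of using a closed form.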
Directed acyclic graph encoding causal relationships.
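Acyclicity is what makes such a graph usable; this sketch checks it with Kahn's topological sort over a hypothetical smoking → tar → cancer graph.

```python
def topological_order(dag):
    # Kahn's algorithm: repeatedly emit nodes with no remaining parents.
    # It yields a full ordering if and only if the graph is acyclic.
    indeg = {v: 0 for v in dag}
    for children in dag.values():
        for c in children:
            indeg[c] += 1
    frontier = [v for v, d in indeg.items() if d == 0]
    order = []
    while frontier:
        v = frontier.pop()
        order.append(v)
        for c in dag[v]:
            indeg[c] -= 1
            if indeg[c] == 0:
                frontier.append(c)
    if len(order) != len(dag):
        raise ValueError("graph contains a cycle")
    return order

# Hypothetical causal graph: smoking -> tar -> cancer.
dag = {"smoking": ["tar"], "tar": ["cancer"], "cancer": []}
order = topological_order(dag)
```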
Decomposing goals into sub-tasks.
Assigning a role or identity to the model.
Distributed agents producing emergent intelligence.
European regulation classifying AI systems by risk.
US framework for AI risk governance.
Structured review of a system's risks and performance before deployment.
Requirement to provide explanations.
Coordinating models, tools, and logic.
Limiting inference usage, for example via rate limits or per-user quotas.
Motion of solid objects under applied forces and torques.
Combining simulation and real-world data.
Modifying reward to accelerate learning.
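Potential-based shaping is the standard form that provably preserves the optimal policy; a sketch with a hypothetical distance-to-goal potential:

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    # Potential-based shaping: r' = r + gamma * phi(s') - phi(s).
    # Shaping of this form leaves the optimal policy unchanged.
    return r + gamma * phi(s_next) - phi(s)

# Hypothetical 1-D chain with the goal at state 10: the potential is the
# negative distance to the goal, so steps toward the goal earn a bonus.
phi = lambda s: -abs(s - 10)
bonus = shaped_reward(0.0, 5, 6, phi)     # moving 5 -> 6 approaches the goal
penalty = shaped_reward(0.0, 6, 5, phi)   # moving 6 -> 5 retreats from it
```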