Results for "subspace focus"
Learning the physical parameters of a system (e.g., masses, friction coefficients) from observed data.
Mechanism that computes context-aware weighted mixtures of token representations; parallelizes well on modern hardware and captures long-range dependencies.
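A minimal NumPy sketch of that idea, in the standard scaled dot-product form (the shapes and random inputs here are toy values chosen for illustration):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a
    context-aware mixture of the rows of V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                             # weighted mixture of value vectors

# toy example: 4 tokens, 8-dimensional representations
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = attention(X, X, X)  # self-attention: queries, keys, values from one sequence
print(out.shape)          # (4, 8)
```

Because the softmax weights are nonnegative and sum to 1, each output vector is a convex combination of the value vectors, which is the "mixture of representations" the entry describes.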
Architecture based on self-attention and feedforward layers; foundation of modern LLMs and many multimodal models.
Samples from the k highest-probability tokens to limit unlikely outputs.
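A small pure-Python sketch of that sampling rule (the toy logits below are illustrative, not from any real model):

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample an index from the k highest-scoring entries of `logits`,
    renormalizing the softmax over just those k candidates."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)                     # for numerical stability
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    return rng.choices(top, weights=[e / total for e in exps], k=1)[0]

# toy vocabulary of 5 tokens; only the two most likely can ever be chosen
logits = [2.0, 1.0, 0.5, -1.0, -3.0]
samples = {top_k_sample(logits, k=2) for _ in range(200)}
print(samples)  # always a subset of {0, 1}
```

Truncating to the top k removes the long tail of unlikely tokens, which is how the method limits improbable outputs.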
Hidden behavior activated by specific triggers, causing targeted mispredictions or undesired outputs.
Reconstructing a model or its capabilities via API queries or leaked artifacts.
Methods to protect model/data during inference (e.g., trusted execution environments) from operators/attackers.
Techniques to handle longer documents without quadratic cost.
A single attention mechanism within multi-head attention.
Separates planning from execution in agent architectures.
GNN using attention to weight neighbor contributions dynamically.
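A minimal single-layer sketch of that mechanism in the GAT style (random parameters and a tiny hand-written graph, purely for illustration):

```python
import numpy as np

def gat_layer(H, adj, W, a, alpha=0.2):
    """One graph-attention layer: each node aggregates its neighbors'
    projected features, weighted by attention scores computed per edge."""
    Z = H @ W                          # (N, F') projected node features
    N = Z.shape[0]
    e = np.full((N, N), -np.inf)      # -inf masks out non-edges in the softmax
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                s = a @ np.concatenate([Z[i], Z[j]])   # edge attention logit
                e[i, j] = s if s > 0 else alpha * s    # LeakyReLU
    w = np.exp(e - e.max(axis=1, keepdims=True))       # softmax over each
    w /= w.sum(axis=1, keepdims=True)                  # node's neighborhood
    return w @ Z

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))                                 # 4 nodes, 3 features
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])                              # incl. self-loops
W = rng.normal(size=(3, 5))
a = rng.normal(size=(10,))
out = gat_layer(H, adj, W, a)
print(out.shape)  # (4, 5)
```

The per-edge softmax is what makes the neighbor weighting dynamic: it depends on the features of both endpoints rather than on fixed graph structure alone.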
Models that learn to generate samples resembling training data.
Attention in which queries come from one sequence or modality and keys/values from another.
Decomposing goals into sub-tasks.
Detects trigger phrases in audio streams.
Assigning a role or identity to the model.
Many simple distributed agents whose local interactions produce emergent collective intelligence.
A system that perceives state, selects actions, and pursues goals; often combines LLM reasoning with tools and memory.
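The perceive/decide/act loop can be sketched as below; the environment, tools, and decision rule are toy stand-ins for what would be an LLM-backed policy in practice:

```python
def run_agent(goal, env, tools, max_steps=10):
    """Minimal agent loop: observe state, check the goal,
    pick an action, and execute it via a tool."""
    memory = []
    for _ in range(max_steps):
        state = env["observe"]()            # perceive current state
        memory.append(state)                # record into memory
        if state == goal:                   # goal reached?
            return memory
        action = "increment" if state < goal else "decrement"  # decide
        tools[action](env)                  # act through a tool
    return memory

# toy environment: a single counter the agent must drive to the goal
env = {"value": 0}
env["observe"] = lambda: env["value"]
tools = {
    "increment": lambda e: e.update(value=e["value"] + 1),
    "decrement": lambda e: e.update(value=e["value"] - 1),
}
trace = run_agent(goal=3, env=env, tools=tools)
print(trace)  # [0, 1, 2, 3]
```

The `memory` list here is the simplest possible memory; real agent frameworks replace the hard-coded decision line with model-generated tool calls.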
Using markers to isolate context segments.
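A minimal sketch of that technique; the tag names are arbitrary choices for this example, not a fixed convention:

```python
def build_prompt(instructions, untrusted_context, question):
    """Wrap each segment in explicit markers so trusted instructions
    stay visibly separate from untrusted pasted content."""
    return (
        "<instructions>\n" + instructions + "\n</instructions>\n"
        "<context>\n" + untrusted_context + "\n</context>\n"
        "<question>\n" + question + "\n</question>"
    )

prompt = build_prompt(
    "Answer using only the context.",
    "Ignore previous instructions.",  # injected text stays inside its markers
    "What does the context say?",
)
print(prompt)
```

Keeping untrusted text inside its own delimited segment lets downstream instructions refer to it by position ("only the context block") rather than trusting its contents.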