Results for "meaning-based retrieval"
Methods for breaking goals into steps; can be classical (A*, STRIPS) or LLM-driven with tool calls.
Constraining model outputs into a schema used to call external APIs/tools safely and deterministically.
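A minimal sketch of the validation side of this idea, assuming a hypothetical tool-call JSON shape (`tool` plus an `arguments` object); production systems typically use full JSON Schema validators instead:

```python
import json

def validate_tool_call(raw_output, required_fields):
    """Parse a model's raw output as JSON and check that it names a tool
    and supplies all required argument fields before dispatching.
    The {"tool": ..., "arguments": {...}} shape is an illustrative assumption."""
    call = json.loads(raw_output)
    missing = [f for f in required_fields if f not in call.get("arguments", {})]
    if "tool" not in call or missing:
        raise ValueError(f"invalid tool call, missing fields: {missing}")
    return call

call = validate_tool_call(
    '{"tool": "get_weather", "arguments": {"city": "Paris"}}',
    required_fields=["city"],
)
```

Validating before dispatch is what makes the downstream API call safe and deterministic: malformed or incomplete outputs fail loudly instead of reaching the tool.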
Models that process or generate multiple modalities, enabling vision-language tasks, speech, video understanding, etc.
Limiting gradient magnitude to prevent exploding gradients.
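A minimal NumPy sketch of clipping by global norm (the function name and threshold are illustrative):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Scale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm; gradients below the threshold pass
    through unchanged."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# A gradient of norm 5 gets rescaled to norm 1.
clipped = clip_by_global_norm([np.array([3.0, 4.0])], max_norm=1.0)
```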
Built-in assumptions in a model's architecture that guide learning efficiency and generalization.
Prevents attention to future tokens during training/inference.
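The mask itself is just a lower-triangular matrix; a minimal sketch:

```python
import numpy as np

def causal_mask(seq_len):
    """Boolean mask where entry [i, j] is True iff query position i
    may attend to key position j, i.e. j <= i. In practice the False
    positions are filled with -inf before the attention softmax."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_mask(3)
```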
A single attention mechanism within multi-head attention.
Encodes token position explicitly, often via sinusoids.
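A sketch of the sinusoidal variant, with even dimensions using sine and odd dimensions cosine over geometrically spaced frequencies (assuming an even `d_model`):

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    """Sinusoidal positional encodings: pe[pos, 2i]   = sin(pos / 10000^(2i/d)),
                                        pe[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positions(4, 8)
```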
Routes inputs to subsets of parameters for scalable capacity.
Strategy mapping states to actions.
Expected return of taking an action in a given state and following the policy thereafter.
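These values can be learned by bootstrapping; a minimal tabular Q-learning step as a sketch (state/action names and learning rates are illustrative):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrapped
    target r + gamma * max over a' of Q(s', a')."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    return Q

Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 1.0, "right": 0.0}}
# Target is 0 + 0.9 * 1.0 = 0.9; Q moves halfway there: 0.45.
Q = q_learning_update(Q, "s0", "right", r=0.0, s_next="s1")
```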
Coordination arising without explicit programming.
Optimizing policies directly via gradient ascent on expected reward.
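A minimal REINFORCE-style sketch for a softmax policy over discrete actions, using the identity grad log pi(a) = one_hot(a) - softmax(logits) (single-step, no baseline; names illustrative):

```python
import numpy as np

def reinforce_step(logits, action, reward, lr=0.1):
    """One policy-gradient ascent step on softmax logits:
    shift probability mass toward actions that earned reward."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    return logits + lr * reward * grad_log_pi

# Uniform 2-action policy; action 0 got reward 1, so its logit rises.
new_logits = reinforce_step(np.zeros(2), action=0, reward=1.0)
```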
Categorizing AI applications by impact and regulatory risk.
Learning from data generated by a policy other than the one being optimized.
Extracting system prompts or hidden instructions.
Models trained to decide when to call tools.
Neural networks that operate on graph-structured data by propagating information along edges.
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
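One round of mean-aggregation message passing can be sketched as a dense NumPy layer (mean aggregation, linear map, ReLU are one common choice among many):

```python
import numpy as np

def message_passing_step(adj, h, W):
    """One message-passing round: each node averages its neighbors'
    features (via the adjacency matrix), applies a linear map W,
    then a ReLU nonlinearity."""
    deg = adj.sum(axis=1, keepdims=True)
    deg = np.where(deg == 0, 1, deg)  # avoid dividing by zero for isolated nodes
    msgs = (adj @ h) / deg
    return np.maximum(msgs @ W, 0.0)

adj = np.array([[0.0, 1.0], [1.0, 0.0]])  # two nodes joined by one edge
h = np.eye(2)                              # one-hot node features
out = message_passing_step(adj, h, np.eye(2))
```

With one edge and identity weights, each node simply picks up its neighbor's features; stacking such rounds propagates information further across the graph.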
Diffusion model trained to remove noise step by step.
Extension of convolution to graph domains using adjacency structure.
Assigning category labels to images.
GNN using attention to weight neighbor contributions dynamically.
Pixel-level separation of individual object instances.
Joint vision-language model aligning images and text.
Pixel motion estimation between frames.
Predicting future values from past observations.
Repeating temporal patterns.
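A standard baseline that exploits such patterns is the seasonal-naive forecast; a minimal sketch (function name illustrative):

```python
def seasonal_naive_forecast(history, season_length, horizon):
    """Forecast each future step as the value observed exactly one
    season earlier, repeating the last full season as needed."""
    return [history[-season_length + (h % season_length)]
            for h in range(horizon)]

# Two observed seasons of length 3; the forecast repeats the last one.
forecast = seasonal_naive_forecast([10, 20, 30, 12, 22, 32],
                                   season_length=3, horizon=3)
# → [12, 22, 32]
```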
Optimal state estimator for linear dynamical systems with Gaussian noise.
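A minimal 1-D sketch with identity dynamics (the noise parameters `q` and `r` are illustrative assumptions):

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=1.0):
    """1-D Kalman filter with identity dynamics: each step inflates the
    state variance p by process noise q (predict), then moves the
    estimate x toward the measurement z, weighted by the Kalman gain."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain balances model vs. measurement
        x = x + k * (z - x)     # update estimate toward measurement
        p = (1 - k) * p         # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Repeated measurements of 1.0 pull the estimate monotonically toward 1.
est = kalman_1d([1.0, 1.0, 1.0])
```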
Maintaining two environments for instant rollback.