Results for "tool-augmented LLM"
Enables external computation or lookup.
Letting an LLM call external functions/APIs to fetch data, compute, or take actions, improving reliability.
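A minimal sketch of the dispatch side of function calling, assuming the model emits a JSON tool call of the form `{"tool": ..., "args": {...}}`; the tool names and registry here are illustrative, not any particular vendor's API:

```python
import json

# Hypothetical tool registry; names and signatures are illustrative.
TOOLS = {
    "add": lambda a, b: a + b,
    "lookup": lambda key: {"capital_of_france": "Paris"}.get(key, "unknown"),
}

def dispatch(tool_call_json: str):
    """Execute a model-emitted tool call and return the result."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# In practice the model would emit this string; here it is hard-coded.
result = dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}')
```

The result would normally be fed back to the model as a new message so it can compose the final answer.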
Methods for breaking goals into steps; can be classical (A*, STRIPS) or LLM-driven with tool calls.
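On the classical side, A* is a representative planner; a compact sketch over a 4-connected grid with a Manhattan-distance heuristic (the grid and walls are toy inputs):

```python
import heapq

def astar(start, goal, walls, width, height):
    """A* search over a 4-connected grid with Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # no route exists

# Walls block column x=1 except at y=3, forcing a detour.
path = astar((0, 0), (3, 3), {(1, 0), (1, 1), (1, 2)}, 4, 4)
```

Because the Manhattan heuristic never overestimates on a unit-cost grid, the first path popped at the goal is optimal.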
Models trained to decide when to call tools.
Agent calls external tools dynamically.
Central log of AI-related risks.
Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
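A toy version of the retrieve-then-condition step, using bag-of-words cosine similarity in place of a real vector DB (the documents and scoring are illustrative):

```python
import math
from collections import Counter

DOCS = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "RAG conditions generation on retrieved text.",
]

def vec(text):
    """Bag-of-words term counts as a stand-in for an embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = vec(query)
    return sorted(DOCS, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

def build_prompt(query):
    """Condition generation on retrieved context by prepending it."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A production system would swap `vec`/`cosine` for learned embeddings and approximate nearest-neighbor search, but the conditioning pattern is the same.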
Breaking documents into pieces for retrieval; chunk size/overlap strongly affect RAG quality.
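A minimal fixed-size character chunker with overlap; the size and overlap values are illustrative defaults to tune per corpus:

```python
def chunk(text, size=200, overlap=50):
    """Split text into fixed-size character chunks; consecutive chunks
    share `overlap` characters so context isn't cut mid-passage."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk("".join(str(i % 10) for i in range(500)))
```

Token-based or sentence-aware splitting usually works better than raw characters, but the size/overlap trade-off is identical.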
Constraining outputs to retrieved or provided sources, often with citation, to improve factual reliability.
Prompt augmented with retrieved documents.
A high-capacity language model trained on massive corpora, exhibiting broad generalization and emergent behaviors.
The text (and possibly other modalities) given to an LLM to condition its output behavior.
A system that perceives state, selects actions, and pursues goals—often combining LLM reasoning with tools and memory.
A table summarizing classification outcomes, foundational for metrics like precision, recall, specificity.
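The derived metrics follow directly from the four cells; a small sketch with illustrative counts:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Derive standard metrics from confusion-matrix cells."""
    precision = tp / (tp + fp)        # of predicted positives, how many are real
    recall = tp / (tp + fn)           # sensitivity / true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    return precision, recall, specificity

p, r, s = confusion_metrics(tp=8, fp=2, fn=2, tn=88)
```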
Standardized documentation describing intended use, performance, limitations, data, and ethical considerations.
Updating beliefs about parameters using observed evidence and prior distributions.
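The conjugate Beta-Bernoulli case makes the update explicit: the posterior is obtained by adding observed counts to the prior's pseudo-counts.

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior + Bernoulli evidence
    -> Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

# Uniform Beta(1, 1) prior, then observe 7 heads and 3 tails.
a, b = beta_update(1, 1, successes=7, failures=3)
posterior_mean = a / (a + b)
```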
Neural networks that operate on graph-structured data by propagating information along edges.
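One message-passing step can be sketched without any framework; scalar node features and mean aggregation keep it short (real GNN layers use learned weight matrices and vector features):

```python
def gnn_layer(features, edges):
    """One propagation step: each node's new feature is the mean of its
    own feature and its neighbors' (scalar features for brevity)."""
    neighbors = {v: [] for v in features}
    for u, w in edges:                  # treat edges as undirected
        neighbors[u].append(w)
        neighbors[w].append(u)
    return {
        v: (features[v] + sum(features[n] for n in ns)) / (1 + len(ns))
        for v, ns in neighbors.items()
    }

# A path graph 0-1-2 with a unit of "signal" on node 0.
out = gnn_layer({0: 1.0, 1: 0.0, 2: 0.0}, [(0, 1), (1, 2)])
```

Stacking such layers lets information propagate k hops in k steps.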
Formal model linking causal mechanisms and variables.
GNN using attention to weight neighbor contributions dynamically.
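The attention-weighted aggregation can be sketched as a softmax over neighbor scores; the scoring function below is a toy stand-in, not the learned LeakyReLU scoring of the original GAT formulation:

```python
import math

def attention_aggregate(h_self, h_neighbors, score):
    """GAT-style aggregation: score each neighbor against the central
    node, softmax the scores, and take the weighted sum of neighbor
    features (scalar features for brevity)."""
    scores = [score(h_self, h) for h in h_neighbors]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * h for w, h in zip(weights, h_neighbors)), weights

# Toy score: negative absolute difference, so similar neighbors weigh more.
out, w = attention_aggregate(1.0, [1.0, 0.0], lambda a, b: -abs(a - b))
```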
Interleaving reasoning and tool use.
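The thought → action → observation loop can be sketched with a scripted "model" standing in for real LLM calls; the trace format and `calc` tool are illustrative assumptions:

```python
# eval() is acceptable only in this closed demo; never eval untrusted input.
TOOLS = {"calc": lambda expr: str(eval(expr))}

SCRIPT = [
    ("I should compute 6 * 7.", ("calc", "6 * 7")),  # (thought, action)
    ("The answer is 42.", None),                     # final thought, no action
]

def react(script):
    """Interleave reasoning steps with tool calls, feeding each
    observation back into the trace before the next thought."""
    trace = []
    for thought, action in script:
        trace.append(f"Thought: {thought}")
        if action is None:
            break
        tool, arg = action
        observation = TOOLS[tool](arg)               # act, then observe
        trace.append(f"Action: {tool}[{arg}]")
        trace.append(f"Observation: {observation}")
    return trace

trace = react(SCRIPT)
```

In a live agent, each thought/action pair would be generated by the model conditioned on the trace so far.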
Sampling-based motion planner.
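A rapidly-exploring random tree (RRT) is the canonical example; a bare-bones 2D sketch in an obstacle-free plane (bounds, step size, and seed are illustrative, and collision checking is omitted):

```python
import random

def rrt(start, goal, steps=200, step_size=0.5, seed=0):
    """Grow a tree by sampling a random point, finding the nearest tree
    node, and extending a fixed step toward the sample; stop when a new
    node lands within one step of the goal."""
    rng = random.Random(seed)
    tree = {start: None}  # node -> parent
    for _ in range(steps):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(tree, key=lambda n: (n[0] - sample[0]) ** 2 + (n[1] - sample[1]) ** 2)
        dx, dy = sample[0] - near[0], sample[1] - near[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0
        new = (near[0] + step_size * dx / dist, near[1] + step_size * dy / dist)
        tree[new] = near
        if (new[0] - goal[0]) ** 2 + (new[1] - goal[1]) ** 2 < step_size ** 2:
            path = [new]                     # walk parents back to start
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1], tree
    return None, tree

path, tree = rrt((0.0, 0.0), (5.0, 5.0))
```

Real planners add collision checks on each extension and often rewire the tree (RRT*) to improve path quality.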
Proportion of actual positives a test correctly identifies (the true positive rate); clinically, the ability to correctly detect disease.
Testing AI under actual clinical conditions.
Simulating adverse scenarios.
The maximum loss not expected to be exceeded at a given confidence level over a set horizon, under normal market conditions.
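A historical-simulation sketch: take the empirical loss distribution and read off the quantile at the chosen confidence level (the return series below is illustrative):

```python
def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: the loss at the (1 - confidence)
    tail quantile of the empirical return distribution."""
    losses = sorted(-r for r in returns)       # losses, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

returns = [0.01, -0.02, 0.005, -0.03, 0.015,
           -0.01, 0.02, -0.005, 0.0, -0.04]
var_95 = historical_var(returns, 0.95)
```

Parametric (variance-covariance) and Monte-Carlo VaR replace the empirical quantile with a modeled one, but read off the same tail point.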
Measures a model’s ability to fit random noise; used to bound generalization error.
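For a finite hypothesis class the quantity can be estimated directly by Monte Carlo: draw random sign vectors and average the best achievable correlation with them (the hypothesis class and sample below are toy choices):

```python
import random

def empirical_rademacher(hypotheses, xs, trials=2000, seed=0):
    """Monte-Carlo estimate of empirical Rademacher complexity:
    E_sigma[ sup_h (1/n) * sum_i sigma_i * h(x_i) ]
    over uniformly random sign vectors sigma in {-1, +1}^n."""
    rng = random.Random(seed)
    n = len(xs)
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * h(x) for s, x in zip(sigma, xs)) / n
                     for h in hypotheses)
    return total / trials

# Toy class: the two constant classifiers +1 and -1 on a 4-point sample.
complexity = empirical_rademacher([lambda x: 1, lambda x: -1], [0, 1, 2, 3])
```

For this two-element class the supremum equals |mean(sigma)|, whose exact expectation on 4 points is 0.375; richer classes that can match more sign patterns score higher, which is exactly what the generalization bounds penalize.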