Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
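A single online update is easy to sketch. The snippet below assumes a one-dimensional linear model with squared loss; the function name and learning rate are illustrative, not from any particular library.

```python
def online_update(w: float, x: float, y: float, lr: float = 0.1) -> float:
    """One online SGD step for squared loss 0.5 * (w*x - y)**2:
    the model updates immediately as each example (x, y) arrives."""
    return w - lr * (w * x - y) * x
```

Feeding a stream of examples through repeated calls lets `w` track a drifting target, which is the setting this entry describes.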
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
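As a sketch of the retrieve-then-generate flow: term-overlap scoring stands in for a real vector DB, and the names `retrieve` and `build_prompt` are hypothetical.

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by how many query terms they share; return the top k.
    A real system would use embeddings and approximate nearest neighbors."""
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Condition generation on retrieved context to reduce hallucinations."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```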
Breaking documents into pieces for retrieval; chunk size/overlap strongly affect RAG quality.
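A character-window chunker is enough to show how size and overlap interact; a real pipeline would usually split on tokens or sentences, so this is only illustrative.

```python
def chunk(text: str, size: int, overlap: int) -> list[str]:
    """Split text into fixed-size character windows; each window shares
    `overlap` characters with its predecessor. Requires overlap < size."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Larger overlap keeps context intact across chunk boundaries at the cost of more chunks to index and retrieve.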
Constraining outputs to retrieved or provided sources, often with citation, to improve factual reliability.
Tracking where data came from and how it was transformed; key for debugging and compliance.
Search algorithm for generation that keeps top-k partial sequences; can improve likelihood but reduce diversity.
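The core loop can be sketched over a toy next-token distribution; `step_fn` here is a stand-in for a model's predicted probabilities.

```python
import math

def beam_search(step_fn, beam_width: int, length: int):
    """Keep the top-k partial sequences by cumulative log-probability.
    step_fn(seq) -> dict mapping each candidate next token to its probability."""
    beams = [([], 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            for tok, p in step_fn(seq).items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams
```

Because every surviving beam chases the same high-likelihood region, outputs tend to be similar, which is the diversity loss the entry mentions.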
Stochastic generation strategies that trade determinism for diversity; key knobs include temperature and nucleus sampling.
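Both knobs fit in one short sketch: temperature rescales the logits before softmax, and nucleus (top-p) sampling truncates to the smallest set of tokens whose mass reaches `top_p`. The function is a minimal illustration, not any library's API.

```python
import math, random

def sample(logits, temperature=1.0, top_p=1.0, rng=random):
    """Temperature-scaled nucleus sampling over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # nucleus: keep the smallest set of tokens whose mass reaches top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    weights = [probs[i] for i in kept]
    return rng.choices(kept, weights=weights)[0]
```

Lower temperature and smaller `top_p` both push toward deterministic, high-probability outputs; raising either trades that determinism for diversity.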
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
A measure of the divergence between the true and predicted probability distributions.
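One common such measure is the Kullback-Leibler divergence; for finite distributions it is a one-liner (cross-entropy differs from it only by the entropy of the true distribution).

```python
import math

def kl_divergence(p, q):
    """KL(p || q): expected extra log-loss from predicting q when the
    true distribution is p. Zero when the distributions match."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```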
A cache of past attention keys and values that speeds up autoregressive decoding by avoiding recomputation at each step.
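A toy sketch of the bookkeeping: real caches hold per-layer key/value tensors, and `project` below is a stand-in for the key/value projections.

```python
def decode_states(tokens, project, cache=None):
    """Return (key, value) states for all tokens. With a cache, only the
    tokens added since the last call are projected, so each decoding step
    does O(1) new work instead of reprojecting the whole prefix."""
    if cache is None:
        return [project(t) for t in tokens]   # recompute everything
    cache.extend(project(t) for t in tokens[len(cache):])
    return list(cache)
```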
A requirement that a human review high-risk decisions before they take effect.
A formal model linking variables through explicit causal mechanisms.
Asking a model to review and improve its own output.
Differences between training and inference conditions.
Sensitivity in which small changes to a prompt cause large changes in the output.
Ability to inspect and verify AI decisions.
Reinforcement learning that plans or learns using a learned or known model of the environment.
Fabrication of cases or statutes by LLMs.
The rules governing how bids are submitted and how winners and prices are determined in an auction.
Learning only from data generated by the current policy.