Results for "examples in prompt"
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
Small changes to a prompt can produce disproportionately large changes in the model's output.
Multiple examples included in the prompt to demonstrate the task.
The text (and possibly other modalities) given to an LLM to condition its output behavior.
A high-priority instruction layer setting overarching behavior constraints for a chat model.
Extracting a model's system prompt or hidden instructions.
Prompt augmented with retrieved documents.
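This entry describes the retrieval-augmented pattern; a minimal sketch of the prompt-assembly step, with a hypothetical helper and placeholder documents (a real system would fetch passages from a search or vector index):

```python
def augment_prompt(question, retrieved_docs):
    """Prepend retrieved passages as context ahead of the question."""
    context = "\n\n".join(retrieved_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Placeholder documents standing in for real retrieval results.
rag_prompt = augment_prompt(
    "When was the product launched?",
    ["Doc 1: The product launched in 2021.",
     "Doc 2: It targets mobile users."],
)
```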
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
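The definition above describes in-context (few-shot) learning; a minimal sketch of building such a prompt, with a hypothetical helper and toy sentiment examples:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Concatenate an instruction, worked (input, label) examples,
    and the new query into one prompt string."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "The food was great",
)
```

No weights are updated: the examples steer the model purely through the context window.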
A task instruction given without any examples.
Lets the model call out to external computation or lookup (e.g., a calculator or search).
Attacks that manipulate model instructions (especially via retrieved content) to override system goals or exfiltrate data.
Using markers to isolate context segments.
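The marker idea above can be sketched as simple tag wrapping (hypothetical helper; the tag names are illustrative):

```python
def wrap_segment(tag, content):
    """Wrap one context segment in explicit markers so the model can
    tell instructions apart from untrusted data."""
    return f"<{tag}>\n{content}\n</{tag}>"

delimited = "\n".join([
    "Summarize the text between the <document> tags.",
    wrap_segment("document", "Placeholder article text."),
])
```

Keeping instructions outside the delimited region also makes injected instructions inside the data easier to ignore.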
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
A single example included in the prompt to guide the output.
Assigning a role or identity to the model.
Breaking tasks into sub-steps.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
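A minimal sketch of one such function, cross-entropy over a predicted class distribution (toy probabilities, not from any real model):

```python
import math

def cross_entropy(probs, target_index):
    """Negative log-probability of the true class; lower is better.
    Assumes probs is a normalized distribution."""
    return -math.log(probs[target_index])

# A confident correct prediction incurs low loss; a confident wrong one, high loss.
confident_right = cross_entropy([0.9, 0.05, 0.05], 0)
confident_wrong = cross_entropy([0.05, 0.9, 0.05], 0)
```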
Number of samples per gradient update; impacts compute efficiency, generalization, and stability.
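The batching described above amounts to slicing the dataset into fixed-size chunks; a sketch with a hypothetical helper:

```python
def minibatches(data, batch_size):
    """Split data into consecutive batches; each gradient update
    would consume one batch."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

parts = minibatches(list(range(10)), batch_size=4)  # last batch may be smaller
```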
Ordering training samples from easier to harder to improve convergence or generalization.
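The easy-to-hard ordering can be sketched as a sort under a difficulty measure (here a toy proxy, sequence length; real curricula use task-specific scores):

```python
def curriculum_order(samples, difficulty):
    """Return samples sorted easiest-first for curriculum training."""
    return sorted(samples, key=difficulty)

ordered = curriculum_order(
    ["a much longer training sentence", "hi", "medium one"],
    difficulty=len,
)
```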
A model is PAC-learnable if it can, with high probability, learn an approximately correct hypothesis from finite samples.
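The definition above is usually stated quantitatively; a standard form of the guarantee (symbols follow the conventional statement, not this page): for error tolerance $\varepsilon$, confidence $\delta$, and hypothesis class of VC dimension $d$,

```latex
\Pr\big[\operatorname{err}(h) \le \varepsilon\big] \ge 1 - \delta
\quad \text{whenever} \quad
m \ge \frac{c}{\varepsilon}\left(d \log\frac{1}{\varepsilon} + \log\frac{1}{\delta}\right),
```

where $m$ is the number of training samples and $c$ is an absolute constant.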
Techniques to handle longer documents without quadratic cost.
The end-to-end process for training a model, from data preparation through evaluation.
Using limited human feedback to guide large models.
Goals that are useful regardless of the final objective.
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
A measure of a model class’s expressive capacity based on its ability to shatter datasets.
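Shattering, as used in this definition, can be checked by brute force for tiny hypothesis classes; a sketch using threshold classifiers on the line (whose VC dimension is 1):

```python
def shatters(points, hypotheses):
    """True if every binary labeling of the points is realized
    by some hypothesis in the set."""
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

# Threshold classifiers h_t(x) = 1[x >= t] over a small grid of thresholds.
thresholds = [lambda x, t=t: int(x >= t) for t in range(-5, 6)]

one_point = shatters([0], thresholds)        # every labeling of one point
two_points = shatters([0, 1], thresholds)    # the labeling (1, 0) is unreachable
```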
Fine-tuning on (prompt, response) pairs to align a model with instruction-following behaviors.