Results for "examples in prompt"
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
Multiple examples included in prompt.
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
Prompt augmented with retrieved documents.
Small prompt changes cause large output changes.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
Task instruction without examples.
Fine-tuning on (prompt, response) pairs to align a model with instruction-following behaviors.
The text (and possibly other modalities) given to an LLM to condition its output behavior.
A high-priority instruction layer setting overarching behavior constraints for a chat model.
Attacks that manipulate model instructions (especially via retrieved content) to override system goals or exfiltrate data.
Extracting system prompts or hidden instructions.
Using markers to isolate context segments.
Breaking tasks into sub-steps.
Enables external computation or lookup.