Results for "no examples"
Multiple examples included in prompt.
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
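The two entries above (in-context learning via a handful of examples, and prompt design using role, structure, and constraints) can be sketched together. This is a minimal illustration, not any particular library's API; the role text, constraint strings, and example pairs are all hypothetical:

```python
# Sketch: a prompt combining a role, constraints, few-shot examples, and the
# final query. No weight updates occur; the examples alone steer the output.
def build_prompt(role, constraints, examples, query):
    parts = [role, "Constraints: " + "; ".join(constraints)]
    parts += [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are a sentiment classifier.",            # hypothetical role
    constraints=["answer 'positive' or 'negative'"],   # hypothetical constraint
    examples=[("great movie!", "positive"), ("waste of time", "negative")],
    query="I loved it",
)
```

The resulting string ends with a dangling `Output:`, inviting the model to complete the pattern established by the examples.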
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
Task instruction without examples.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
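As a concrete instance of such a function, cross-entropy for a single classification example is the negative log-probability the model assigns to the true class; the probability vectors below are made up for illustration:

```python
import math

# Cross-entropy loss for one example: -log of the probability assigned
# to the correct class. Confident correct predictions yield low loss.
def cross_entropy(probs, true_class):
    return -math.log(probs[true_class])

low = cross_entropy([0.9, 0.05, 0.05], 0)   # confident and correct
high = cross_entropy([0.4, 0.3, 0.3], 0)    # uncertain, same correct class
```

Gradient-based optimization then adjusts weights in the direction that decreases this value on the training set.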
Number of samples per gradient update; impacts compute efficiency, generalization, and stability.
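A minimal sketch of how batch size partitions a dataset, with a toy list standing in for real training data:

```python
# Split a dataset into consecutive mini-batches; batch_size controls how
# many samples contribute to each gradient update.
def minibatches(data, batch_size):
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

batches = list(minibatches(list(range(10)), batch_size=4))
# 10 samples with batch_size=4 give batches of sizes 4, 4, 2.
```

Larger batches give lower-variance gradient estimates per step but fewer updates per epoch, which is the compute/generalization trade-off the entry refers to.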
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Ordering training samples from easier to harder to improve convergence or generalization.
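A minimal curriculum can be produced by sorting on any difficulty proxy; here sentence length is used as a stand-in score, and the samples are invented:

```python
# Curriculum learning sketch: order samples easiest-first by a difficulty
# score (here, length) before feeding them to the training loop.
samples = ["a long difficult sentence here", "short", "mid length one"]
curriculum = sorted(samples, key=len)  # shortest (easiest) first
```

In practice the difficulty score might come from model loss, label noise estimates, or human annotation rather than length.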
A concept class is PAC-learnable if a learner can, with high probability, output an approximately correct hypothesis from a finite number of samples.
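For a finite hypothesis class in the realizable setting, the standard sample-complexity bound makes "finite samples" concrete: with probability at least $1-\delta$, a consistent learner outputs a hypothesis with error at most $\varepsilon$ once

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)
```

where $m$ is the number of i.i.d. samples and $|\mathcal{H}|$ the size of the hypothesis class.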
Techniques (e.g., sparse, sliding-window, or linear attention) for handling longer documents without the quadratic cost of full self-attention.
End-to-end process for model training, from data collection and preprocessing through optimization and evaluation.
Using a limited amount of human feedback (e.g., preference comparisons) to guide the behavior of large models.
One example included to guide output.
Subgoals that are useful regardless of the final objective (e.g., acquiring resources or preserving options).
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
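The weight-update step that distinguishes fine-tuning from prompting can be sketched with a 1-D linear model; the "pretrained" weight and the task data below are invented for illustration:

```python
# Fine-tuning sketch: start from a pretrained weight and run gradient steps
# on task-specific data (1-D linear model, squared error).
def fine_tune(w, data, lr=0.1, steps=50):
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

w_pretrained = 0.5                      # hypothetical pretrained weight
task_data = [(1.0, 2.0), (2.0, 4.0)]    # task is consistent with w = 2
w_tuned = fine_tune(w_pretrained, task_data)
```

Unlike few-shot prompting, the adapted behavior here lives in the updated weight, not in the input.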
A measure of a model class’s expressive capacity: the size of the largest set of points it can shatter (realize every possible labeling of).
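Shattering can be checked directly for a toy class. One-dimensional threshold classifiers realize every labeling of a single point but can never label a smaller point 1 and a larger point 0, so their VC dimension is 1; the threshold grid below is an arbitrary choice:

```python
from itertools import product

# Threshold classifiers h_t(x) = 1 if x >= t. Enumerate which labelings of
# two points x1 < x2 they can realize; (1, 0) is unattainable.
def achievable_labelings(points, thresholds):
    return {tuple(int(x >= t) for x in points) for t in thresholds}

thresholds = [t / 10 for t in range(-20, 21)]   # arbitrary grid
labelings = achievable_labelings([0.0, 1.0], thresholds)
missing = set(product([0, 1], repeat=2)) - labelings
```

Since one labeling of the pair is missing, no two-point set is shattered, confirming VC dimension 1 for this class.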