Results for "task-specific"
Reusing knowledge from a source task/domain to improve learning on a target task/domain, typically via pretrained models.
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
Assigning category labels to images.
Reward only given upon task completion.
Fine-tuning on (prompt, response) pairs to align a model with instruction-following behaviors.
PEFT method injecting trainable low-rank matrices into layers, enabling efficient fine-tuning.
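The low-rank injection above can be sketched in a few lines of NumPy. This is an illustrative toy (names `W`, `A`, `B`, `lora_forward`, and the sizes are all made up here), not any library's actual API: the pretrained weight stays frozen while only the small factors `A` and `B` would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 4            # hidden size, LoRA rank, scaling factor
W = rng.normal(size=(d, d))      # frozen pretrained weight

# Trainable low-rank factors: B starts at zero so the adapted layer
# initially behaves exactly like the pretrained one.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))

def lora_forward(x):
    # Original frozen path plus the low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
out = lora_forward(x)
```

Because only `A` and `B` (2·d·r values) are trained instead of the full d×d matrix, the number of tunable parameters drops sharply.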
Tendency for agents to pursue resources regardless of final goal.
Training one model on multiple tasks simultaneously to improve generalization through shared structure.
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
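A minimal sketch of how such an in-context prompt is typically assembled: demonstrations are concatenated before the query, and no weights change. The `Input:`/`Output:` template and the helper name are illustrative choices, not a fixed standard.

```python
def build_few_shot_prompt(examples, query):
    # Each (input, output) pair becomes one demonstration in the prompt;
    # the model is expected to continue the pattern for the final query.
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("cheese", "fromage"), ("dog", "chien")],
    "bread",
)
print(prompt)
```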
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
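One common agreement measure for two labelers is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A small self-contained sketch (the example labels are invented):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Observed agreement: fraction of items both annotators labeled the same.
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: assume independent labeling with each annotator's
    # observed label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
k = cohens_kappa(a, b)
```

Values near 1 indicate strong agreement; values near 0 mean agreement no better than chance, a signal of ambiguous tasks or poor guidelines.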
Pixel-level separation of individual object instances.
Pixel-wise classification of image regions.
Decomposing goals into sub-tasks.
Maximizing reward without fulfilling real goal.
Prompting with a task instruction alone, with no in-context examples.
Breaking tasks into sub-steps.
A high-capacity language model trained on massive corpora, exhibiting broad generalization and emergent behaviors.
Tradeoffs between depth (many layers) and width (many neurons per layer).
Combining signals from multiple modalities.
Detects trigger phrases in audio streams.
The field of building systems that perform tasks associated with human intelligence—perception, reasoning, language, planning, and decision-making—via algorithms.
Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
Local surrogate explanation method approximating model behavior near a specific input.
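The idea behind this kind of local surrogate can be sketched as follows: sample perturbations around one input, query the black-box model, weight samples by proximity, and fit a linear model whose coefficients serve as the local explanation. This is a from-scratch illustration of the idea, not the actual LIME library; `black_box` and all parameters are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in model: nonlinear, so its global behavior is not linear.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_surrogate(f, x0, n_samples=500, scale=0.1):
    # Sample perturbations near x0, weight them by proximity to x0,
    # then fit a weighted least-squares linear model as the explanation.
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = f(X)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale**2))
    Xb = np.hstack([np.ones((n_samples, 1)), X])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw[:, 0], rcond=None)
    return coef  # [intercept, weight for feature 0, weight for feature 1]

x0 = np.array([0.0, 1.0])
coef = local_surrogate(black_box, x0)
```

Near `x0`, the fitted coefficients approximate the model's local gradient (here about 1 for the first feature and 2 for the second), even though the model is globally nonlinear.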
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
Hidden behavior activated by specific triggers, causing targeted mispredictions or undesired outputs.
Embedding signals to prove model ownership.
Graphs containing multiple node or edge types with different semantics.
AI limited to specific domains.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
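A concrete instance of such an objective, as a sketch with invented data: mean squared error over a small dataset plus an L2 regularization term, yielding the single scalar an optimizer would minimize.

```python
def objective(w, data, lam=0.1):
    # Expected (mean) squared-error loss over the dataset,
    # plus an L2 regularization penalty lam * w^2.
    mse = sum((w * x - y) ** 2 for x, y in data) / len(data)
    return mse + lam * w**2

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
# Training searches for the w that minimizes this scalar.
val = objective(2.0, data)
```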