Results for "continual learning"
A setting in which some classes are rare, requiring reweighting, resampling, or specialized evaluation metrics.
Expanding training data via transformations (flips, noise, paraphrases) to improve robustness.
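A minimal image-domain sketch of such transformations; the function name `augment` and the noise scale are illustrative assumptions, not a fixed API:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped, noise-perturbed copy of `image`."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                                  # horizontal flip
    out = out + rng.normal(scale=0.05, size=out.shape)      # Gaussian noise
    return out

rng = np.random.default_rng(42)
image = np.zeros((4, 4))
aug = augment(image, rng)
assert aug.shape == image.shape   # geometry preserved, pixels perturbed
```

Text-domain analogues (paraphrases, synonym swaps) follow the same pattern: a label-preserving random transformation applied per example.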
Policies and practices for approving, monitoring, auditing, and documenting models in production.
Standardized documentation describing intended use, performance, limitations, data, and ethical considerations.
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
Techniques that fine-tune small additional components rather than all weights to reduce compute and storage.
Observing model inputs/outputs, latency, cost, and quality over time to catch regressions and drift.
PEFT method injecting trainable low-rank matrices into layers, enabling efficient fine-tuning.
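The low-rank update can be sketched in a few lines; assuming a linear layer with frozen weight `W`, trainable factors `A` and `B` of rank `r`, and a scaling factor `alpha` (all names illustrative):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a frozen weight W plus a trainable
    low-rank update B @ A, where rank r << min(d_in, d_out)."""
    return x @ W.T + alpha * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

x = rng.normal(size=(1, d_in))
# Zero-initializing B makes the adapted layer start identical to the base layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Only `A` and `B` (2 * r * d parameters) are trained and stored per task, rather than the full `d_out * d_in` weight matrix.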
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
Maliciously inserting or altering training data to implant backdoors or degrade performance.
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
Mechanisms for retaining context across turns/sessions: scratchpads, vector memories, structured stores.
AI focused on interpreting images/video: classification, detection, segmentation, tracking, and 3D understanding.
Optimization of objectives with multiple local minima and saddle points; typical of neural-network training.
Variability introduced by minibatch sampling during SGD.
A narrow minimum often associated with poorer generalization.
Early architecture using learned gates for skip connections.
Empirical laws linking model size, dataset size, and compute to performance.
Routing network in a mixture-of-experts model that chooses which experts process each token.
Set of all actions available to the agent.
Formal framework for sequential decision-making under uncertainty.
Fundamental recursive relationship defining optimal value functions.
Expected cumulative reward from a state or state-action pair.
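The recursive relationship can be written out explicitly; assuming standard MDP notation (states $s$, actions $a$, reward $R$, discount $\gamma$, transition kernel $P$):

```latex
V^*(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \Big]
```

The state-action form is analogous: $Q^*(s, a) = R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \max_{a'} Q^*(s', a')$.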
Inferring sensitive features of training data.
Embedding signals to prove model ownership.
Models that define an energy landscape rather than explicit probabilities.
Models that learn to generate samples resembling training data.
Learns the score (∇ₓ log p(x)) for generative sampling.
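One standard way a learned score is used for sampling is Langevin dynamics, sketched here with step size $\epsilon$:

```latex
x_{t+1} = x_t + \frac{\epsilon}{2} \nabla_x \log p(x_t) + \sqrt{\epsilon}\, z_t,
\qquad z_t \sim \mathcal{N}(0, I)
```

Iterating this update drifts samples toward high-density regions while the noise term keeps them distributed according to $p(x)$.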
Assigning category labels to images.
Joint vision-language model aligning images and text.