Difficulty: Intermediate
Human-in-the-loop (HITL): System design where humans validate or guide model outputs, especially for high-stakes decisions.
Hyperparameters: Configuration choices that are set rather than learned (or not typically learned), governing training or model architecture.
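As a rough illustration, hyperparameters are often collected into a configuration fixed before training begins; the names and values below are illustrative, not tied to any particular framework:

```python
# Hypothetical training configuration: these values are chosen, not learned.
config = {
    "learning_rate": 3e-4,  # optimizer step size
    "batch_size": 64,       # examples per gradient update
    "num_layers": 12,       # architecture depth
    "dropout": 0.1,         # regularization strength
}
```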
Image classification: Assigning category labels to images.
Incident response: Process for detecting, containing, and remediating AI system failures.
Inductive bias: Built-in assumptions in a model or learning algorithm that guide learning efficiency and generalization (e.g., convolutions assume translation invariance).
Inference cost: The compute, memory, and monetary cost of running a trained model in production.
Inference pipeline: The execution path a request follows through a model in production, typically preprocessing, model execution, and postprocessing.
Information gain: Reduction in uncertainty (entropy) achieved by observing a variable; used to choose splits in decision trees and queries in active learning.
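A minimal sketch of the decision-tree use, assuming binary labels and a candidate split of a parent node into two children:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    """Parent entropy minus the size-weighted entropy of the children."""
    n = len(parent)
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - child

parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
left, right = np.array([0, 0, 0, 1]), np.array([0, 1, 1, 1])
print(information_gain(parent, left, right))  # ~0.19 bits: the split reduces uncertainty
```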
Informed consent: Patient agreement to AI-assisted care, given after disclosure of the AI's role, benefits, and limitations.
Instance segmentation: Pixel-level separation of individual object instances, distinguishing each object even within the same class.
Inter-annotator agreement: Measure of labeling consistency across annotators; low agreement indicates ambiguous tasks or poor guidelines.
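One common agreement statistic is Cohen's kappa, which corrects raw agreement for chance; a minimal two-annotator sketch:

```python
import numpy as np

def cohens_kappa(a, b):
    """Observed agreement corrected for the agreement expected by chance."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.unique(np.concatenate([a, b]))
    p_o = np.mean(a == b)                                         # observed
    p_e = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # by chance
    return (p_o - p_e) / (1 - p_e)

ann1 = [1, 0, 1, 1, 0, 1]
ann2 = [1, 0, 0, 1, 0, 1]
print(cohens_kappa(ann1, ann2))  # ~0.67; values near 1 indicate consistent labeling
```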
Interpretability: Studying a model's internal mechanisms or how inputs influence its outputs (e.g., saliency maps, SHAP, attention analysis).
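As one concrete input-influence technique, a gradient saliency sketch, assuming a differentiable PyTorch model and a single input tensor:

```python
import torch

def saliency(model, x):
    """Gradient magnitude of the top logit w.r.t. the input: a crude map of
    which input elements most influence the model's prediction."""
    x = x.clone().requires_grad_(True)
    model(x).max().backward()
    return x.grad.abs()
```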
ISO/IEC 42001: International standard for AI management systems, covering governance and risk management of AI.
Kalman filter: Recursive optimal estimator for linear dynamic systems with Gaussian noise.
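A minimal predict/update cycle for a linear-Gaussian state-space model; the matrix names follow the usual textbook convention, and all are assumed given:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman filter iteration.
    x: state estimate, P: state covariance, z: new measurement,
    F: state transition, H: observation model, Q/R: process/measurement noise."""
    # Predict the next state and its uncertainty.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # correct the prediction
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```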
Key-value (KV) cache: Stores the attention keys and values of previously generated tokens so autoregressive decoding can reuse them instead of recomputing them at each step.
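A toy sketch of the idea, ignoring batching, heads, and masking; the class and method names are made up for illustration:

```python
import numpy as np

class KVCache:
    """Append each new token's key/value so attention at step t reuses
    steps 0..t-1 instead of recomputing them."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q):
        K, V = np.stack(self.keys), np.stack(self.values)  # (t, d) each
        w = np.exp(q @ K.T / np.sqrt(len(q)))              # scaled dot-product scores
        return (w / w.sum()) @ V                           # output over all cached steps
```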
Kill switch: Mechanism to rapidly disable an AI system when it behaves unsafely or unexpectedly.
KL divergence (Kullback–Leibler divergence): Measures how one probability distribution diverges from another; asymmetric and nonnegative.
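A direct computation for discrete distributions, assuming strictly positive probabilities so the logarithm is defined:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) in nats; zero only when p equals q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

# Asymmetry: swapping the arguments gives a different value.
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.51
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # ~0.37
```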
Knowledge graph: Structured graph encoding facts as entity–relation–entity triples.
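A minimal in-memory sketch; the facts and the query helper are hypothetical:

```python
# Facts stored as (entity, relation, entity) triples.
triples = {
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Marie_Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
}

def query(subject=None, relation=None, obj=None):
    """Match triples against a partial pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

print(query(subject="Marie_Curie"))  # all stored facts about one entity
```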
Lagrange multipliers: Method that converts a constrained optimization problem into an unconstrained one by folding the constraints into the objective.
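As a worked form of the conversion: to minimize f(x) subject to g(x) = 0, form the Lagrangian and look for its stationary points:

```latex
\mathcal{L}(x,\lambda) = f(x) + \lambda\, g(x)
% Stationarity in x recovers optimality; stationarity in \lambda recovers feasibility:
\nabla_x \mathcal{L} = \nabla f(x) + \lambda\,\nabla g(x) = 0,
\qquad
\frac{\partial \mathcal{L}}{\partial \lambda} = g(x) = 0
```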
Language model: A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
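The simplest concrete instance is a bigram model estimated from counts; a toy sketch with a made-up corpus:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1  # count each observed (current, next) pair

def next_token_probs(token):
    """P(next | token) from raw bigram counts."""
    c = counts[token]
    total = sum(c.values())
    return {t: n / total for t, n in c.items()}

print(next_token_probs("the"))  # {'cat': 0.67, 'mat': 0.33}, roughly
```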
Large language model (LLM): A high-capacity language model trained on massive corpora, exhibiting broad generalization and emergent behaviors.
Latency: Time from request to response; critical for real-time inference and user experience.
Latency SLA (service-level agreement): Contractually guaranteed response times, often stated as a percentile bound.
Latent space: The internal space where learned representations live; operations in this space often correlate with semantics or generative factors.
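A small illustration of such an operation, assuming some decoder maps latent codes back to data; only the interpolation itself is shown:

```python
import numpy as np

# Linear interpolation between two latent codes; with a good generative model,
# decoding the codes along this path often morphs smoothly between two outputs.
z_a, z_b = np.random.randn(8), np.random.randn(8)
path = [(1 - t) * z_a + t * z_b for t in np.linspace(0, 1, 5)]
```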
Learning rate: Controls the size of parameter updates; too high and training diverges, too low and training is slow or gets stuck.
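The effect is easy to see on f(w) = w^2, whose gradient is 2w; one step from w = 5 with three different rates:

```python
w = 5.0
for lr in (0.01, 0.5, 1.5):
    print(lr, w - lr * 2 * w)  # 0.01 creeps toward 0; 0.5 lands on 0; 1.5 overshoots to -10
```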
Learning rate schedule: Adjusting the learning rate over the course of training (e.g., warmup, cosine decay) to improve convergence.
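One common choice is cosine decay; a minimal sketch (the bounds lr_max and lr_min are illustrative):

```python
import math

def cosine_lr(step, total_steps, lr_max=0.1, lr_min=0.001):
    """Smoothly decay the learning rate from lr_max to lr_min over training."""
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

print([round(cosine_lr(s, 100), 4) for s in (0, 50, 100)])  # [0.1, 0.0505, 0.001]
```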
Legal AI: AI supporting legal research, drafting, and analysis.
Legal hold: Requirement to preserve data relevant to anticipated or ongoing litigation.
LIME (Local Interpretable Model-agnostic Explanations): Local surrogate explanation method that approximates a model's behavior near a specific input with an interpretable (typically linear) model.
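A stripped-down sketch of the idea (real LIME also uses interpretable binary features and feature selection): perturb around the input, weight samples by proximity, and fit a weighted linear surrogate:

```python
import numpy as np

def lime_sketch(f, x, n_samples=500, scale=0.1):
    """Local feature weights from a proximity-weighted linear fit around x."""
    rng = np.random.default_rng(0)
    X = x + rng.normal(0, scale, size=(n_samples, len(x)))        # local perturbations
    y = np.array([f(z) for z in X])
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))  # proximity weights
    Xb = np.hstack([X, np.ones((n_samples, 1))])                  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local influence (intercept dropped)

f = lambda z: z[0] ** 2 + 3 * z[1]            # black box to explain
print(lime_sketch(f, np.array([1.0, 2.0])))   # ~[2, 3]: the local gradient of f
```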
Line search: Choosing the step size along a search direction (e.g., the negative gradient), typically to satisfy a sufficient-decrease condition.
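A minimal backtracking version using the Armijo sufficient-decrease condition (the constants alpha, beta, and c are conventional defaults):

```python
import numpy as np

def backtracking_line_search(f, x, grad, alpha=1.0, beta=0.5, c=1e-4):
    """Shrink the step until f decreases enough along the negative gradient."""
    d = -grad
    while f(x + alpha * d) > f(x) + c * alpha * grad @ d:
        alpha *= beta
    return alpha

f = lambda x: np.sum(x ** 2)
x = np.array([3.0, -2.0])
print(backtracking_line_search(f, x, 2 * x))  # 0.5 satisfies Armijo here
```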