Difficulty: Intermediate

412 terms

Trend Component Intermediate

Persistent directional movement over time.
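As a minimal sketch, a trend component can be estimated with a centered moving average that smooths out short-term noise (plain Python; the function name is illustrative):

```python
def moving_average_trend(series, window):
    """Estimate the trend component with a centered moving average."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    return trend

# A rising series with alternating noise: the smoothed values track the drift.
series = [i + (1 if i % 2 else -1) for i in range(10)]
trend = moving_average_trend(series, 5)
```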

Triage System Intermediate

AI that ranks patients by urgency.

Trust Region Intermediate

Constraining each optimization update to a neighborhood of the current point where the local model of the objective is still trusted.
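A minimal illustration of the idea: clip a proposed step so its norm never exceeds the trust radius (a sketch, not any specific library's API):

```python
import math

def trust_region_step(grad, radius):
    """Propose a gradient-descent step, but scale it back if it would
    leave the trust region of the given radius."""
    step = [-g for g in grad]
    norm = math.sqrt(sum(s * s for s in step))
    if norm > radius:
        step = [s * radius / norm for s in step]
    return step

# A gradient of norm 5 gets scaled down to the radius of 1.
step = trust_region_step([3.0, 4.0], radius=1.0)
```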

Unauthorized Practice of Law Intermediate

AI giving legal advice without authorization.

Underfitting Intermediate

When a model cannot capture underlying structure, performing poorly on both training and test data.
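A tiny demonstration: fitting a constant (the simplest possible model) to quadratic data misses the curvature, so the error is high on both the training and the test split (illustrative numbers):

```python
def fit_constant(ys):
    """The simplest model: predict the training mean everywhere."""
    return sum(ys) / len(ys)

def mse(pred, ys):
    return sum((y - pred) ** 2 for y in ys) / len(ys)

# True relationship is quadratic; a constant cannot capture it.
train_y = [x * x for x in [-2, -1, 0, 1, 2]]
test_y = [x * x for x in [-1.5, 0.5, 1.5]]
c = fit_constant(train_y)
train_err, test_err = mse(c, train_y), mse(c, test_y)
```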

Universal Approximation Theorem Intermediate

A feedforward network with a single hidden layer can approximate any continuous function on a compact domain to arbitrary accuracy, given enough hidden units and a suitable activation.

Unsupervised Learning Intermediate

Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
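A classic example of discovering groups without labels is k-means clustering; here is a bare-bones 1-D version (pure Python, names are ours):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Plain k-means on scalars: recovers cluster structure from unlabeled data."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups around 0 and 10, discovered with no labels.
data = [0.1, -0.2, 0.3, 9.8, 10.1, 10.3]
centers = kmeans_1d(data, 2)
```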

Use Case Approval Intermediate

Review process before deployment.

Use-Case Classification Intermediate

Categorizing AI applications by impact and regulatory risk.

Value at Risk Intermediate

The loss not expected to be exceeded at a given confidence level over a set horizon, under normal market conditions.
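Historical VaR can be read straight off the empirical loss distribution; a minimal sketch with made-up daily returns:

```python
def historical_var(returns, confidence=0.95):
    """Historical VaR: the loss not exceeded in `confidence` of observed periods."""
    losses = sorted(-r for r in returns)  # losses as positive numbers, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# 20 hypothetical daily returns; at 95% this picks out the worst observed loss.
returns = [0.01, -0.02, 0.003, -0.05, 0.012, 0.0, -0.01, 0.02, -0.03, 0.015,
           0.005, -0.008, 0.011, -0.015, 0.007, 0.009, -0.04, 0.002, -0.006, 0.018]
var_95 = historical_var(returns)
```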

Value Function Intermediate

Expected cumulative reward from a state or state-action pair.
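The value function of a small MDP can be computed by value iteration; a sketch on a deterministic two-state example (the MDP itself is invented for illustration):

```python
def value_iteration(P, R, gamma=0.9, iters=100):
    """Compute V(s), the maximum expected discounted return from each state.
    P[s][a] = deterministic next state, R[s][a] = immediate reward."""
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        V = [max(R[s][a] + gamma * V[P[s][a]] for a in range(len(P[s])))
             for s in range(n)]
    return V

# Two states: action 0 stays put (reward 1 only in state 1),
# action 1 jumps to the other state with reward 0.
P = [[0, 1], [1, 0]]
R = [[0.0, 0.0], [1.0, 0.0]]
V = value_iteration(P, R)  # V[1] -> 1/(1-0.9) = 10, V[0] -> 0.9 * 10 = 9
```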

Value Learning Intermediate

Inferring and aligning with human preferences.

Vanishing Gradient Intermediate

Gradients shrink as they propagate backward through layers, stalling learning in early layers; mitigated by ReLU activations, residual connections, and normalization.
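The shrinkage is easy to see numerically: the sigmoid's derivative is at most 0.25, and backprop through a stack of sigmoid layers multiplies one such factor per layer:

```python
import math

def sigmoid_grad(x):
    s = 1 / (1 + math.exp(-x))
    return s * (1 - s)  # peaks at 0.25 when x = 0

# Backprop through 20 sigmoid layers multiplies 20 factors of at most 0.25,
# so the gradient reaching the earliest layer shrinks geometrically.
grad = 1.0
for layer in range(20):
    grad *= sigmoid_grad(0.0)
```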

Variance Term Intermediate

Error due to sensitivity to fluctuations in the training dataset.
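One way to see the variance term: measure how much a simple estimator's prediction moves across bootstrap-resampled training sets, and note that it shrinks with more data (an illustrative sketch; numbers and names are ours):

```python
import random

def prediction_variance(data, n_resamples=200, seed=0):
    """Variance of the sample-mean estimator across resampled training sets:
    the 'variance' part of the bias-variance decomposition."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_resamples):
        sample = [rng.choice(data) for _ in data]
        preds.append(sum(sample) / len(sample))
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

var_small = prediction_variance([1.0, 5.0, 3.0, 7.0])       # few training points
var_big = prediction_variance([1.0, 5.0, 3.0, 7.0] * 10)    # ten times as many
```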

VC Dimension Intermediate

A measure of a model class’s expressive capacity based on its ability to shatter datasets.

Vector Database Intermediate

A datastore optimized for similarity search over embeddings, enabling semantic retrieval at scale.
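At its core this is nearest-neighbor search under a similarity metric; a brute-force sketch of what a vector database accelerates with approximate-nearest-neighbor indexes (toy embeddings, illustrative names):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, index, k=1):
    """Rank stored embeddings by cosine similarity to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 2-d 'embeddings'; real systems use hundreds of dimensions and ANN indexes.
index = {"cat": [1.0, 0.1], "dog": [0.9, 0.3], "car": [0.0, 1.0]}
hits = nearest([1.0, 0.0], index, k=2)
```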

Vision Transformer Intermediate

A transformer applied to a sequence of image patches treated as tokens.
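The first step of a Vision Transformer is cutting the image into fixed-size patches and flattening each into a token vector; a minimal sketch on a nested-list "image":

```python
def patchify(image, patch):
    """Split an H x W image (nested lists) into flattened patch 'tokens'."""
    h, w = len(image), len(image[0])
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tokens.append([image[r + i][c + j]
                           for i in range(patch) for j in range(patch)])
    return tokens

# A 4x4 'image' cut into 2x2 patches yields 4 tokens of length 4,
# which a ViT would then linearly project and feed to the transformer.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
tokens = patchify(img, 2)
```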

Vocabulary Intermediate

The set of tokens a model can represent; impacts efficiency, multilinguality, and handling of rare strings.
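A bare-bones illustration of a vocabulary: map each distinct token to an id, with unseen strings falling back to an unknown token (function names are ours):

```python
def build_vocab(corpus_tokens, unk="<unk>"):
    """Assign each distinct token an integer id; reserve id 0 for unknowns."""
    vocab = {unk: 0}
    for tok in corpus_tokens:
        vocab.setdefault(tok, len(vocab))
    return vocab

def encode(tokens, vocab, unk="<unk>"):
    return [vocab.get(t, vocab[unk]) for t in tokens]

vocab = build_vocab(["the", "cat", "sat", "the"])
ids = encode(["the", "dog"], vocab)  # "dog" is out-of-vocabulary
```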

Voice Conversion Intermediate

Changing speaker characteristics while preserving content.

Wake Word Detection Intermediate

Detecting trigger phrases in continuous audio streams.

Warmup Intermediate

Gradually increasing learning rate at training start to avoid divergence.
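A typical linear warmup schedule, sketched as a function of the step count (parameter names and defaults are illustrative):

```python
def warmup_lr(step, base_lr=1e-3, warmup_steps=100):
    """Linear warmup: ramp the learning rate from near 0 to base_lr,
    then hold it constant."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

# The rate climbs for the first 100 steps, then plateaus at base_lr.
lrs = [warmup_lr(s) for s in range(200)]
```

In practice warmup is usually combined with a decay schedule (e.g. cosine) after the ramp.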

Weight Initialization Intermediate

Methods to set starting weights to preserve signal/gradient scales across layers.
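For example, Glorot/Xavier uniform initialization picks a bound from the layer's fan-in and fan-out so that activation variance is roughly preserved; a minimal sketch:

```python
import math
import random

def xavier_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform init: draw weights from U(-b, b) with
    b = sqrt(6 / (fan_in + fan_out)) to keep signal scales stable."""
    rng = random.Random(seed)
    bound = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-bound, bound) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = xavier_uniform(256, 128)
```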