Difficulty: Intermediate

412 terms

Prompt Intermediate

The text (and possibly other modalities) given to an LLM to condition its output behavior.

Prompt Engineering Intermediate

Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
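
A minimal sketch of the idea: a template combining a role, constraints, and a few-shot example. The wording and field names are illustrative assumptions, not a standard.

```python
# Illustrative prompt template: role, constraints, and one few-shot example.
# The exact wording and the {question} field are assumptions for this sketch.
template = (
    "You are a concise technical tutor.\n"                     # role
    "Rules: answer in at most two sentences; define terms plainly.\n"  # constraints
    "Example:\n"
    "Q: What is overfitting?\n"
    "A: Fitting noise in the training data, which hurts generalization.\n"  # example
    "Q: {question}\n"
    "A:"
)

prompt = template.format(question="What is regularization?")
print(prompt)
```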

Prompt Injection Intermediate

Attacks that manipulate model instructions (especially via retrieved content) to override system goals or exfiltrate data.

Prompt Leakage Intermediate

Extracting system prompts or hidden instructions.

Prompt Sensitivity Intermediate

The tendency for small changes in a prompt to cause large changes in model output.

Prosody Intermediate

Temporal and pitch characteristics of speech.

Pruning Intermediate

Removing weights or neurons to shrink models and improve efficiency; can be structured or unstructured.
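
A minimal NumPy sketch of unstructured magnitude pruning; the 50% sparsity target and matrix size are arbitrary choices for illustration.

```python
import numpy as np

# Unstructured magnitude pruning: zero out the smallest-magnitude weights.
def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    k = int(weights.size * sparsity)                     # number of weights to drop
    threshold = np.sort(np.abs(weights), axis=None)[k]   # k-th smallest magnitude
    mask = np.abs(weights) >= threshold                  # keep only the larger weights
    return weights * mask

w = np.random.randn(4, 4)
print(magnitude_prune(w, sparsity=0.5))
```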

Q-Function Intermediate

Expected return of taking a given action in a given state and then following the policy thereafter.
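
A tabular Q-learning sketch of the idea: Q[s, a] estimates the expected return of taking action a in state s and acting greedily afterward. The state/action counts and hyperparameters are illustrative.

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99                              # learning rate, discount factor

def q_update(s, a, reward, s_next):
    td_target = reward + gamma * Q[s_next].max()      # bootstrapped return estimate
    Q[s, a] += alpha * (td_target - Q[s, a])          # move estimate toward the target

q_update(s=0, a=1, reward=1.0, s_next=2)
print(Q[0])
```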

Quantization Intermediate

Reducing numeric precision of weights/activations to speed inference and reduce memory with acceptable accuracy loss.
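
A minimal sketch of symmetric per-tensor int8 quantization in NumPy; the tensor size is illustrative.

```python
import numpy as np

# Map float weights to 8-bit integers with a single scale, then dequantize
# to inspect the approximation error.
def quantize_int8(x: np.ndarray):
    scale = np.abs(x).max() / 127.0                       # largest value maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(3, 3).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale                      # dequantized approximation
print("max abs error:", np.abs(w - w_hat).max())
```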

Rademacher Complexity Intermediate

Measures a hypothesis class's ability to fit random sign labels (noise); used to bound generalization error.
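
The empirical Rademacher complexity of a hypothesis class on a sample, in the standard notation (the class, sample, and sign variables are the usual conventions, not taken from this glossary):

```latex
\hat{\mathfrak{R}}_S(\mathcal{F}) \;=\;
\mathbb{E}_{\sigma}\left[\, \sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(x_i) \right],
\qquad \sigma_i \in \{-1, +1\} \text{ i.i.d.\ uniform.}
```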

RAG Intermediate

Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
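
A minimal sketch of the retrieve-then-generate flow, assuming cosine similarity over unit-norm embeddings; the `embed()` function here is a fake deterministic stand-in for a real embedding model, and the documents are illustrative.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: real systems use a trained embedding model.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

docs = ["Quantization reduces precision.", "Pruning removes weights.", "RAG retrieves documents."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list:
    scores = doc_vecs @ embed(query)                 # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(-scores)[:k]]

query = "What does RAG do?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)                                        # this prompt would be sent to the LLM
```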

Recall Intermediate

The fraction of actual positives correctly identified; sensitive to false negatives.
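
A worked example of recall = TP / (TP + FN); the labels below are illustrative.

```python
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # positives found
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # positives missed
recall = tp / (tp + fn)
print(recall)   # 3 / (3 + 1) = 0.75
```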

Recurrent Neural Network Intermediate

Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
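
A minimal sketch of a vanilla RNN cell in NumPy: the hidden state is updated from the previous hidden state and the current input at each time step. The dimensions and sequence length are illustrative.

```python
import numpy as np

input_dim, hidden_dim = 8, 16
Wx = np.random.randn(hidden_dim, input_dim) * 0.1
Wh = np.random.randn(hidden_dim, hidden_dim) * 0.1
b = np.zeros(hidden_dim)

def rnn_step(h_prev, x_t):
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)   # recurrence: h_t depends on h_{t-1}

h = np.zeros(hidden_dim)
for x_t in np.random.randn(5, input_dim):        # unroll over a length-5 sequence
    h = rnn_step(h, x_t)
print(h.shape)
```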

Red Teaming Intermediate

Stress-testing models for failures, vulnerabilities, policy violations, and harmful behaviors before release.

Regularization Intermediate

Techniques that discourage overly complex solutions to improve generalization (reduce overfitting).
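
A minimal sketch of one common form, an L2 (weight decay) penalty added to the task loss; the loss, weights, and penalty strength `lam` are illustrative.

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=1e-2):
    mse = np.mean((y_true - y_pred) ** 2)          # task loss
    l2_penalty = lam * np.sum(weights ** 2)        # discourages large weights
    return mse + l2_penalty

w = np.array([0.5, -1.2, 2.0])
print(regularized_loss(np.array([1.0, 0.0]), np.array([0.9, 0.2]), w))
```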

Reinforcement Learning Intermediate

A learning paradigm where an agent interacts with an environment and learns to choose actions to maximize cumulative reward.

ReLU Intermediate

Activation max(0, x); improves gradient flow and training speed in deep nets.
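
A one-function NumPy sketch: negative inputs are zeroed, positive inputs pass through unchanged, so the gradient is 1 wherever the unit is active.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))   # [0.  0.  0.  1.5]
```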

Representation Learning Intermediate

Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.

Reproducibility Intermediate

Ability to replicate results given the same code and data; harder with distributed training and nondeterministic ops.

Residual Connection Intermediate

A skip connection that adds a layer's input to its output, letting gradients bypass the layer and enabling very deep networks.
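
A minimal NumPy sketch; the inner `layer` is a placeholder transformation and the dimensions are illustrative.

```python
import numpy as np

def layer(x, W):
    return np.maximum(0.0, W @ x)          # stand-in transformation f(x)

def residual_block(x, W):
    return x + layer(x, W)                 # skip connection: output = x + f(x)

x = np.random.randn(4)
W = np.random.randn(4, 4) * 0.1
print(residual_block(x, W))
```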

Responsible AI Intermediate

A discipline ensuring AI systems are fair, safe, transparent, privacy-preserving, and accountable throughout lifecycle.

Restricted Boltzmann Machine Intermediate

A Boltzmann Machine restricted to a bipartite graph of visible and hidden units, with no intra-layer connections.

Reward Model Intermediate

Model trained to predict human preferences (or utility) for candidate outputs; used in RLHF-style pipelines.
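
A sketch of the pairwise preference loss commonly used to train such models (Bradley-Terry style): the chosen response should score higher than the rejected one. The scalar scores below are illustrative; a real reward model produces them from (prompt, response) pairs.

```python
import numpy as np

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    # -log sigmoid(r_chosen - r_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

print(preference_loss(2.0, 0.5))   # small loss: chosen already scores higher
print(preference_loss(0.5, 2.0))   # large loss: ranking is wrong
```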

Risk Model Intermediate

A model for quantifying financial risk.

Risk Register Intermediate

Central log of AI-related risks.

Risk Stratification Intermediate

Grouping patients by predicted outcomes.

RLHF Intermediate

Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.

Robust Control Intermediate

Control that remains stable under model uncertainty.

ROC Curve Intermediate

Plots true positive rate vs false positive rate across thresholds; summarizes separability.
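
A minimal sketch of how the curve's points are computed: sweep a decision threshold over predicted scores and record the true positive rate and false positive rate at each point. The labels and scores are illustrative.

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9])

for thr in sorted(set(scores), reverse=True):
    pred = scores >= thr
    tpr = (pred & (y_true == 1)).sum() / (y_true == 1).sum()   # recall at this threshold
    fpr = (pred & (y_true == 0)).sum() / (y_true == 0).sum()   # false alarms at this threshold
    print(f"threshold={thr:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```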

Rotary Positional Embeddings Intermediate

Encodes positional information by rotating query/key vectors by position-dependent angles, so attention scores depend on relative position.
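
A minimal NumPy sketch of the rotation, using the split-half pairing convention; the base of 10000 follows common practice, and the vector size is illustrative.

```python
import numpy as np

def rope(x: np.ndarray, position: int) -> np.ndarray:
    d = x.shape[-1]
    half = d // 2
    freqs = 1.0 / (10000 ** (np.arange(half) / half))    # per-pair rotation frequencies
    theta = position * freqs                             # angles grow with position
    x1, x2 = x[:half], x[half:]                          # dimension pairs (x1[i], x2[i])
    return np.concatenate([x1 * np.cos(theta) - x2 * np.sin(theta),
                           x1 * np.sin(theta) + x2 * np.cos(theta)])

q = np.random.randn(8)
print(rope(q, position=3))
```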