Results for "model-based"

114 results

Concept Drift Intermediate

The relationship between inputs and outputs changes over time, requiring monitoring and model updates.

Foundations & Theory
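A minimal sketch of one way such monitoring can work, assuming the model's per-example errors are logged as a 0/1 stream. The `error_rate_drift` helper and its 3-standard-error threshold are illustrative, loosely following error-rate drift monitors such as DDM, not any particular library's API:

```python
import numpy as np

def error_rate_drift(errors, window=200, threshold=3.0):
    """Flag drift when the recent error rate exceeds the historical
    rate by `threshold` standard errors (a simplified version of
    error-rate monitors such as DDM).

    errors: 0/1 array where 1 means the model was wrong on that example.
    """
    history, recent = errors[:-window], errors[-window:]
    p = history.mean()
    se = np.sqrt(p * (1 - p) / window)
    return bool(recent.mean() > p + threshold * se)

# Deterministic toy streams: a steady 10% error rate, and a stream whose
# last 200 examples jump to 30% error -- the input-output relationship
# changed, so the old model misfires more often.
pattern = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
stable = np.array(pattern * 200)
shifted = np.concatenate([np.array(pattern * 180),
                          np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0] * 20)])
```

On the stable stream the monitor stays quiet; on the shifted stream the recent window's error rate clears the threshold and the flag fires.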
Feature Intermediate

A measurable property or attribute used as model input (raw or engineered), such as age, pixel intensity, or token ID.

Foundations & Theory
Parameters Intermediate

The learned numeric values of a model adjusted during training to minimize a loss function.

Foundations & Theory
Overfitting Intermediate

When a model fits noise/idiosyncrasies of training data and performs poorly on unseen data.

Foundations & Theory
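A small demonstration of the effect using polynomial regression (the data, degrees, and names here are illustrative): a high-capacity model memorizes the training noise while its test error blows up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, x_test.size)

def fit_errors(degree):
    """Train/test mean squared error of a degree-`degree` polynomial fit."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_err, test_err

low_train, low_test = fit_errors(3)     # sensible capacity
high_train, high_test = fit_errors(14)  # enough capacity to memorize the noise
# The degree-14 fit drives training error toward zero while test error
# grows sharply: the signature of overfitting.
```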
Underfitting Intermediate

When a model cannot capture underlying structure, performing poorly on both training and test data.

Foundations & Theory
Generalization Intermediate

How well a model performs on new data drawn from the same (or similar) distribution as training.

Foundations & Theory
Vocabulary Intermediate

The set of tokens a model can represent; impacts efficiency, multilinguality, and handling of rare strings.

Transformers & LLMs
Next-Token Prediction Intermediate

Training objective where the model predicts the next token given the previous tokens (causal language modeling).

Foundations & Theory
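The objective reduces to cross-entropy over each position's true next token. The `next_token_loss` function below is an illustrative NumPy sketch, not any particular framework's API:

```python
import numpy as np

def next_token_loss(logits, targets):
    """Mean cross-entropy of predicting each next token.

    logits:  (seq_len, vocab_size) scores for the token at each
             position, computed from the tokens before that position.
    targets: (seq_len,) the tokens that actually came next.
    """
    # Log-softmax, shifted by the max for numerical stability.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Negative log-probability assigned to each true next token.
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy example: vocabulary of 5 tokens, 3 prediction positions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
targets = np.array([2, 0, 4])
loss = next_token_loss(logits, targets)
```

Sanity check: with all-zero logits the model is uniform over the vocabulary, so the loss equals ln(vocab_size).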
Context Window Intermediate

Maximum number of tokens the model can attend to in one forward pass; constrains long-document reasoning.

Transformers & LLMs
System Prompt Intermediate

A high-priority instruction layer setting overarching behavior constraints for a chat model.

Transformers & LLMs
Hallucination Intermediate

Model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.

Model Failure Modes
Fine-Tuning Intermediate

Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.

Large Language Models
SFT Intermediate

Supervised fine-tuning: training on (prompt, response) pairs to align a model with instruction-following behavior.

Foundations & Theory
RLHF Intermediate

Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.

Reinforcement Learning
Alignment Intermediate

Ensuring model behavior matches human goals, norms, and constraints, including reducing harmful or deceptive outputs.

Foundations & Theory
Bias Intermediate

Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.

Foundations & Theory
Explainability Intermediate

Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.

Foundations & Theory
LIME Intermediate

Local surrogate explanation method approximating model behavior near a specific input.

Foundations & Theory
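A stripped-down sketch of the idea for a tabular input, assuming Gaussian perturbations and an exponential proximity kernel (the real LIME library adds discretization, feature selection, and support for text and images; all names here are illustrative):

```python
import numpy as np

def lime_weights(predict_fn, x, num_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate to predict_fn in a neighborhood of x.

    Perturb x with Gaussian noise, weight each perturbed sample by an
    exponential kernel on its distance to x, and solve weighted least
    squares; the coefficients are local feature attributions.
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, 0.5, size=(num_samples, x.size))
    y = predict_fn(Z)
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)   # proximity weights
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((num_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coefs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coefs[:-1]   # drop the intercept

# A black-box model that, near this point, depends almost only on feature 0.
black_box = lambda Z: 3.0 * Z[:, 0] + 0.01 * np.sin(Z[:, 1])
attributions = lime_weights(black_box, np.array([1.0, 1.0]))
```

The surrogate recovers a large attribution (about 3) for feature 0 and a near-zero attribution for feature 1, matching the black box's local behavior.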
Audit Intermediate

Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.

Governance & Ethics
Monitoring Intermediate

Observing model inputs/outputs, latency, cost, and quality over time to catch regressions and drift.

MLOps & Infrastructure
Distillation Intermediate

Training a smaller “student” model to mimic a larger “teacher,” often improving efficiency while retaining performance.

Foundations & Theory
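The standard soft-label distillation loss can be sketched as a KL divergence between temperature-softened teacher and student distributions (following the T² scaling convention from Hinton et al.'s distillation paper; the function names are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    A higher T flattens both distributions, so the student also learns
    the teacher's relative rankings of the wrong classes ("dark
    knowledge"). The T**2 factor keeps the gradient scale comparable
    across temperatures.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

teacher = np.array([[4.0, 1.0, 0.5]])
student = np.array([[2.0, 2.0, 0.0]])
loss = distillation_loss(student, teacher)
zero = distillation_loss(teacher, teacher)   # identical distributions -> 0
```

In practice this term is usually mixed with the ordinary hard-label cross-entropy via a weighting coefficient.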
Logits Intermediate

Raw model outputs before conversion to probabilities (e.g., via softmax); manipulated during decoding and calibration.

Foundations & Theory
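A minimal sketch of one common decoding-time manipulation, temperature scaling of the logits before the softmax:

```python
import numpy as np

def decode_probs(logits, temperature=1.0):
    """Convert raw logits into a sampling distribution.

    temperature < 1 sharpens the distribution (greedier decoding);
    temperature > 1 flattens it (more diverse sampling).
    """
    scaled = logits / temperature
    scaled = scaled - scaled.max()      # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = np.array([2.0, 1.0, 0.1])
sharp = decode_probs(logits, temperature=0.5)
flat = decode_probs(logits, temperature=2.0)
```

The low-temperature distribution concentrates more mass on the top-scoring token than the high-temperature one does, while both still sum to 1.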
Eval Harness Intermediate

System for running consistent evaluations across tasks, versions, prompts, and model settings.

Foundations & Theory
Adversarial Example Intermediate

Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.

Foundations & Theory
Prompt Injection Intermediate

Attacks that manipulate model instructions (especially via retrieved content) to override system goals or exfiltrate data.

Foundations & Theory
Secure Inference Intermediate

Methods to protect model/data during inference (e.g., trusted execution environments) from operators/attackers.

Foundations & Theory
Human-in-the-Loop Intermediate

System design where humans validate or guide model outputs, especially for high-stakes decisions.

Foundations & Theory
Function Calling Intermediate

Constraining model outputs into a schema used to call external APIs/tools safely and deterministically.

Foundations & Theory
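A sketch of the receiving side: parsing a model's JSON tool call and checking it against a schema before dispatching to the real API. The `get_weather` schema and `validate_call` helper are hypothetical, and production code would use a full JSON Schema validator rather than these minimal checks:

```python
import json

# A hypothetical tool schema in the style used by chat-model APIs.
GET_WEATHER = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def validate_call(raw: str, schema: dict) -> dict:
    """Parse a model's JSON tool-call arguments and enforce the schema
    before dispatch (illustrative minimal checks only)."""
    args = json.loads(raw)
    params = schema["parameters"]
    for field in params["required"]:
        if field not in args:
            raise ValueError(f"missing required field: {field}")
    for field, spec in params["properties"].items():
        if field in args and "enum" in spec and args[field] not in spec["enum"]:
            raise ValueError(f"invalid value for {field}: {args[field]}")
    return args

args = validate_call('{"city": "Oslo", "unit": "celsius"}', GET_WEATHER)
```

Rejecting malformed or out-of-schema calls here is what makes downstream tool use safe and deterministic.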
PAC Learning Intermediate

A hypothesis class is PAC-learnable if, with high probability, an approximately correct hypothesis can be learned from finitely many samples.

Foundations & Theory
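The classic finite-hypothesis-class bound makes this concrete: in the realizable setting, m >= (1/eps) * (ln|H| + ln(1/delta)) examples suffice for error at most eps with probability at least 1 - delta. A small calculator:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Sample-complexity bound for a finite hypothesis class in the
    realizable PAC setting: m >= (1/eps) * (ln|H| + ln(1/delta))
    examples suffice to reach error <= eps with probability >= 1 - delta.
    """
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# One million hypotheses, 5% error tolerance, 99% confidence.
m = pac_sample_bound(hypothesis_count=10**6, epsilon=0.05, delta=0.01)
```

Note the sample size grows only logarithmically in the size of the hypothesis class.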
Rademacher Complexity Intermediate

Measures a hypothesis class’s capacity to fit random noise; used to bound generalization error.

Foundations & Theory
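An illustrative Monte Carlo estimate of the empirical Rademacher complexity, contrasting a trivial hypothesis class with one rich enough to fit any noise pattern (all names here are illustrative):

```python
import numpy as np

def empirical_rademacher(predictions, num_trials=2000, seed=0):
    """Monte Carlo estimate of empirical Rademacher complexity:

        R_hat = E_sigma[ max_h (1/n) * sum_i sigma_i * h(x_i) ]

    predictions: (num_hypotheses, n) matrix, one row per hypothesis,
    holding that hypothesis's outputs on the n sample points.
    """
    rng = np.random.default_rng(seed)
    _, n = predictions.shape
    total = 0.0
    for _ in range(num_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)     # random noise labels
        total += np.max(predictions @ sigma) / n    # best fit to the noise
    return total / num_trials

# Two tiny hypothesis classes on n = 8 points: a single constant
# hypothesis, versus all 256 sign patterns (which can fit any noise).
n = 8
constant = np.ones((1, n))
rich = np.array([[1.0 if (i >> j) & 1 else -1.0 for j in range(n)]
                 for i in range(2 ** n)])
r_constant = empirical_rademacher(constant)   # near 0: cannot fit noise
r_rich = empirical_rademacher(rich)           # exactly 1: fits any noise
```

The richer class attains the maximum complexity of 1, which is precisely why bounds based on this quantity predict poor generalization for classes that can fit arbitrary noise.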