Results for "regulated use"
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Software regulated as a medical device.
Logged record of model inputs, outputs, and decisions.
Letting an LLM call external functions/APIs to fetch data, run computations, or take actions, which can improve factual grounding and reliability.
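A minimal sketch of the host side of tool use, assuming the common pattern where the model emits a tool name plus JSON arguments and the host executes the matching function. The registry and tool names here are purely illustrative, not any particular framework's API.

```python
import json

# Hypothetical tool registry: maps a tool name the model may emit to a
# Python callable. Real frameworks use the same shape (name + JSON args).
TOOLS = {
    "add": lambda a, b: a + b,
    "lookup_capital": lambda country: {"France": "Paris"}.get(country, "unknown"),
}

def dispatch(tool_call_json):
    """Execute one model-emitted tool call and return its result."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A mock model output requesting a computation instead of guessing the answer.
result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 40}}')
print(result)  # 42
```

In a full loop, the result would be appended to the conversation and the model asked to continue with the fetched value in context.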
Review process before deployment.
Standardized documentation describing intended use, performance, limitations, data, and ethical considerations.
Categorizing AI applications by impact and regulatory risk.
Requirement to inform users about AI use.
Predicting borrower default risk.
How well a model performs on new data drawn from the same (or similar) distribution as training.
Gradients shrink through layers, slowing learning in early layers; mitigated by ReLU, residuals, normalization.
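A small numerical sketch of the effect described above: backpropagating a unit gradient through a stack of sigmoid layers (whose derivative is at most 0.25) shrinks the gradient that reaches early layers. Layer sizes and initialisation here are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradient_norms_through_depth(depth, width=8, seed=0):
    """Backpropagate a unit gradient through `depth` sigmoid layers and
    record the gradient norm reaching each layer (last layer first)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=width)
    Ws, pre_acts = [], []
    for _ in range(depth):  # forward pass, storing pre-activations
        W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
        z = W @ h
        Ws.append(W)
        pre_acts.append(z)
        h = sigmoid(z)
    g = np.ones(width)      # gradient at the output
    norms = []
    for W, z in zip(reversed(Ws), reversed(pre_acts)):
        s = sigmoid(z)
        g = W.T @ (g * s * (1 - s))  # sigmoid' = s(1-s) <= 0.25
        norms.append(np.linalg.norm(g))
    return norms

norms = gradient_norms_through_depth(depth=20)
print(norms[0], norms[-1])  # gradient reaching the earliest layer is far smaller
```

Replacing the sigmoid with ReLU, or adding residual connections, keeps these norms from collapsing, which is why those mitigations help.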
Networks using convolution operations with weight sharing and locality, effective for images and signals.
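The core operation can be sketched directly: one small kernel (the shared weights) is slid over every spatial location, so the same feature detector applies everywhere. This is a plain 'valid'-padding cross-correlation, the building block convolutional layers use.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D cross-correlation with 'valid' padding: the same small kernel
    (shared weights) is applied at every spatial location."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector responds only where the image changes left-to-right.
img = np.zeros((5, 5))
img[:, 3:] = 1.0                      # dark left half, bright right half
edge_kernel = np.array([[-1.0, 1.0]])
resp = conv2d_valid(img, edge_kernel)
print(resp)                           # nonzero only at the edge column
```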
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
Training objective where the model predicts the next token given previous tokens (causal modeling).
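The objective above is just the average negative log-likelihood of each next token under a softmax over the vocabulary. A toy version with a hand-built logit matrix (vocabulary size and sequence are arbitrary illustrations):

```python
import numpy as np

def next_token_nll(logits, targets):
    """Average negative log-likelihood of the next token at each position;
    logits[t] scores the token that should follow position t."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy vocab of 4 tokens; sequence [2, 0, 3]: predict tokens[1:] from tokens[:-1].
tokens = np.array([2, 0, 3])
T, V = len(tokens) - 1, 4
logits = np.zeros((T, V))
loss_uniform = next_token_nll(logits, tokens[1:])  # chance level: ln(4)
logits[np.arange(T), tokens[1:]] = 10.0            # confident, correct predictions
loss_good = next_token_nll(logits, tokens[1:])
print(loss_uniform, loss_good)
```

Training a causal language model amounts to minimising this quantity over a large corpus, with a mask ensuring position t only sees tokens before it.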
Stepwise reasoning patterns that can improve multi-step tasks; often handled implicitly or summarized for safety/privacy.
When some classes are rare, requiring reweighting, resampling, or specialized metrics.
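One of the mitigations named above, reweighting, can be sketched as inverse-frequency per-example weights; the 90/10 split is an illustrative example.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-example weights proportional to 1/class-frequency, normalised
    so the weights average to 1 over the dataset."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    class_w = len(labels) / (n_classes * counts)  # rare classes get large weights
    return class_w[labels]

# 90 negatives vs 10 positives: the rare class is upweighted 9x relative.
labels = np.array([0] * 90 + [1] * 10)
w = inverse_frequency_weights(labels, n_classes=2)
print(w[0], w[-1])  # majority ~0.56, minority 5.0
```

These weights would multiply each example's loss term; resampling and metrics such as precision/recall or AUPRC are the complementary mitigations.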
A hidden variable influences both cause and effect, biasing naive estimates of causal impact.
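A simulated example of this bias, with illustrative coefficients: a hidden variable U raises both the chance of treatment and the outcome, so the naive treated-vs-untreated contrast overstates the true effect, while stratifying on U recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
U = rng.binomial(1, 0.5, n)                  # hidden confounder
T = rng.binomial(1, 0.2 + 0.6 * U)           # treated more often when U = 1
Y = 1.0 * T + 3.0 * U + rng.normal(size=n)   # true treatment effect is 1.0

naive = Y[T == 1].mean() - Y[T == 0].mean()
# Stratify on U and average the within-stratum contrasts (back-door adjustment;
# strata are equally likely here, so an unweighted average suffices).
adjusted = np.mean([Y[(T == 1) & (U == u)].mean() - Y[(T == 0) & (U == u)].mean()
                    for u in (0, 1)])
print(naive, adjusted)  # naive is biased upward; adjusted is close to 1.0
```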
Techniques that fine-tune small additional components rather than all weights to reduce compute and storage.
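One such technique, a low-rank adapter (LoRA-style), can be sketched in a few lines: the frozen weight is augmented by a trainable rank-r product, initialised so the adapter starts as a no-op. Dimensions here are illustrative.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Adapter forward pass: frozen weight W plus a trainable
    rank-r update B @ A (the low-rank parameterisation)."""
    return x @ (W + alpha * (B @ A)).T

d_out, d_in, r = 6, 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight
A = rng.normal(size=(r, d_in))      # trainable
B = np.zeros((d_out, r))            # zero init: adapter starts as a no-op
x = rng.normal(size=(3, d_in))

assert np.allclose(lora_forward(x, W, A, B), x @ W.T)  # unchanged at init
# Trainable parameters: r*(d_in+d_out) = 28 vs d_in*d_out = 48 for full tuning.
```

Only A and B receive gradients, so optimiser state and checkpoints shrink accordingly; the saving grows with layer size.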
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
A dataset + metric suite for comparing models; can be gamed or misaligned with real-world goals.
AI subfield dealing with understanding and generating human language, including syntax, semantics, and pragmatics.
Variability introduced by minibatch sampling during SGD.
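A quick demonstration of that variability on a toy squared loss: the minibatch gradient is an unbiased but noisy estimate of the full-batch gradient, and its spread shrinks as the batch grows. Data and batch sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, size=10_000)
theta = 0.0
# Gradient of the loss (theta - x)^2 / 2 w.r.t. theta is (theta - x).
full_grad = (theta - data).mean()

def minibatch_grad(batch_size):
    batch = rng.choice(data, size=batch_size, replace=False)
    return (theta - batch).mean()

small = np.array([minibatch_grad(8) for _ in range(2_000)])
large = np.array([minibatch_grad(512) for _ in range(2_000)])
print(small.std(), large.std())  # smaller batches give noisier estimates
```

Both estimators centre on the full-batch gradient; the noise scales roughly as 1/sqrt(batch size), which is why batch size trades compute against gradient quality.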
Learning from data generated by a different policy.
Framework for identifying, measuring, and mitigating model risks.
Ensuring decisions can be explained and traced.
Embedding signals to prove model ownership.
Generator produces limited variety of outputs.
Assigning category labels to images.
Aligns transcripts with audio timestamps.
Probability of treatment assignment given covariates.
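With a discrete covariate, this probability can be estimated directly as the treated fraction within each stratum; the simulation below uses illustrative probabilities.

```python
import numpy as np

def propensity_by_stratum(X, T):
    """Estimate P(T=1 | X=x) as the empirical treated fraction
    within each discrete covariate stratum."""
    return {int(x): T[X == x].mean() for x in np.unique(X)}

rng = np.random.default_rng(0)
X = rng.integers(0, 2, 50_000)           # one binary covariate
true_p = np.where(X == 1, 0.7, 0.3)      # treatment depends on X
T = rng.binomial(1, true_p)
est = propensity_by_stratum(X, T)
print(est)  # close to {0: 0.3, 1: 0.7}
```

With continuous covariates the same quantity is typically fit with a model such as logistic regression, then used for matching or inverse-probability weighting.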