Results for "bias"
Bias
Intermediate
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Bias in AI is like having a favorite team that you always cheer for, even if they don't play well. In machine learning, this means that a model might perform better for some groups of people than for others, often because of the data it was trained on. For example, if an AI is trained mostly on d...
Systematic error introduced by simplifying assumptions in a learning algorithm.
Built-in assumptions guiding learning efficiency and generalization.
A conceptual framework describing error as the sum of systematic error (bias) and sensitivity to data (variance).
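The decomposition above can be estimated empirically: fit a deliberately simple model many times on fresh noisy datasets and measure how far its average prediction sits from the truth (bias) versus how much individual fits scatter (variance). A minimal sketch, with an invented quadratic target and a constant-mean model chosen purely for illustration:

```python
import random

random.seed(0)

def true_f(x):
    return x * x  # the underlying function we are trying to learn

# A high-bias model: predict the mean of the noisy training targets,
# ignoring x entirely. Refit it on many fresh datasets, then decompose
# its expected squared error at a fixed query point x0.
x0, n_datasets, n_points, noise = 0.8, 2000, 20, 0.1
preds = []
for _ in range(n_datasets):
    ys = [true_f(random.uniform(0, 1)) + random.gauss(0, noise)
          for _ in range(n_points)]
    preds.append(sum(ys) / n_points)  # the constant model's prediction

mean_pred = sum(preds) / len(preds)
bias_sq = (mean_pred - true_f(x0)) ** 2                      # systematic error
variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)  # data sensitivity
```

For this overly rigid model the squared bias dominates the variance; a very flexible model would show the opposite pattern.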
Tendency to trust automated suggestions even when incorrect; mitigated by UI design, training, and checks.
Unequal performance across demographic groups.
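Such disparities are typically measured by computing the same metric per group and comparing. A small sketch on invented records (group tag, prediction, true label), using accuracy as the metric:

```python
# Hypothetical (group, prediction, label) records for illustration only.
records = [
    ("a", 1, 1), ("a", 0, 0), ("a", 1, 1), ("a", 1, 0),
    ("b", 1, 0), ("b", 0, 1), ("b", 1, 1), ("b", 0, 0),
]

def accuracy(group):
    hits = [pred == label for g, pred, label in records if g == group]
    return sum(hits) / len(hits)

# Accuracy gap between the two groups; large gaps flag unequal performance.
gap = abs(accuracy("a") - accuracy("b"))
```

The same pattern applies with any per-group metric (false-positive rate, recall, calibration error).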
Error due to sensitivity to fluctuations in the training dataset.
Differences between the conditions under which a model is trained and those under which it serves predictions.
Separating data into training (fit), validation (tune), and test (final estimate) to avoid leakage and optimism bias.
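A minimal sketch of that separation, with placeholder data and an assumed 70/15/15 ratio: shuffle once before splitting so ordering cannot leak, fit on train, tune on validation, and touch test only for the final estimate.

```python
import random

random.seed(42)
data = list(range(100))  # stand-in for 100 labeled examples
random.shuffle(data)     # shuffle before splitting to avoid ordering leakage

# 70/15/15 split: fit on train, tune hyperparameters on validation,
# report the test score exactly once at the end.
train = data[:70]
val = data[70:85]
test = data[85:]
```

Reusing the test set to pick models quietly turns it into a second validation set, which is exactly the optimism bias the split is meant to prevent.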
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
Techniques that discourage overly complex solutions to improve generalization (reduce overfitting).
When a model cannot capture underlying structure, performing poorly on both training and test data.
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
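A sketch of the k-fold variant: partition the indices into k folds, hold each fold out once as validation while training on the rest, and summarize the k scores to estimate both performance and its variability.

```python
def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k contiguous folds over n examples."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        # The last fold absorbs any remainder when n is not divisible by k.
        val = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        held_out = set(val)
        train = [j for j in idx if j not in held_out]
        yield train, val
```

In practice each (train, val) pair trains a fresh model; the mean of the k validation scores is the performance estimate, and their spread is the variability.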
A parameterized function composed of interconnected units organized in layers with nonlinear activations.
Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Policies and practices for approving, monitoring, auditing, and documenting models in production.
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
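The standard conversion is the softmax, and decoding-time temperature is one such manipulation: dividing logits by a temperature above 1 flattens the distribution, below 1 sharpens it. A self-contained sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature rescales before exponentiation."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Calibration methods such as temperature scaling fit this single temperature on held-out data so the output probabilities better match observed frequencies.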
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
A narrow hidden layer forcing compact representations.
Framework for identifying, measuring, and mitigating model risks.
Probability of treatment assignment given covariates.
Artificial sensor data generated in simulation.
AI supporting legal research, drafting, and analysis.
AI predicting crime patterns (highly controversial).
Ensuring models comply with lending fairness laws.
Trend reversal when data is aggregated improperly.
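The reversal is easy to reproduce numerically. In this sketch (all counts invented for illustration) a treatment has the higher recovery rate inside every severity stratum, yet loses after naive pooling, because it was given mostly to severe cases:

```python
# Hypothetical recovery counts as (recovered, total) per stratum.
groups = {
    "mild":   {"treat": (8, 10),   "control": (70, 100)},
    "severe": {"treat": (40, 100), "control": (3, 10)},
}

def rate(recovered_total):
    recovered, total = recovered_total
    return recovered / total

# Within each stratum the treatment has the higher recovery rate.
per_stratum = {name: (rate(g["treat"]), rate(g["control"]))
               for name, g in groups.items()}

# Pooled across strata, the ordering reverses: aggregation ignores that
# the strata have very different sizes in each arm.
treat_pool = (sum(g["treat"][0] for g in groups.values())
              / sum(g["treat"][1] for g in groups.values()))
control_pool = (sum(g["control"][0] for g in groups.values())
                / sum(g["control"][1] for g in groups.values()))
```

The cure is to condition on (or stratify by) the confounding variable rather than aggregating over it.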