Results for "aggregation bias"
Systematic error introduced by simplifying assumptions in a learning algorithm.
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Built-in assumptions that steer a learner toward certain solutions, shaping sample efficiency and generalization.
A conceptual framework decomposing expected error into systematic error (bias), sensitivity to the training data (variance), and irreducible noise.
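For squared error at a point x, the decomposition described above can be written out (with f the true function, f̂ the learned predictor, and σ² the noise variance):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

The expectation is over training sets drawn from the data distribution; only the first two terms can be traded off by model choice.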
Tendency to trust automated suggestions even when incorrect; mitigated by UI design, training, and checks.
Unequal performance across demographic groups.
A trend present within subgroups reverses or disappears when the subgroups are aggregated, typically because of a confounding variable.
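This reversal (commonly known as Simpson's paradox) is easiest to see numerically. The sketch below uses the often-cited kidney-stone figures (Charig et al., 1986): treatment A beats B within each stone-size subgroup, yet looks worse once the subgroups are pooled.

```python
# Simpson's paradox: a treatment can win in every subgroup yet lose overall.
# Figures are the classic kidney-stone dataset (Charig et al., 1986),
# stored as (successes, total) per treatment and stone size.
data = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, total):
    return successes / total

for size in ("small", "large"):
    # A has the higher success rate within each subgroup...
    assert rate(*data["A"][size]) > rate(*data["B"][size])

overall = {t: rate(sum(s for s, _ in g.values()), sum(n for _, n in g.values()))
           for t, g in data.items()}
# ...yet B looks better after aggregation, because A was assigned
# disproportionately to the harder (large-stone) cases.
assert overall["B"] > overall["A"]
print(overall)  # A ≈ 0.78, B ≈ 0.83
```

The confounder here is case difficulty; stratifying by it recovers the true ordering.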
Error due to sensitivity to fluctuations in the training dataset.
Differences between training and inference conditions (e.g., in feature computation or data distribution) that degrade deployed performance.
The capability to infer a system's internal state from its external telemetry (logs, metrics, traces); crucial for operating AI services and agents.
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
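A single round of the scheme above can be sketched on a toy graph. This is a minimal illustration only: the graph, features, and `self_weight` mixing parameter are made up, and real GNN layers replace the fixed average-and-mix with learned weights and nonlinearities.

```python
# One round of message passing on a toy undirected graph: each node
# aggregates (here: averages) its neighbors' features, then combines
# the result with its own feature to produce an updated representation.
adjacency = {0: [1, 2], 1: [0], 2: [0]}   # node -> list of neighbors
features = {0: 1.0, 1: 3.0, 2: 5.0}       # scalar feature per node

def message_passing_round(adj, feats, self_weight=0.5):
    out = {}
    for node, neighbors in adj.items():
        agg = sum(feats[n] for n in neighbors) / len(neighbors)   # aggregate
        out[node] = self_weight * feats[node] + (1 - self_weight) * agg  # update
    return out

updated = message_passing_round(adjacency, features)
print(updated)  # node 0 moves toward mean(3, 5) = 4
```

Stacking k such rounds lets information propagate k hops across the graph.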
Predicted probabilities do not match the empirically observed rates of correctness.
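A standard way to quantify the mismatch described above is expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its accuracy. The numbers below are made up for illustration.

```python
# Expected calibration error: confidence-weighted gap between stated
# confidence and observed accuracy, computed over equal-width bins.
def ece(confidences, correct, n_bins=5):
    total = len(confidences)
    err = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        err += (len(idx) / total) * abs(avg_conf - acc)
    return err

# Toy example: 0.5-confidence predictions are right half the time (calibrated),
# but 0.9-confidence predictions are always right (underconfident by 0.1).
assert abs(ece([0.5, 0.5, 0.9, 0.9], [1, 0, 1, 1]) - 0.05) < 1e-9
```

An ECE of zero means stated confidences track observed accuracy in every bin.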
GNN using attention to weight neighbor contributions dynamically.
Separating data into training (fit), validation (tune), and test (final estimate) to avoid leakage and optimism bias.
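The three-way split above can be sketched in a few lines. The 80/10/10 ratios and the fixed seed are illustrative choices, not prescriptions; shuffling first avoids ordering artifacts, and the test portion is held out until the final estimate.

```python
import random

# Shuffle once with a fixed seed for reproducibility, then slice off
# disjoint test and validation portions; the remainder is training data.
def three_way_split(items, val_frac=0.1, test_frac=0.1, seed=0):
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = three_way_split(range(100))
print(len(train), len(val), len(test))  # 80 10 10
```

Because the three slices are disjoint, tuning on `val` cannot leak into the final `test` estimate.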
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
Techniques that discourage overly complex solutions to improve generalization (reduce overfitting).
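One concrete instance of the idea above is L2 (ridge) regularization. For a one-feature linear model without intercept the penalized solution has a closed form, w = Σxy / (Σx² + λ); the toy data below is made up to show the shrinkage effect.

```python
# Ridge regression for a single feature: the penalty term lambda is added
# to the denominator, shrinking the fitted weight toward zero as it grows
# (trading a little bias for lower variance).
def ridge_weight(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                 # exactly y = 2x
print(ridge_weight(xs, ys, 0.0))     # 2.0: no regularization recovers the fit
print(ridge_weight(xs, ys, 14.0))    # 1.0: heavy shrinkage pulls w toward 0
```

In practice λ is tuned on validation data rather than chosen by hand.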
When a model cannot capture underlying structure, performing poorly on both training and test data.
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
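The core of the technique above is index generation: partition n examples into k folds so each example is held out exactly once, then average (and inspect the spread of) the per-fold scores.

```python
# k-fold index generation: yields (train_indices, test_indices) pairs.
# Fold sizes differ by at most one when k does not divide n evenly.
def k_fold_indices(n, k):
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n) if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

for train_idx, test_idx in k_fold_indices(10, 5):
    assert len(test_idx) == 2 and len(train_idx) == 8
```

Shuffling indices before folding (or stratifying by label) is usual when example order is not random.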
A parameterized function composed of interconnected units organized in layers with nonlinear activations.
Policies and practices for approving, monitoring, auditing, and documenting models in production.
Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
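The standard conversion from raw outputs to probabilities is the softmax, and temperature scaling is one of the decoding/calibration manipulations the entry mentions: T > 1 flattens the distribution, T < 1 sharpens it. The logit values below are arbitrary.

```python
import math

# Temperature-scaled softmax over a list of logits.
# Subtracting the max before exponentiating avoids overflow.
def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
assert abs(sum(probs) - 1.0) < 1e-9          # a valid distribution
flat = softmax([2.0, 1.0, 0.1], temperature=10.0)
assert max(flat) < max(probs)                # higher T flattens the distribution
```

Calibration methods fit the temperature on held-out data so the resulting probabilities better match observed accuracy.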
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
A narrow hidden layer forcing compact representations.
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
Framework for identifying, measuring, and mitigating model risks.
Probability of treatment assignment given covariates.
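The quantity described above is usually written as:

```latex
e(x) \;=\; \Pr(T = 1 \mid X = x)
```

Inverse-propensity weighting then reweights treated units by 1/e(x) and control units by 1/(1 − e(x)) to balance covariates between groups.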
Artificial sensor data generated in simulation.
AI supporting legal research, drafting, and analysis.
AI predicting crime patterns (highly controversial).